1.4. Storage
1.4. Storage Red Hat Virtualization uses a centralized storage system for virtual disks, templates, snapshots, and ISO files. Storage is logically grouped into storage pools, which consist of storage domains. A storage domain is a combination of storage capacity and metadata that describes the internal structure of the storage. For the available types, see Storage Domain Types. The data domain is the only one required by each data center, and a data storage domain is exclusive to a single data center. Export and ISO domains are optional. Storage domains are shared resources and must be accessible to all hosts in a data center.
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/technical_reference/storage1
Release notes for Red Hat Trusted Application Pipeline 1.4
Release notes for Red Hat Trusted Application Pipeline 1.4 Red Hat Trusted Application Pipeline 1.4 Explore new features in this release and learn about known issues. Red Hat Customer Content Services
https://docs.redhat.com/en/documentation/red_hat_trusted_application_pipeline/1.4/html/release_notes_for_red_hat_trusted_application_pipeline_1.4/index
4.181. mksh
4.181. mksh 4.181.1. RHBA-2011:0923 - mksh bug fix update An updated mksh package that fixes one bug is now available for Red Hat Enterprise Linux 6. The mksh package provides the MirBSD version of the Korn Shell, which implements the ksh-88 programming language for both interactive and shell script use. Bug Fix BZ# 712355 Prior to this update, the mksh package did not specify all requirements for RPM scriptlets. As a result, the requirements were not installed during the post install setup and the scriptlets were not able to work correctly. With this update, the bug has been fixed, and the mksh package now specifies the requirements and installs them as expected. All users of mksh are advised to upgrade to this updated package, which fixes this bug.
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/mksh
Chapter 1. Broker configuration properties
Chapter 1. Broker configuration properties advertised.listeners Type: string Default: null Importance: high Dynamic update: per-broker Listeners to publish to ZooKeeper for clients to use, if different than the listeners config property. In IaaS environments, this may need to be different from the interface to which the broker binds. If this is not set, the value for listeners will be used. Unlike listeners, it is not valid to advertise the 0.0.0.0 meta-address. Also unlike listeners, there can be duplicated ports in this property, so that one listener can be configured to advertise another listener's address. This can be useful in some cases where external load balancers are used. auto.create.topics.enable Type: boolean Default: true Importance: high Dynamic update: read-only Enable auto creation of topic on the server. auto.leader.rebalance.enable Type: boolean Default: true Importance: high Dynamic update: read-only Enables auto leader balancing. A background thread checks the distribution of partition leaders at regular intervals, configurable by leader.imbalance.check.interval.seconds. If the leader imbalance exceeds leader.imbalance.per.broker.percentage, leader rebalance to the preferred leader for partitions is triggered. background.threads Type: int Default: 10 Valid Values: [1,... ] Importance: high Dynamic update: cluster-wide The number of threads to use for various background processing tasks. broker.id Type: int Default: -1 Importance: high Dynamic update: read-only The broker id for this server. If unset, a unique broker id will be generated. To avoid conflicts between ZooKeeper generated broker id's and user configured broker id's, generated broker ids start from reserved.broker.max.id + 1. compression.type Type: string Default: producer Valid Values: [uncompressed, zstd, lz4, snappy, gzip, producer] Importance: high Dynamic update: cluster-wide Specify the final compression type for a given topic. This configuration accepts the standard compression codecs ('gzip', 'snappy', 'lz4', 'zstd'). It additionally accepts 'uncompressed' which is equivalent to no compression; and 'producer' which means retain the original compression codec set by the producer. control.plane.listener.name Type: string Default: null Importance: high Dynamic update: read-only Name of listener used for communication between controller and brokers. A broker will use the control.plane.listener.name to locate the endpoint in listeners list, to listen for connections from the controller. For example, if a broker's config is:
listeners = INTERNAL://192.1.1.8:9092, EXTERNAL://10.1.1.5:9093, CONTROLLER://192.1.1.8:9094
listener.security.protocol.map = INTERNAL:PLAINTEXT, EXTERNAL:SSL, CONTROLLER:SSL
control.plane.listener.name = CONTROLLER
On startup, the broker will start listening on "192.1.1.8:9094" with security protocol "SSL". On the controller side, when it discovers a broker's published endpoints through ZooKeeper, it will use the control.plane.listener.name to find the endpoint, which it will use to establish connection to the broker. For example, if the broker's published endpoints on ZooKeeper are: "endpoints" : ["INTERNAL://broker1.example.com:9092","EXTERNAL://broker1.example.com:9093","CONTROLLER://broker1.example.com:9094"] and the controller's config is:
listener.security.protocol.map = INTERNAL:PLAINTEXT, EXTERNAL:SSL, CONTROLLER:SSL
control.plane.listener.name = CONTROLLER
then the controller will use "broker1.example.com:9094" with security protocol "SSL" to connect to the broker.
If not explicitly configured, the default value will be null and there will be no dedicated endpoints for controller connections. If explicitly configured, the value cannot be the same as the value of inter.broker.listener.name . controller.listener.names Type: string Default: null Importance: high Dynamic update: read-only A comma-separated list of the names of the listeners used by the controller. This is required if running in KRaft mode. When communicating with the controller quorum, the broker will always use the first listener in this list. Note: The ZooKeeper-based controller should not set this configuration. controller.quorum.election.backoff.max.ms Type: int Default: 1000 (1 second) Importance: high Dynamic update: read-only Maximum time in milliseconds before starting new elections. This is used in the binary exponential backoff mechanism that helps prevent gridlocked elections. controller.quorum.election.timeout.ms Type: int Default: 1000 (1 second) Importance: high Dynamic update: read-only Maximum time in milliseconds to wait without being able to fetch from the leader before triggering a new election. controller.quorum.fetch.timeout.ms Type: int Default: 2000 (2 seconds) Importance: high Dynamic update: read-only Maximum time without a successful fetch from the current leader before becoming a candidate and triggering an election for voters; Maximum time a leader can go without receiving valid fetch or fetchSnapshot request from a majority of the quorum before resigning. controller.quorum.voters Type: list Default: "" Valid Values: non-empty list Importance: high Dynamic update: read-only Map of id/endpoint information for the set of voters in a comma-separated list of {id}@{host}:{port} entries. For example: 1@localhost:9092,2@localhost:9093,3@localhost:9094 . delete.topic.enable Type: boolean Default: true Importance: high Dynamic update: read-only Enables delete topic. Delete topic through the admin tool will have no effect if this config is turned off. early.start.listeners Type: string Default: null Importance: high Dynamic update: read-only A comma-separated list of listener names which may be started before the authorizer has finished initialization. This is useful when the authorizer is dependent on the cluster itself for bootstrapping, as is the case for the StandardAuthorizer (which stores ACLs in the metadata log.) By default, all listeners included in controller.listener.names will also be early start listeners. A listener should not appear in this list if it accepts external traffic. eligible.leader.replicas.enable Type: boolean Default: false Importance: high Dynamic update: read-only Enable the Eligible leader replicas. leader.imbalance.check.interval.seconds Type: long Default: 300 Valid Values: [1,... ] Importance: high Dynamic update: read-only The frequency with which the partition rebalance check is triggered by the controller. leader.imbalance.per.broker.percentage Type: int Default: 10 Importance: high Dynamic update: read-only The ratio of leader imbalance allowed per broker. The controller would trigger a leader balance if it goes above this value per broker. The value is specified in percentage. listeners Type: string Default: PLAINTEXT://:9092 Importance: high Dynamic update: per-broker Listener List - Comma-separated list of URIs we will listen on and the listener names. If the listener name is not a security protocol, listener.security.protocol.map must also be set. 
Listener names and port numbers must be unique unless one listener is an IPv4 address and the other listener is an IPv6 address (for the same port). Specify hostname as 0.0.0.0 to bind to all interfaces. Leave hostname empty to bind to default interface. Examples of legal listener lists: PLAINTEXT://myhost:9092,SSL://:9091 CLIENT://0.0.0.0:9092,REPLICATION://localhost:9093 PLAINTEXT://127.0.0.1:9092,SSL://[::1]:9092 log.dir Type: string Default: /tmp/kafka-logs Importance: high Dynamic update: read-only The directory in which the log data is kept (supplemental for log.dirs property). log.dirs Type: string Default: null Importance: high Dynamic update: read-only A comma-separated list of the directories where the log data is stored. If not set, the value in log.dir is used. log.flush.interval.messages Type: long Default: 9223372036854775807 Valid Values: [1,... ] Importance: high Dynamic update: cluster-wide The number of messages accumulated on a log partition before messages are flushed to disk. log.flush.interval.ms Type: long Default: null Importance: high Dynamic update: cluster-wide The maximum time in ms that a message in any topic is kept in memory before flushed to disk. If not set, the value in log.flush.scheduler.interval.ms is used. log.flush.offset.checkpoint.interval.ms Type: int Default: 60000 (1 minute) Valid Values: [0,... ] Importance: high Dynamic update: read-only The frequency with which we update the persistent record of the last flush which acts as the log recovery point. log.flush.scheduler.interval.ms Type: long Default: 9223372036854775807 Importance: high Dynamic update: read-only The frequency in ms that the log flusher checks whether any log needs to be flushed to disk. log.flush.start.offset.checkpoint.interval.ms Type: int Default: 60000 (1 minute) Valid Values: [0,... ] Importance: high Dynamic update: read-only The frequency with which we update the persistent record of log start offset. log.retention.bytes Type: long Default: -1 Importance: high Dynamic update: cluster-wide The maximum size of the log before deleting it. log.retention.hours Type: int Default: 168 Importance: high Dynamic update: read-only The number of hours to keep a log file before deleting it (in hours), tertiary to log.retention.ms property. log.retention.minutes Type: int Default: null Importance: high Dynamic update: read-only The number of minutes to keep a log file before deleting it (in minutes), secondary to log.retention.ms property. If not set, the value in log.retention.hours is used. log.retention.ms Type: long Default: null Importance: high Dynamic update: cluster-wide The number of milliseconds to keep a log file before deleting it (in milliseconds), If not set, the value in log.retention.minutes is used. If set to -1, no time limit is applied. log.roll.hours Type: int Default: 168 Valid Values: [1,... ] Importance: high Dynamic update: read-only The maximum time before a new log segment is rolled out (in hours), secondary to log.roll.ms property. log.roll.jitter.hours Type: int Default: 0 Valid Values: [0,... ] Importance: high Dynamic update: read-only The maximum jitter to subtract from logRollTimeMillis (in hours), secondary to log.roll.jitter.ms property. log.roll.jitter.ms Type: long Default: null Importance: high Dynamic update: cluster-wide The maximum jitter to subtract from logRollTimeMillis (in milliseconds). If not set, the value in log.roll.jitter.hours is used. 
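As an illustration of how the listener and log settings above fit together, the following server.properties fragment is a hedged sketch only; the listener names, host names, directories, and sizes are placeholder assumptions, not recommended values.
# illustrative example; adjust names, addresses, and sizes for your environment
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9093
advertised.listeners=INTERNAL://broker1.example.com:9092,EXTERNAL://203.0.113.10:9093
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:SSL
inter.broker.listener.name=INTERNAL
log.dirs=/var/lib/kafka/data-0,/var/lib/kafka/data-1
log.retention.ms=604800000
log.segment.bytes=1073741824
Because the 0.0.0.0 meta-address cannot be advertised, advertised.listeners uses resolvable addresses while listeners binds to all interfaces.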
log.roll.ms Type: long Default: null Importance: high Dynamic update: cluster-wide The maximum time before a new log segment is rolled out (in milliseconds). If not set, the value in log.roll.hours is used. log.segment.bytes Type: int Default: 1073741824 (1 gibibyte) Valid Values: [14,... ] Importance: high Dynamic update: cluster-wide The maximum size of a single log file. log.segment.delete.delay.ms Type: long Default: 60000 (1 minute) Valid Values: [0,... ] Importance: high Dynamic update: cluster-wide The amount of time to wait before deleting a file from the filesystem. If the value is 0 and there is no file to delete, the system will wait 1 millisecond. Low value will cause busy waiting. message.max.bytes Type: int Default: 1048588 Valid Values: [0,... ] Importance: high Dynamic update: cluster-wide The largest record batch size allowed by Kafka (after compression if compression is enabled). If this is increased and there are consumers older than 0.10.2, the consumers' fetch size must also be increased so that they can fetch record batches this large. In the latest message format version, records are always grouped into batches for efficiency. In message format versions, uncompressed records are not grouped into batches and this limit only applies to a single record in that case.This can be set per topic with the topic level max.message.bytes config. metadata.log.dir Type: string Default: null Importance: high Dynamic update: read-only This configuration determines where we put the metadata log for clusters in KRaft mode. If it is not set, the metadata log is placed in the first log directory from log.dirs. metadata.log.max.record.bytes.between.snapshots Type: long Default: 20971520 Valid Values: [1,... ] Importance: high Dynamic update: read-only This is the maximum number of bytes in the log between the latest snapshot and the high-watermark needed before generating a new snapshot. The default value is 20971520. To generate snapshots based on the time elapsed, see the metadata.log.max.snapshot.interval.ms configuration. The Kafka node will generate a snapshot when either the maximum time interval is reached or the maximum bytes limit is reached. metadata.log.max.snapshot.interval.ms Type: long Default: 3600000 (1 hour) Valid Values: [0,... ] Importance: high Dynamic update: read-only This is the maximum number of milliseconds to wait to generate a snapshot if there are committed records in the log that are not included in the latest snapshot. A value of zero disables time based snapshot generation. The default value is 3600000. To generate snapshots based on the number of metadata bytes, see the metadata.log.max.record.bytes.between.snapshots configuration. The Kafka node will generate a snapshot when either the maximum time interval is reached or the maximum bytes limit is reached. metadata.log.segment.bytes Type: int Default: 1073741824 (1 gibibyte) Valid Values: [12,... ] Importance: high Dynamic update: read-only The maximum size of a single metadata log file. metadata.log.segment.ms Type: long Default: 604800000 (7 days) Importance: high Dynamic update: read-only The maximum time before a new metadata log file is rolled out (in milliseconds). metadata.max.retention.bytes Type: long Default: 104857600 (100 mebibytes) Importance: high Dynamic update: read-only The maximum combined size of the metadata log and snapshots before deleting old snapshots and log files. Since at least one snapshot must exist before any logs can be deleted, this is a soft limit. 
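A minimal sketch of a KRaft-mode broker that raises the record-batch limit and places the metadata log in a dedicated directory; the path and limits are illustrative assumptions, not defaults or recommendations.
# illustrative example; paths and limits are placeholders
message.max.bytes=5242880
metadata.log.dir=/var/lib/kafka/kraft-metadata
metadata.log.max.snapshot.interval.ms=1800000
metadata.log.segment.bytes=1073741824
As noted under message.max.bytes, raising this limit may also require raising the fetch size of consumers older than 0.10.2, and it can be overridden per topic with max.message.bytes.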
metadata.max.retention.ms Type: long Default: 604800000 (7 days) Importance: high Dynamic update: read-only The number of milliseconds to keep a metadata log file or snapshot before deleting it. Since at least one snapshot must exist before any logs can be deleted, this is a soft limit. min.insync.replicas Type: int Default: 1 Valid Values: [1,... ] Importance: high Dynamic update: cluster-wide When a producer sets acks to "all" (or "-1"), min.insync.replicas specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. If this minimum cannot be met, then the producer will raise an exception (either NotEnoughReplicas or NotEnoughReplicasAfterAppend ). When used together, min.insync.replicas and acks allow you to enforce greater durability guarantees. A typical scenario would be to create a topic with a replication factor of 3, set min.insync.replicas to 2, and produce with acks of "all". This will ensure that the producer raises an exception if a majority of replicas do not receive a write. node.id Type: int Default: -1 Importance: high Dynamic update: read-only The node ID associated with the roles this process is playing when process.roles is non-empty. This is required configuration when running in KRaft mode. num.io.threads Type: int Default: 8 Valid Values: [1,... ] Importance: high Dynamic update: cluster-wide The number of threads that the server uses for processing requests, which may include disk I/O. num.network.threads Type: int Default: 3 Valid Values: [1,... ] Importance: high Dynamic update: cluster-wide The number of threads that the server uses for receiving requests from the network and sending responses to the network. Noted: each listener (except for controller listener) creates its own thread pool. num.recovery.threads.per.data.dir Type: int Default: 1 Valid Values: [1,... ] Importance: high Dynamic update: cluster-wide The number of threads per data directory to be used for log recovery at startup and flushing at shutdown. num.replica.alter.log.dirs.threads Type: int Default: null Importance: high Dynamic update: read-only The number of threads that can move replicas between log directories, which may include disk I/O. num.replica.fetchers Type: int Default: 1 Importance: high Dynamic update: cluster-wide Number of fetcher threads used to replicate records from each source broker. The total number of fetchers on each broker is bound by num.replica.fetchers multiplied by the number of brokers in the cluster.Increasing this value can increase the degree of I/O parallelism in the follower and leader broker at the cost of higher CPU and memory utilization. offset.metadata.max.bytes Type: int Default: 4096 (4 kibibytes) Importance: high Dynamic update: read-only The maximum size for a metadata entry associated with an offset commit. offsets.commit.required.acks Type: short Default: -1 Importance: high Dynamic update: read-only DEPRECATED: The required acks before the commit can be accepted. In general, the default (-1) should not be overridden. offsets.commit.timeout.ms Type: int Default: 5000 (5 seconds) Valid Values: [1,... ] Importance: high Dynamic update: read-only Offset commit will be delayed until all replicas for the offsets topic receive the commit or this timeout is reached. This is similar to the producer request timeout. offsets.load.buffer.size Type: int Default: 5242880 Valid Values: [1,... 
] Importance: high Dynamic update: read-only Batch size for reading from the offsets segments when loading offsets into the cache (soft-limit, overridden if records are too large). offsets.retention.check.interval.ms Type: long Default: 600000 (10 minutes) Valid Values: [1,... ] Importance: high Dynamic update: read-only Frequency at which to check for stale offsets. offsets.retention.minutes Type: int Default: 10080 Valid Values: [1,... ] Importance: high Dynamic update: read-only For subscribed consumers, committed offset of a specific partition will be expired and discarded when 1) this retention period has elapsed after the consumer group loses all its consumers (i.e. becomes empty); 2) this retention period has elapsed since the last time an offset is committed for the partition and the group is no longer subscribed to the corresponding topic. For standalone consumers (using manual assignment), offsets will be expired after this retention period has elapsed since the time of last commit. Note that when a group is deleted via the delete-group request, its committed offsets will also be deleted without extra retention period; also when a topic is deleted via the delete-topic request, upon propagated metadata update any group's committed offsets for that topic will also be deleted without extra retention period. offsets.topic.compression.codec Type: int Default: 0 Importance: high Dynamic update: read-only Compression codec for the offsets topic - compression may be used to achieve "atomic" commits. offsets.topic.num.partitions Type: int Default: 50 Valid Values: [1,... ] Importance: high Dynamic update: read-only The number of partitions for the offset commit topic (should not change after deployment). offsets.topic.replication.factor Type: short Default: 3 Valid Values: [1,... ] Importance: high Dynamic update: read-only The replication factor for the offsets topic (set higher to ensure availability). Internal topic creation will fail until the cluster size meets this replication factor requirement. offsets.topic.segment.bytes Type: int Default: 104857600 (100 mebibytes) Valid Values: [1,... ] Importance: high Dynamic update: read-only The offsets topic segment bytes should be kept relatively small in order to facilitate faster log compaction and cache loads. process.roles Type: list Default: "" Valid Values: [broker, controller] Importance: high Dynamic update: read-only The roles that this process plays: 'broker', 'controller', or 'broker,controller' if it is both. This configuration is only applicable for clusters in KRaft (Kafka Raft) mode (instead of ZooKeeper). Leave this config undefined or empty for ZooKeeper clusters. queued.max.requests Type: int Default: 500 Valid Values: [1,... ] Importance: high Dynamic update: read-only The number of queued requests allowed for data-plane, before blocking the network threads. replica.fetch.min.bytes Type: int Default: 1 Importance: high Dynamic update: read-only Minimum bytes expected for each fetch response. If not enough bytes, wait up to replica.fetch.wait.max.ms (broker config). replica.fetch.wait.max.ms Type: int Default: 500 Importance: high Dynamic update: read-only The maximum wait time for each fetcher request issued by follower replicas. This value should always be less than the replica.lag.time.max.ms at all times to prevent frequent shrinking of ISR for low throughput topics. 
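To show how the KRaft-related properties above (process.roles, node.id, controller.quorum.voters, controller.listener.names) combine with the internal-topic replication factors, here is a hedged sketch for a three-node cluster; the host names and IDs are assumptions, and the listeners list would also need a matching CONTROLLER entry.
# illustrative example for a combined broker/controller node
process.roles=broker,controller
node.id=1
controller.quorum.voters=1@broker1.example.com:9094,2@broker2.example.com:9094,3@broker3.example.com:9094
controller.listener.names=CONTROLLER
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=3
For ZooKeeper-based clusters, process.roles is left undefined or empty, as described above.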
replica.high.watermark.checkpoint.interval.ms Type: long Default: 5000 (5 seconds) Importance: high Dynamic update: read-only The frequency with which the high watermark is saved out to disk. replica.lag.time.max.ms Type: long Default: 30000 (30 seconds) Importance: high Dynamic update: read-only If a follower hasn't sent any fetch requests or hasn't consumed up to the leaders log end offset for at least this time, the leader will remove the follower from isr. replica.socket.receive.buffer.bytes Type: int Default: 65536 (64 kibibytes) Importance: high Dynamic update: read-only The socket receive buffer for network requests to the leader for replicating data. replica.socket.timeout.ms Type: int Default: 30000 (30 seconds) Importance: high Dynamic update: read-only The socket timeout for network requests. Its value should be at least replica.fetch.wait.max.ms. request.timeout.ms Type: int Default: 30000 (30 seconds) Importance: high Dynamic update: read-only The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted. sasl.mechanism.controller.protocol Type: string Default: GSSAPI Importance: high Dynamic update: read-only SASL mechanism used for communication with controllers. Default is GSSAPI. socket.receive.buffer.bytes Type: int Default: 102400 (100 kibibytes) Importance: high Dynamic update: read-only The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. socket.request.max.bytes Type: int Default: 104857600 (100 mebibytes) Valid Values: [1,... ] Importance: high Dynamic update: read-only The maximum number of bytes in a socket request. socket.send.buffer.bytes Type: int Default: 102400 (100 kibibytes) Importance: high Dynamic update: read-only The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. transaction.max.timeout.ms Type: int Default: 900000 (15 minutes) Valid Values: [1,... ] Importance: high Dynamic update: read-only The maximum allowed timeout for transactions. If a client's requested transaction time exceed this, then the broker will return an error in InitProducerIdRequest. This prevents a client from too large of a timeout, which can stall consumers reading from topics included in the transaction. transaction.state.log.load.buffer.size Type: int Default: 5242880 Valid Values: [1,... ] Importance: high Dynamic update: read-only Batch size for reading from the transaction log segments when loading producer ids and transactions into the cache (soft-limit, overridden if records are too large). transaction.state.log.min.isr Type: int Default: 2 Valid Values: [1,... ] Importance: high Dynamic update: read-only The minimum number of replicas that must acknowledge a write to transaction topic in order to be considered successful. transaction.state.log.num.partitions Type: int Default: 50 Valid Values: [1,... ] Importance: high Dynamic update: read-only The number of partitions for the transaction topic (should not change after deployment). transaction.state.log.replication.factor Type: short Default: 3 Valid Values: [1,... ] Importance: high Dynamic update: read-only The replication factor for the transaction topic (set higher to ensure availability). Internal topic creation will fail until the cluster size meets this replication factor requirement. 
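The durability scenario described under min.insync.replicas can be expressed directly in server.properties; this sketch simply restates that pattern with illustrative values.
# illustrative example; producers would use acks=all for this to take effect
default.replication.factor=3
min.insync.replicas=2
transaction.state.log.min.isr=2
replica.lag.time.max.ms=30000
replica.fetch.wait.max.ms=500
As advised above, replica.fetch.wait.max.ms is kept below replica.lag.time.max.ms to avoid frequent ISR shrinking on low-throughput topics.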
transaction.state.log.segment.bytes Type: int Default: 104857600 (100 mebibytes) Valid Values: [1,... ] Importance: high Dynamic update: read-only The transaction topic segment bytes should be kept relatively small in order to facilitate faster log compaction and cache loads. transactional.id.expiration.ms Type: int Default: 604800000 (7 days) Valid Values: [1,... ] Importance: high Dynamic update: read-only The time in ms that the transaction coordinator will wait without receiving any transaction status updates for the current transaction before expiring its transactional id. Transactional IDs will not expire while a the transaction is still ongoing. unclean.leader.election.enable Type: boolean Default: false Importance: high Dynamic update: cluster-wide Indicates whether to enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so may result in data loss. zookeeper.connect Type: string Default: null Importance: high Dynamic update: read-only Specifies the ZooKeeper connection string in the form hostname:port where host and port are the host and port of a ZooKeeper server. To allow connecting through other ZooKeeper nodes when that ZooKeeper machine is down you can also specify multiple hosts in the form hostname1:port1,hostname2:port2,hostname3:port3 . The server can also have a ZooKeeper chroot path as part of its ZooKeeper connection string which puts its data under some path in the global ZooKeeper namespace. For example to give a chroot path of /chroot/path you would give the connection string as hostname1:port1,hostname2:port2,hostname3:port3/chroot/path . zookeeper.connection.timeout.ms Type: int Default: null Importance: high Dynamic update: read-only The max time that the client waits to establish a connection to ZooKeeper. If not set, the value in zookeeper.session.timeout.ms is used. zookeeper.max.in.flight.requests Type: int Default: 10 Valid Values: [1,... ] Importance: high Dynamic update: read-only The maximum number of unacknowledged requests the client will send to ZooKeeper before blocking. zookeeper.metadata.migration.enable Type: boolean Default: false Importance: high Dynamic update: read-only Enable ZK to KRaft migration. zookeeper.session.timeout.ms Type: int Default: 18000 (18 seconds) Importance: high Dynamic update: read-only Zookeeper session timeout. zookeeper.set.acl Type: boolean Default: false Importance: high Dynamic update: read-only Set client to use secure ACLs. broker.heartbeat.interval.ms Type: int Default: 2000 (2 seconds) Importance: medium Dynamic update: read-only The length of time in milliseconds between broker heartbeats. Used when running in KRaft mode. broker.id.generation.enable Type: boolean Default: true Importance: medium Dynamic update: read-only Enable automatic broker id generation on the server. When enabled the value configured for reserved.broker.max.id should be reviewed. broker.rack Type: string Default: null Importance: medium Dynamic update: read-only Rack of the broker. This will be used in rack aware replication assignment for fault tolerance. Examples: RACK1 , us-east-1d . broker.session.timeout.ms Type: int Default: 9000 (9 seconds) Importance: medium Dynamic update: read-only The length of time in milliseconds that a broker lease lasts if no heartbeats are made. Used when running in KRaft mode. compression.gzip.level Type: int Default: -1 Valid Values: [1,... ,9] or -1 Importance: medium Dynamic update: cluster-wide The compression level to use if compression.type is set to 'gzip'. 
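For a ZooKeeper-based deployment, the connection string, chroot path, and rack settings described above might look as follows; the host names, chroot path, and rack label are placeholder assumptions.
# illustrative example; replace hosts and chroot with real values
zookeeper.connect=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181/kafka
zookeeper.session.timeout.ms=18000
zookeeper.set.acl=true
broker.rack=us-east-1d
unclean.leader.election.enable=false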
compression.lz4.level Type: int Default: 9 Valid Values: [1,... ,17] Importance: medium Dynamic update: cluster-wide The compression level to use if compression.type is set to 'lz4'. compression.zstd.level Type: int Default: 3 Valid Values: [-131072,... ,22] Importance: medium Dynamic update: cluster-wide The compression level to use if compression.type is set to 'zstd'. connections.max.idle.ms Type: long Default: 600000 (10 minutes) Importance: medium Dynamic update: read-only Idle connections timeout: the server socket processor threads close the connections that idle more than this. connections.max.reauth.ms Type: long Default: 0 Importance: medium Dynamic update: read-only When explicitly set to a positive number (the default is 0, not a positive number), a session lifetime that will not exceed the configured value will be communicated to v2.2.0 or later clients when they authenticate. The broker will disconnect any such connection that is not re-authenticated within the session lifetime and that is then subsequently used for any purpose other than re-authentication. Configuration names can optionally be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.oauthbearer.connections.max.reauth.ms=3600000. controlled.shutdown.enable Type: boolean Default: true Importance: medium Dynamic update: read-only Enable controlled shutdown of the server. controlled.shutdown.max.retries Type: int Default: 3 Importance: medium Dynamic update: read-only Controlled shutdown can fail for multiple reasons. This determines the number of retries when such failure happens. controlled.shutdown.retry.backoff.ms Type: long Default: 5000 (5 seconds) Importance: medium Dynamic update: read-only Before each retry, the system needs time to recover from the state that caused the failure (Controller fail over, replica lag etc). This config determines the amount of time to wait before retrying. controller.quorum.append.linger.ms Type: int Default: 25 Importance: medium Dynamic update: read-only The duration in milliseconds that the leader will wait for writes to accumulate before flushing them to disk. controller.quorum.request.timeout.ms Type: int Default: 2000 (2 seconds) Importance: medium Dynamic update: read-only The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted. controller.socket.timeout.ms Type: int Default: 30000 (30 seconds) Importance: medium Dynamic update: read-only The socket timeout for controller-to-broker channels. default.replication.factor Type: int Default: 1 Importance: medium Dynamic update: read-only The default replication factors for automatically created topics. delegation.token.expiry.time.ms Type: long Default: 86400000 (1 day) Valid Values: [1,... ] Importance: medium Dynamic update: read-only The token validity time in milliseconds before the token needs to be renewed. Default value 1 day. delegation.token.master.key Type: password Default: null Importance: medium Dynamic update: read-only DEPRECATED: An alias for delegation.token.secret.key, which should be used instead of this config. delegation.token.max.lifetime.ms Type: long Default: 604800000 (7 days) Valid Values: [1,... ] Importance: medium Dynamic update: read-only The token has a maximum lifetime beyond which it cannot be renewed anymore. Default value 7 days. 
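A hedged sketch combining the cluster-wide compression settings with connection and shutdown housekeeping; the codec, level, and timeouts are illustrative, not tuned recommendations.
# illustrative example; choose codec and level per workload
compression.type=zstd
compression.zstd.level=3
connections.max.idle.ms=600000
controlled.shutdown.enable=true
controlled.shutdown.max.retries=3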
delegation.token.secret.key Type: password Default: null Importance: medium Dynamic update: read-only Secret key to generate and verify delegation tokens. The same key must be configured across all the brokers. If using Kafka with KRaft, the key must also be set across all controllers. If the key is not set or set to empty string, brokers will disable the delegation token support. delete.records.purgatory.purge.interval.requests Type: int Default: 1 Importance: medium Dynamic update: read-only The purge interval (in number of requests) of the delete records request purgatory. fetch.max.bytes Type: int Default: 57671680 (55 mebibytes) Valid Values: [1024,... ] Importance: medium Dynamic update: read-only The maximum number of bytes we will return for a fetch request. Must be at least 1024. fetch.purgatory.purge.interval.requests Type: int Default: 1000 Importance: medium Dynamic update: read-only The purge interval (in number of requests) of the fetch request purgatory. group.consumer.assignors Type: list Default: org.apache.kafka.coordinator.group.assignor.UniformAssignor,org.apache.kafka.coordinator.group.assignor.RangeAssignor Importance: medium Dynamic update: read-only The server side assignors as a list of full class names. The first one in the list is considered as the default assignor to be used in the case where the consumer does not specify an assignor. group.consumer.heartbeat.interval.ms Type: int Default: 5000 (5 seconds) Valid Values: [1,... ] Importance: medium Dynamic update: read-only The heartbeat interval given to the members of a consumer group. group.consumer.max.heartbeat.interval.ms Type: int Default: 15000 (15 seconds) Valid Values: [1,... ] Importance: medium Dynamic update: read-only The maximum heartbeat interval for registered consumers. group.consumer.max.session.timeout.ms Type: int Default: 60000 (1 minute) Valid Values: [1,... ] Importance: medium Dynamic update: read-only The maximum allowed session timeout for registered consumers. group.consumer.max.size Type: int Default: 2147483647 Valid Values: [1,... ] Importance: medium Dynamic update: read-only The maximum number of consumers that a single consumer group can accommodate. This value will only impact the new consumer coordinator. To configure the classic consumer coordinator check group.max.size instead. group.consumer.migration.policy Type: string Default: disabled Valid Values: (case insensitive) [DISABLED, DOWNGRADE, UPGRADE, BIDIRECTIONAL] Importance: medium Dynamic update: read-only The config that enables converting the non-empty classic group using the consumer embedded protocol to the non-empty consumer group using the consumer group protocol and vice versa; conversions of empty groups in both directions are always enabled regardless of this policy. bidirectional: both upgrade from classic group to consumer group and downgrade from consumer group to classic group are enabled, upgrade: only upgrade from classic group to consumer group is enabled, downgrade: only downgrade from consumer group to classic group is enabled, disabled: neither upgrade nor downgrade is enabled. group.consumer.min.heartbeat.interval.ms Type: int Default: 5000 (5 seconds) Valid Values: [1,... ] Importance: medium Dynamic update: read-only The minimum heartbeat interval for registered consumers. group.consumer.min.session.timeout.ms Type: int Default: 45000 (45 seconds) Valid Values: [1,... ] Importance: medium Dynamic update: read-only The minimum allowed session timeout for registered consumers. 
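To illustrate the consumer group coordinator settings above, the sketch below enables the new consumer rebalance protocol alongside the classic one; since the chapter describes the consumer protocol as early access, treat these values as illustrative only.
# illustrative example; the consumer protocol is early access
group.coordinator.rebalance.protocols=classic,consumer
group.consumer.migration.policy=bidirectional
group.consumer.heartbeat.interval.ms=5000
group.consumer.session.timeout.ms=45000
group.consumer.max.size=10000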
group.consumer.session.timeout.ms Type: int Default: 45000 (45 seconds) Valid Values: [1,... ] Importance: medium Dynamic update: read-only The timeout to detect client failures when using the consumer group protocol. group.coordinator.append.linger.ms Type: int Default: 10 Valid Values: [0,... ] Importance: medium Dynamic update: read-only The duration in milliseconds that the coordinator will wait for writes to accumulate before flushing them to disk. Transactional writes are not accumulated. group.coordinator.rebalance.protocols Type: list Default: classic Valid Values: [consumer, classic, unknown] Importance: medium Dynamic update: read-only The list of enabled rebalance protocols. Supported protocols: consumer,classic,unknown. The consumer rebalance protocol is in early access and therefore must not be used in production. group.coordinator.threads Type: int Default: 1 Valid Values: [1,... ] Importance: medium Dynamic update: read-only The number of threads used by the group coordinator. group.initial.rebalance.delay.ms Type: int Default: 3000 (3 seconds) Importance: medium Dynamic update: read-only The amount of time the group coordinator will wait for more consumers to join a new group before performing the first rebalance. A longer delay means potentially fewer rebalances, but increases the time until processing begins. group.max.session.timeout.ms Type: int Default: 1800000 (30 minutes) Importance: medium Dynamic update: read-only The maximum allowed session timeout for registered consumers. Longer timeouts give consumers more time to process messages in between heartbeats at the cost of a longer time to detect failures. group.max.size Type: int Default: 2147483647 Valid Values: [1,... ] Importance: medium Dynamic update: read-only The maximum number of consumers that a single consumer group can accommodate. group.min.session.timeout.ms Type: int Default: 6000 (6 seconds) Importance: medium Dynamic update: read-only The minimum allowed session timeout for registered consumers. Shorter timeouts result in quicker failure detection at the cost of more frequent consumer heartbeating, which can overwhelm broker resources. initial.broker.registration.timeout.ms Type: int Default: 60000 (1 minute) Importance: medium Dynamic update: read-only When initially registering with the controller quorum, the number of milliseconds to wait before declaring failure and exiting the broker process. inter.broker.listener.name Type: string Default: null Importance: medium Dynamic update: read-only Name of listener used for communication between brokers. If this is unset, the listener name is defined by security.inter.broker.protocolIt is an error to set this and security.inter.broker.protocol properties at the same time. inter.broker.protocol.version Type: string Default: 3.8-IV0 Valid Values: [0.8.0, 0.8.1, 0.8.2, 0.9.0, 0.10.0-IV0, 0.10.0-IV1, 0.10.1-IV0, 0.10.1-IV1, 0.10.1-IV2, 0.10.2-IV0, 0.11.0-IV0, 0.11.0-IV1, 0.11.0-IV2, 1.0-IV0, 1.1-IV0, 2.0-IV0, 2.0-IV1, 2.1-IV0, 2.1-IV1, 2.1-IV2, 2.2-IV0, 2.2-IV1, 2.3-IV0, 2.3-IV1, 2.4-IV0, 2.4-IV1, 2.5-IV0, 2.6-IV0, 2.7-IV0, 2.7-IV1, 2.7-IV2, 2.8-IV0, 2.8-IV1, 3.0-IV0, 3.0-IV1, 3.1-IV0, 3.2-IV0, 3.3-IV0, 3.3-IV1, 3.3-IV2, 3.3-IV3, 3.4-IV0, 3.5-IV0, 3.5-IV1, 3.5-IV2, 3.6-IV0, 3.6-IV1, 3.6-IV2, 3.7-IV0, 3.7-IV1, 3.7-IV2, 3.7-IV3, 3.7-IV4, 3.8-IV0, 3.9-IV0] Importance: medium Dynamic update: read-only Specify which version of the inter-broker protocol will be used. . This is typically bumped after all brokers were upgraded to a new version. 
Example of some valid values are: 0.8.0, 0.8.1, 0.8.1.1, 0.8.2, 0.8.2.0, 0.8.2.1, 0.9.0.0, 0.9.0.1 Check MetadataVersion for the full list. log.cleaner.backoff.ms Type: long Default: 15000 (15 seconds) Valid Values: [0,... ] Importance: medium Dynamic update: cluster-wide The amount of time to sleep when there are no logs to clean. log.cleaner.dedupe.buffer.size Type: long Default: 134217728 Importance: medium Dynamic update: cluster-wide The total memory used for log deduplication across all cleaner threads. log.cleaner.delete.retention.ms Type: long Default: 86400000 (1 day) Valid Values: [0,... ] Importance: medium Dynamic update: cluster-wide The amount of time to retain tombstone message markers for log compacted topics. This setting also gives a bound on the time in which a consumer must complete a read if they begin from offset 0 to ensure that they get a valid snapshot of the final stage (otherwise tombstones messages may be collected before a consumer completes their scan). log.cleaner.enable Type: boolean Default: true Importance: medium Dynamic update: read-only Enable the log cleaner process to run on the server. Should be enabled if using any topics with a cleanup.policy=compact including the internal offsets topic. If disabled those topics will not be compacted and continually grow in size. log.cleaner.io.buffer.load.factor Type: double Default: 0.9 Importance: medium Dynamic update: cluster-wide Log cleaner dedupe buffer load factor. The percentage full the dedupe buffer can become. A higher value will allow more log to be cleaned at once but will lead to more hash collisions. log.cleaner.io.buffer.size Type: int Default: 524288 Valid Values: [0,... ] Importance: medium Dynamic update: cluster-wide The total memory used for log cleaner I/O buffers across all cleaner threads. log.cleaner.io.max.bytes.per.second Type: double Default: 1.7976931348623157E308 Importance: medium Dynamic update: cluster-wide The log cleaner will be throttled so that the sum of its read and write i/o will be less than this value on average. log.cleaner.max.compaction.lag.ms Type: long Default: 9223372036854775807 Valid Values: [1,... ] Importance: medium Dynamic update: cluster-wide The maximum time a message will remain ineligible for compaction in the log. Only applicable for logs that are being compacted. log.cleaner.min.cleanable.ratio Type: double Default: 0.5 Valid Values: [0,... ,1] Importance: medium Dynamic update: cluster-wide The minimum ratio of dirty log to total log for a log to eligible for cleaning. If the log.cleaner.max.compaction.lag.ms or the log.cleaner.min.compaction.lag.ms configurations are also specified, then the log compactor considers the log eligible for compaction as soon as either: (i) the dirty ratio threshold has been met and the log has had dirty (uncompacted) records for at least the log.cleaner.min.compaction.lag.ms duration, or (ii) if the log has had dirty (uncompacted) records for at most the log.cleaner.max.compaction.lag.ms period. log.cleaner.min.compaction.lag.ms Type: long Default: 0 Valid Values: [0,... ] Importance: medium Dynamic update: cluster-wide The minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted. log.cleaner.threads Type: int Default: 1 Valid Values: [0,... ] Importance: medium Dynamic update: cluster-wide The number of background threads to use for log cleaning. 
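The log cleaner settings above are typically tuned together when compacted topics are in use; the following is a hedged sketch with illustrative values.
# illustrative example for compaction-heavy workloads
log.cleaner.enable=true
log.cleaner.threads=2
log.cleaner.min.cleanable.ratio=0.5
log.cleaner.min.compaction.lag.ms=0
log.cleaner.max.compaction.lag.ms=86400000
log.cleaner.delete.retention.ms=86400000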
log.cleanup.policy Type: list Default: delete Valid Values: [compact, delete] Importance: medium Dynamic update: cluster-wide The default cleanup policy for segments beyond the retention window. A comma separated list of valid policies. Valid policies are: "delete" and "compact". log.index.interval.bytes Type: int Default: 4096 (4 kibibytes) Valid Values: [0,... ] Importance: medium Dynamic update: cluster-wide The interval with which we add an entry to the offset index. log.index.size.max.bytes Type: int Default: 10485760 (10 mebibytes) Valid Values: [4,... ] Importance: medium Dynamic update: cluster-wide The maximum size in bytes of the offset index. log.local.retention.bytes Type: long Default: -2 Valid Values: [-2,... ] Importance: medium Dynamic update: cluster-wide The maximum size of local log segments that can grow for a partition before it gets eligible for deletion. Default value is -2, it represents log.retention.bytes value to be used. The effective value should always be less than or equal to log.retention.bytes value. log.local.retention.ms Type: long Default: -2 Valid Values: [-2,... ] Importance: medium Dynamic update: cluster-wide The number of milliseconds to keep the local log segments before it gets eligible for deletion. Default value is -2, it represents log.retention.ms value is to be used. The effective value should always be less than or equal to log.retention.ms value. log.message.format.version Type: string Default: 3.0-IV1 Valid Values: [0.8.0, 0.8.1, 0.8.2, 0.9.0, 0.10.0-IV0, 0.10.0-IV1, 0.10.1-IV0, 0.10.1-IV1, 0.10.1-IV2, 0.10.2-IV0, 0.11.0-IV0, 0.11.0-IV1, 0.11.0-IV2, 1.0-IV0, 1.1-IV0, 2.0-IV0, 2.0-IV1, 2.1-IV0, 2.1-IV1, 2.1-IV2, 2.2-IV0, 2.2-IV1, 2.3-IV0, 2.3-IV1, 2.4-IV0, 2.4-IV1, 2.5-IV0, 2.6-IV0, 2.7-IV0, 2.7-IV1, 2.7-IV2, 2.8-IV0, 2.8-IV1, 3.0-IV0, 3.0-IV1, 3.1-IV0, 3.2-IV0, 3.3-IV0, 3.3-IV1, 3.3-IV2, 3.3-IV3, 3.4-IV0, 3.5-IV0, 3.5-IV1, 3.5-IV2, 3.6-IV0, 3.6-IV1, 3.6-IV2, 3.7-IV0, 3.7-IV1, 3.7-IV2, 3.7-IV3, 3.7-IV4, 3.8-IV0, 3.9-IV0] Importance: medium Dynamic update: read-only Specify the message format version the broker will use to append messages to the logs. The value should be a valid MetadataVersion. Some examples are: 0.8.2, 0.9.0.0, 0.10.0, check MetadataVersion for more details. By setting a particular message format version, the user is certifying that all the existing messages on disk are smaller or equal than the specified version. Setting this value incorrectly will cause consumers with older versions to break as they will receive messages with a format that they don't understand. log.message.timestamp.after.max.ms Type: long Default: 9223372036854775807 Valid Values: [0,... ] Importance: medium Dynamic update: cluster-wide This configuration sets the allowable timestamp difference between the message timestamp and the broker's timestamp. The message timestamp can be later than or equal to the broker's timestamp, with the maximum allowable difference determined by the value set in this configuration. If log.message.timestamp.type=CreateTime, the message will be rejected if the difference in timestamps exceeds this specified threshold. This configuration is ignored if log.message.timestamp.type=LogAppendTime. log.message.timestamp.before.max.ms Type: long Default: 9223372036854775807 Valid Values: [0,... ] Importance: medium Dynamic update: cluster-wide This configuration sets the allowable timestamp difference between the broker's timestamp and the message timestamp. 
The message timestamp can be earlier than or equal to the broker's timestamp, with the maximum allowable difference determined by the value set in this configuration. If log.message.timestamp.type=CreateTime, the message will be rejected if the difference in timestamps exceeds this specified threshold. This configuration is ignored if log.message.timestamp.type=LogAppendTime. log.message.timestamp.difference.max.ms Type: long Default: 9223372036854775807 Valid Values: [0,... ] Importance: medium Dynamic update: cluster-wide [DEPRECATED] The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in the message. If log.message.timestamp.type=CreateTime, a message will be rejected if the difference in timestamp exceeds this threshold. This configuration is ignored if log.message.timestamp.type=LogAppendTime.The maximum timestamp difference allowed should be no greater than log.retention.ms to avoid unnecessarily frequent log rolling. log.message.timestamp.type Type: string Default: CreateTime Valid Values: [CreateTime, LogAppendTime] Importance: medium Dynamic update: cluster-wide Define whether the timestamp in the message is message create time or log append time. The value should be either CreateTime or LogAppendTime . log.preallocate Type: boolean Default: false Importance: medium Dynamic update: cluster-wide Should pre allocate file when create new segment? If you are using Kafka on Windows, you probably need to set it to true. log.retention.check.interval.ms Type: long Default: 300000 (5 minutes) Valid Values: [1,... ] Importance: medium Dynamic update: read-only The frequency in milliseconds that the log cleaner checks whether any log is eligible for deletion. max.connection.creation.rate Type: int Default: 2147483647 Valid Values: [0,... ] Importance: medium Dynamic update: cluster-wide The maximum connection creation rate we allow in the broker at any time. Listener-level limits may also be configured by prefixing the config name with the listener prefix, for example, listener.name.internal.max.connection.creation.rate .Broker-wide connection rate limit should be configured based on broker capacity while listener limits should be configured based on application requirements. New connections will be throttled if either the listener or the broker limit is reached, with the exception of inter-broker listener. Connections on the inter-broker listener will be throttled only when the listener-level rate limit is reached. max.connections Type: int Default: 2147483647 Valid Values: [0,... ] Importance: medium Dynamic update: cluster-wide The maximum number of connections we allow in the broker at any time. This limit is applied in addition to any per-ip limits configured using max.connections.per.ip. Listener-level limits may also be configured by prefixing the config name with the listener prefix, for example, listener.name.internal.max.connections.per.ip . Broker-wide limit should be configured based on broker capacity while listener limits should be configured based on application requirements. New connections are blocked if either the listener or broker limit is reached. Connections on the inter-broker listener are permitted even if broker-wide limit is reached. The least recently used connection on another listener will be closed in this case. max.connections.per.ip Type: int Default: 2147483647 Valid Values: [0,... ] Importance: medium Dynamic update: cluster-wide The maximum number of connections we allow from each ip address. 
This can be set to 0 if there are overrides configured using max.connections.per.ip.overrides property. New connections from the ip address are dropped if the limit is reached. max.connections.per.ip.overrides Type: string Default: "" Importance: medium Dynamic update: cluster-wide A comma-separated list of per-ip or hostname overrides to the default maximum number of connections. An example value is "hostName:100,127.0.0.1:200". max.incremental.fetch.session.cache.slots Type: int Default: 1000 Valid Values: [0,... ] Importance: medium Dynamic update: read-only The maximum number of total incremental fetch sessions that we will maintain. FetchSessionCache is sharded into 8 shards and the limit is equally divided among all shards. Sessions are allocated to each shard in round-robin. Only entries within a shard are considered eligible for eviction. max.request.partition.size.limit Type: int Default: 2000 Valid Values: [1,... ] Importance: medium Dynamic update: read-only The maximum number of partitions can be served in one request. num.partitions Type: int Default: 1 Valid Values: [1,... ] Importance: medium Dynamic update: read-only The default number of log partitions per topic. password.encoder.old.secret Type: password Default: null Importance: medium Dynamic update: read-only The old secret that was used for encoding dynamically configured passwords. This is required only when the secret is updated. If specified, all dynamically encoded passwords are decoded using this old secret and re-encoded using password.encoder.secret when broker starts up. password.encoder.secret Type: password Default: null Importance: medium Dynamic update: read-only The secret used for encoding dynamically configured passwords for this broker. principal.builder.class Type: class Default: org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder Importance: medium Dynamic update: per-broker The fully qualified name of a class that implements the KafkaPrincipalBuilder interface, which is used to build the KafkaPrincipal object used during authorization. If no principal builder is defined, the default behavior depends on the security protocol in use. For SSL authentication, the principal will be derived using the rules defined by ssl.principal.mapping.rules applied on the distinguished name from the client certificate if one is provided; otherwise, if client authentication is not required, the principal name will be ANONYMOUS. For SASL authentication, the principal will be derived using the rules defined by sasl.kerberos.principal.to.local.rules if GSSAPI is in use, and the SASL authentication ID for other mechanisms. For PLAINTEXT, the principal will be ANONYMOUS. producer.purgatory.purge.interval.requests Type: int Default: 1000 Importance: medium Dynamic update: read-only The purge interval (in number of requests) of the producer request purgatory. queued.max.request.bytes Type: long Default: -1 Importance: medium Dynamic update: read-only The number of queued bytes allowed before no more requests are read. remote.fetch.max.wait.ms Type: int Default: 500 Valid Values: [1,... ] Importance: medium Dynamic update: cluster-wide The maximum amount of time the server will wait before answering the remote fetch request. remote.log.manager.copy.max.bytes.per.second Type: long Default: 9223372036854775807 Valid Values: [1,... ] Importance: medium Dynamic update: cluster-wide The maximum number of bytes that can be copied from local storage to remote storage per second. 
This is a global limit for all the partitions that are being copied from local storage to remote storage. The default value is Long.MAX_VALUE, which means there is no limit on the number of bytes that can be copied per second. remote.log.manager.copy.quota.window.num Type: int Default: 11 Valid Values: [1,... ] Importance: medium Dynamic update: read-only The number of samples to retain in memory for remote copy quota management. The default value is 11, which means there are 10 whole windows + 1 current window. remote.log.manager.copy.quota.window.size.seconds Type: int Default: 1 Valid Values: [1,... ] Importance: medium Dynamic update: read-only The time span of each sample for remote copy quota management. The default value is 1 second. remote.log.manager.fetch.max.bytes.per.second Type: long Default: 9223372036854775807 Valid Values: [1,... ] Importance: medium Dynamic update: cluster-wide The maximum number of bytes that can be fetched from remote storage to local storage per second. This is a global limit for all the partitions that are being fetched from remote storage to local storage. The default value is Long.MAX_VALUE, which means there is no limit on the number of bytes that can be fetched per second. remote.log.manager.fetch.quota.window.num Type: int Default: 11 Valid Values: [1,... ] Importance: medium Dynamic update: read-only The number of samples to retain in memory for remote fetch quota management. The default value is 11, which means there are 10 whole windows + 1 current window. remote.log.manager.fetch.quota.window.size.seconds Type: int Default: 1 Valid Values: [1,... ] Importance: medium Dynamic update: read-only The time span of each sample for remote fetch quota management. The default value is 1 second. remote.log.manager.thread.pool.size Type: int Default: 10 Valid Values: [1,... ] Importance: medium Dynamic update: read-only Size of the thread pool used in scheduling tasks to copy segments, fetch remote log indexes and clean up remote log segments. remote.log.metadata.manager.class.name Type: string Default: org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager Valid Values: non-empty string Importance: medium Dynamic update: read-only Fully qualified class name of RemoteLogMetadataManager implementation. remote.log.metadata.manager.class.path Type: string Default: null Importance: medium Dynamic update: read-only Class path of the RemoteLogMetadataManager implementation. If specified, the RemoteLogMetadataManager implementation and its dependent libraries will be loaded by a dedicated classloader which searches this class path before the Kafka broker class path. The syntax of this parameter is same as the standard Java class path string. remote.log.metadata.manager.impl.prefix Type: string Default: rlmm.config. Valid Values: non-empty string Importance: medium Dynamic update: read-only Prefix used for properties to be passed to RemoteLogMetadataManager implementation. For example this value can be rlmm.config. . remote.log.metadata.manager.listener.name Type: string Default: null Valid Values: non-empty string Importance: medium Dynamic update: read-only Listener name of the local broker to which it should get connected if needed by RemoteLogMetadataManager implementation. remote.log.reader.max.pending.tasks Type: int Default: 100 Valid Values: [1,... ] Importance: medium Dynamic update: read-only Maximum remote log reader thread pool task queue size. If the task queue is full, fetch requests are served with an error. 
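As a sketch of wiring up the tiered storage properties above: remote.log.storage.manager.class.name (listed just below) must name a RemoteStorageManager implementation. The plugin class and the rsm.config.-prefixed property here are hypothetical placeholders for whatever implementation is actually deployed.
# illustrative example; com.example.tier.S3RemoteStorageManager and rsm.config.bucket.name are hypothetical
remote.log.storage.system.enable=true
remote.log.storage.manager.class.name=com.example.tier.S3RemoteStorageManager
remote.log.metadata.manager.listener.name=INTERNAL
remote.log.manager.thread.pool.size=10
rsm.config.bucket.name=kafka-tiered-segments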
remote.log.reader.threads Type: int Default: 10 Valid Values: [1,... ] Importance: medium Dynamic update: read-only Size of the thread pool that is allocated for handling remote log reads. remote.log.storage.manager.class.name Type: string Default: null Valid Values: non-empty string Importance: medium Dynamic update: read-only Fully qualified class name of RemoteStorageManager implementation. remote.log.storage.manager.class.path Type: string Default: null Importance: medium Dynamic update: read-only Class path of the RemoteStorageManager implementation. If specified, the RemoteStorageManager implementation and its dependent libraries will be loaded by a dedicated classloader which searches this class path before the Kafka broker class path. The syntax of this parameter is same as the standard Java class path string. remote.log.storage.manager.impl.prefix Type: string Default: rsm.config. Valid Values: non-empty string Importance: medium Dynamic update: read-only Prefix used for properties to be passed to RemoteStorageManager implementation. For example this value can be rsm.config. . remote.log.storage.system.enable Type: boolean Default: false Importance: medium Dynamic update: read-only Whether to enable tiered storage functionality in a broker or not. Valid values are true or false and the default value is false. When it is true broker starts all the services required for the tiered storage functionality. replica.fetch.backoff.ms Type: int Default: 1000 (1 second) Valid Values: [0,... ] Importance: medium Dynamic update: read-only The amount of time to sleep when fetch partition error occurs. replica.fetch.max.bytes Type: int Default: 1048576 (1 mebibyte) Valid Values: [0,... ] Importance: medium Dynamic update: read-only The number of bytes of messages to attempt to fetch for each partition. This is not an absolute maximum, if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made. The maximum record batch size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). replica.fetch.response.max.bytes Type: int Default: 10485760 (10 mebibytes) Valid Values: [0,... ] Importance: medium Dynamic update: read-only Maximum bytes expected for the entire fetch response. Records are fetched in batches, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made. As such, this is not an absolute maximum. The maximum record batch size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). replica.selector.class Type: string Default: null Importance: medium Dynamic update: read-only The fully qualified class name that implements ReplicaSelector. This is used by the broker to find the preferred read replica. By default, we use an implementation that returns the leader. reserved.broker.max.id Type: int Default: 1000 Valid Values: [0,... ] Importance: medium Dynamic update: read-only Max number that can be used for a broker.id. sasl.client.callback.handler.class Type: class Default: null Importance: medium Dynamic update: read-only The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface. 
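To show how the remote.log.storage.* and remote.log.metadata.* properties above fit together, the following server.properties sketch enables tiered storage. The class name com.example.MyRemoteStorageManager, the plugin path, and the rsm.config./rlmm.config. keys are placeholder assumptions standing in for whatever remote storage plugin you deploy; only TopicBasedRemoteLogMetadataManager is the documented default, and the listener name must match one defined on the broker.

# Enable the tiered storage subsystem on this broker
remote.log.storage.system.enable=true

# Plugin that copies segments to remote storage (placeholder class and path)
remote.log.storage.manager.class.name=com.example.MyRemoteStorageManager
remote.log.storage.manager.class.path=/opt/kafka/plugins/remote-storage/*

# Default topic-based metadata manager shipped with Kafka
remote.log.metadata.manager.class.name=org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
remote.log.metadata.manager.listener.name=PLAINTEXT

# Properties passed through to the plugins via their documented prefixes (illustrative keys)
rsm.config.bucket.name=my-kafka-tier
rlmm.config.remote.log.metadata.topic.replication.factor=3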
sasl.enabled.mechanisms Type: list Default: GSSAPI Importance: medium Dynamic update: per-broker The list of SASL mechanisms enabled in the Kafka server. The list may contain any mechanism for which a security provider is available. Only GSSAPI is enabled by default. sasl.jaas.config Type: password Default: null Importance: medium Dynamic update: per-broker JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described here . The format for the value is: loginModuleClass controlFlag (optionName=optionValue)*; . For brokers, the config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;. sasl.kerberos.kinit.cmd Type: string Default: /usr/bin/kinit Importance: medium Dynamic update: per-broker Kerberos kinit command path. sasl.kerberos.min.time.before.relogin Type: long Default: 60000 Importance: medium Dynamic update: per-broker Login thread sleep time between refresh attempts. sasl.kerberos.principal.to.local.rules Type: list Default: DEFAULT Importance: medium Dynamic update: per-broker A list of rules for mapping from principal names to short names (typically operating system usernames). The rules are evaluated in order and the first rule that matches a principal name is used to map it to a short name. Any later rules in the list are ignored. By default, principal names of the form {username}/{hostname}@{REALM} are mapped to {username} . For more details on the format please see security authorization and acls . Note that this configuration is ignored if an extension of KafkaPrincipalBuilder is provided by the principal.builder.class configuration. sasl.kerberos.service.name Type: string Default: null Importance: medium Dynamic update: per-broker The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config. sasl.kerberos.ticket.renew.jitter Type: double Default: 0.05 Importance: medium Dynamic update: per-broker Percentage of random jitter added to the renewal time. sasl.kerberos.ticket.renew.window.factor Type: double Default: 0.8 Importance: medium Dynamic update: per-broker Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket. sasl.login.callback.handler.class Type: class Default: null Importance: medium Dynamic update: read-only The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler. sasl.login.class Type: class Default: null Importance: medium Dynamic update: read-only The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin. sasl.login.refresh.buffer.seconds Type: short Default: 300 Importance: medium Dynamic update: per-broker The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. 
If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER. sasl.login.refresh.min.period.seconds Type: short Default: 60 Importance: medium Dynamic update: per-broker The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER. sasl.login.refresh.window.factor Type: double Default: 0.8 Importance: medium Dynamic update: per-broker Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER. sasl.login.refresh.window.jitter Type: double Default: 0.05 Importance: medium Dynamic update: per-broker The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER. sasl.mechanism.inter.broker.protocol Type: string Default: GSSAPI Importance: medium Dynamic update: per-broker SASL mechanism used for inter-broker communication. Default is GSSAPI. sasl.oauthbearer.jwks.endpoint.url Type: string Default: null Importance: medium Dynamic update: read-only The OAuth/OIDC provider URL from which the provider's JWKS (JSON Web Key Set) can be retrieved. The URL can be HTTP(S)-based or file-based. If the URL is HTTP(S)-based, the JWKS data will be retrieved from the OAuth/OIDC provider via the configured URL on broker startup. All then-current keys will be cached on the broker for incoming requests. If an authentication request is received for a JWT that includes a "kid" header claim value that isn't yet in the cache, the JWKS endpoint will be queried again on demand. However, the broker polls the URL every sasl.oauthbearer.jwks.endpoint.refresh.ms milliseconds to refresh the cache with any forthcoming keys before any JWT requests that include them are received. If the URL is file-based, the broker will load the JWKS file from a configured location on startup. In the event that the JWT includes a "kid" header value that isn't in the JWKS file, the broker will reject the JWT and authentication will fail. sasl.oauthbearer.token.endpoint.url Type: string Default: null Importance: medium Dynamic update: read-only The URL for the OAuth/OIDC identity provider. If the URL is HTTP(S)-based, it is the issuer's token endpoint URL to which requests will be made to login based on the configuration in sasl.jaas.config. If the URL is file-based, it specifies a file containing an access token (in JWT serialized form) issued by the OAuth/OIDC identity provider to use for authorization. 
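The sketch below applies the listener-prefix convention described for sasl.jaas.config, using Kafka's ScramLoginModule class; the SASL_SSL listener name, the user name, and the password are placeholder values, and the credentials must already exist as SCRAM credentials in the cluster.

# Enable the mechanism on the broker and use it for inter-broker traffic
sasl.enabled.mechanisms=SCRAM-SHA-256
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256

# JAAS configuration, prefixed with the listener name and mechanism in lower case
listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="broker-admin" \
  password="broker-admin-secret";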
sasl.server.callback.handler.class Type: class Default: null Importance: medium Dynamic update: read-only The fully qualified name of a SASL server callback handler class that implements the AuthenticateCallbackHandler interface. Server callback handlers must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.plain.sasl.server.callback.handler.class=com.example.CustomPlainCallbackHandler. sasl.server.max.receive.size Type: int Default: 524288 Importance: medium Dynamic update: read-only The maximum receive size allowed before and during initial SASL authentication. Default receive size is 512KB. GSSAPI limits requests to 64K, but we allow upto 512KB by default for custom SASL mechanisms. In practice, PLAIN, SCRAM and OAUTH mechanisms can use much smaller limits. security.inter.broker.protocol Type: string Default: PLAINTEXT Valid Values: [PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL] Importance: medium Dynamic update: read-only Security protocol used to communicate between brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. It is an error to set this and inter.broker.listener.name properties at the same time. socket.connection.setup.timeout.max.ms Type: long Default: 30000 (30 seconds) Importance: medium Dynamic update: read-only The maximum amount of time the client will wait for the socket connection to be established. The connection setup timeout will increase exponentially for each consecutive connection failure up to this maximum. To avoid connection storms, a randomization factor of 0.2 will be applied to the timeout resulting in a random range between 20% below and 20% above the computed value. socket.connection.setup.timeout.ms Type: long Default: 10000 (10 seconds) Importance: medium Dynamic update: read-only The amount of time the client will wait for the socket connection to be established. If the connection is not built before the timeout elapses, clients will close the socket channel. This value is the initial backoff value and will increase exponentially for each consecutive connection failure, up to the socket.connection.setup.timeout.max.ms value. socket.listen.backlog.size Type: int Default: 50 Valid Values: [1,... ] Importance: medium Dynamic update: read-only The maximum number of pending connections on the socket. In Linux, you may also need to configure somaxconn and tcp_max_syn_backlog kernel parameters accordingly to make the configuration takes effect. ssl.cipher.suites Type: list Default: "" Importance: medium Dynamic update: per-broker A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported. ssl.client.auth Type: string Default: none Valid Values: [required, requested, none] Importance: medium Dynamic update: per-broker Configures kafka broker to request client authentication. The following settings are common: ssl.client.auth=required If set to required client authentication is required. ssl.client.auth=requested This means client authentication is optional. unlike required, if this option is set client can choose not to provide authentication information about itself ssl.client.auth=none This means client authentication is not needed. 
ssl.enabled.protocols Type: list Default: TLSv1.2,TLSv1.3 Importance: medium Dynamic update: per-broker The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fallback to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for ssl.protocol . ssl.key.password Type: password Default: null Importance: medium Dynamic update: per-broker The password of the private key in the key store file or the PEM key specified in 'ssl.keystore.key'. ssl.keymanager.algorithm Type: string Default: SunX509 Importance: medium Dynamic update: per-broker The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine. ssl.keystore.certificate.chain Type: password Default: null Importance: medium Dynamic update: per-broker Certificate chain in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with a list of X.509 certificates. ssl.keystore.key Type: password Default: null Importance: medium Dynamic update: per-broker Private key in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with PKCS#8 keys. If the key is encrypted, key password must be specified using 'ssl.key.password'. ssl.keystore.location Type: string Default: null Importance: medium Dynamic update: per-broker The location of the key store file. This is optional for client and can be used for two-way authentication for client. ssl.keystore.password Type: password Default: null Importance: medium Dynamic update: per-broker The store password for the key store file. This is optional for client and only needed if 'ssl.keystore.location' is configured. Key store password is not supported for PEM format. ssl.keystore.type Type: string Default: JKS Importance: medium Dynamic update: per-broker The file format of the key store file. This is optional for client. The values currently supported by the default ssl.engine.factory.class are [JKS, PKCS12, PEM]. ssl.protocol Type: string Default: TLSv1.3 Importance: medium Dynamic update: per-broker The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' if the server does not support 'TLSv1.3'. If this config is set to 'TLSv1.2', clients will not use 'TLSv1.3' even if it is one of the values in ssl.enabled.protocols and the server only supports 'TLSv1.3'. ssl.provider Type: string Default: null Importance: medium Dynamic update: per-broker The name of the security provider used for SSL connections. Default value is the default security provider of the JVM. ssl.trustmanager.algorithm Type: string Default: PKIX Importance: medium Dynamic update: per-broker The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine. 
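The per-broker SSL settings above are typically combined as in the following minimal JKS-based sketch; the file paths and passwords are placeholders, and ssl.client.auth=required turns on mutual TLS for clients.

# Key and trust material for TLS listeners (paths and passwords are placeholders)
ssl.keystore.location=/var/private/ssl/broker.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
ssl.truststore.location=/var/private/ssl/broker.truststore.jks
ssl.truststore.password=changeit

# Require client certificates and restrict the enabled protocols
ssl.client.auth=required
ssl.enabled.protocols=TLSv1.3,TLSv1.2
ssl.protocol=TLSv1.3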
ssl.truststore.certificates Type: password Default: null Importance: medium Dynamic update: per-broker Trusted certificates in the format specified by 'ssl.truststore.type'. Default SSL engine factory supports only PEM format with X.509 certificates. ssl.truststore.location Type: string Default: null Importance: medium Dynamic update: per-broker The location of the trust store file. ssl.truststore.password Type: password Default: null Importance: medium Dynamic update: per-broker The password for the trust store file. If a password is not set, trust store file configured will still be used, but integrity checking is disabled. Trust store password is not supported for PEM format. ssl.truststore.type Type: string Default: JKS Importance: medium Dynamic update: per-broker The file format of the trust store file. The values currently supported by the default ssl.engine.factory.class are [JKS, PKCS12, PEM]. zookeeper.clientCnxnSocket Type: string Default: null Importance: medium Dynamic update: read-only Typically set to org.apache.zookeeper.ClientCnxnSocketNetty when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the same-named zookeeper.clientCnxnSocket system property. zookeeper.ssl.client.enable Type: boolean Default: false Importance: medium Dynamic update: read-only Set client to use TLS when connecting to ZooKeeper. An explicit value overrides any value set via the zookeeper.client.secure system property (note the different name). Defaults to false if neither is set; when true, zookeeper.clientCnxnSocket must be set (typically to org.apache.zookeeper.ClientCnxnSocketNetty ); other values to set may include zookeeper.ssl.cipher.suites , zookeeper.ssl.crl.enable , zookeeper.ssl.enabled.protocols , zookeeper.ssl.endpoint.identification.algorithm , zookeeper.ssl.keystore.location , zookeeper.ssl.keystore.password , zookeeper.ssl.keystore.type , zookeeper.ssl.ocsp.enable , zookeeper.ssl.protocol , zookeeper.ssl.truststore.location , zookeeper.ssl.truststore.password , zookeeper.ssl.truststore.type . zookeeper.ssl.keystore.location Type: string Default: null Importance: medium Dynamic update: read-only Keystore location when using a client-side certificate with TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.keyStore.location system property (note the camelCase). zookeeper.ssl.keystore.password Type: password Default: null Importance: medium Dynamic update: read-only Keystore password when using a client-side certificate with TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.keyStore.password system property (note the camelCase). Note that ZooKeeper does not support a key password different from the keystore password, so be sure to set the key password in the keystore to be identical to the keystore password; otherwise the connection attempt to Zookeeper will fail. zookeeper.ssl.keystore.type Type: string Default: null Importance: medium Dynamic update: read-only Keystore type when using a client-side certificate with TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.keyStore.type system property (note the camelCase). The default value of null means the type will be auto-detected based on the filename extension of the keystore. zookeeper.ssl.truststore.location Type: string Default: null Importance: medium Dynamic update: read-only Truststore location when using TLS connectivity to ZooKeeper. 
Overrides any explicit value set via the zookeeper.ssl.trustStore.location system property (note the camelCase). zookeeper.ssl.truststore.password Type: password Default: null Importance: medium Dynamic update: read-only Truststore password when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.trustStore.password system property (note the camelCase). zookeeper.ssl.truststore.type Type: string Default: null Importance: medium Dynamic update: read-only Truststore type when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.trustStore.type system property (note the camelCase). The default value of null means the type will be auto-detected based on the filename extension of the truststore. alter.config.policy.class.name Type: class Default: null Importance: low Dynamic update: read-only The alter configs policy class that should be used for validation. The class should implement the org.apache.kafka.server.policy.AlterConfigPolicy interface. alter.log.dirs.replication.quota.window.num Type: int Default: 11 Valid Values: [1,... ] Importance: low Dynamic update: read-only The number of samples to retain in memory for alter log dirs replication quotas. alter.log.dirs.replication.quota.window.size.seconds Type: int Default: 1 Valid Values: [1,... ] Importance: low Dynamic update: read-only The time span of each sample for alter log dirs replication quotas. authorizer.class.name Type: string Default: "" Valid Values: non-null string Importance: low Dynamic update: read-only The fully qualified name of a class that implements org.apache.kafka.server.authorizer.Authorizer interface, which is used by the broker for authorization. auto.include.jmx.reporter Type: boolean Default: true Importance: low Dynamic update: read-only Deprecated. Whether to automatically include JmxReporter even if it's not listed in metric.reporters . This configuration will be removed in Kafka 4.0, users should instead include org.apache.kafka.common.metrics.JmxReporter in metric.reporters in order to enable the JmxReporter. client.quota.callback.class Type: class Default: null Importance: low Dynamic update: read-only The fully qualified name of a class that implements the ClientQuotaCallback interface, which is used to determine quota limits applied to client requests. By default, the <user> and <client-id> quotas that are stored in ZooKeeper are applied. For any given request, the most specific quota that matches the user principal of the session and the client-id of the request is applied. connection.failed.authentication.delay.ms Type: int Default: 100 Valid Values: [0,... ] Importance: low Dynamic update: read-only Connection close delay on failed authentication: this is the time (in milliseconds) by which connection close will be delayed on authentication failure. This must be configured to be less than connections.max.idle.ms to prevent connection timeout. controller.quorum.retry.backoff.ms Type: int Default: 20 Importance: low Dynamic update: read-only The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios. This value is the initial backoff value and will increase exponentially for each failed request, up to the retry.backoff.max.ms value. controller.quota.window.num Type: int Default: 11 Valid Values: [1,... 
] Importance: low Dynamic update: read-only The number of samples to retain in memory for controller mutation quotas. controller.quota.window.size.seconds Type: int Default: 1 Valid Values: [1,... ] Importance: low Dynamic update: read-only The time span of each sample for controller mutation quotas. create.topic.policy.class.name Type: class Default: null Importance: low Dynamic update: read-only The create topic policy class that should be used for validation. The class should implement the org.apache.kafka.server.policy.CreateTopicPolicy interface. delegation.token.expiry.check.interval.ms Type: long Default: 3600000 (1 hour) Valid Values: [1,... ] Importance: low Dynamic update: read-only Scan interval to remove expired delegation tokens. kafka.metrics.polling.interval.secs Type: int Default: 10 Valid Values: [1,... ] Importance: low Dynamic update: read-only The metrics polling interval (in seconds) which can be used in kafka.metrics.reporters implementations. kafka.metrics.reporters Type: list Default: "" Importance: low Dynamic update: read-only A list of classes to use as Yammer metrics custom reporters. The reporters should implement the kafka.metrics.KafkaMetricsReporter trait. If a client wants to expose JMX operations on a custom reporter, the custom reporter needs to additionally implement an MBean trait that extends the kafka.metrics.KafkaMetricsReporterMBean trait so that the registered MBean is compliant with the standard MBean convention. listener.security.protocol.map Type: string Default: SASL_SSL:SASL_SSL,PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT Importance: low Dynamic update: per-broker Map between listener names and security protocols. This must be defined for the same security protocol to be usable in more than one port or IP. For example, internal and external traffic can be separated even if SSL is required for both. Concretely, the user could define listeners with names INTERNAL and EXTERNAL and this property as: INTERNAL:SSL,EXTERNAL:SSL . As shown, key and value are separated by a colon and map entries are separated by commas. Each listener name should only appear once in the map. Different security (SSL and SASL) settings can be configured for each listener by adding a normalised prefix (the listener name is lowercased) to the config name. For example, to set a different keystore for the INTERNAL listener, a config with name listener.name.internal.ssl.keystore.location would be set. If the config for the listener name is not set, the config will fall back to the generic config (i.e. ssl.keystore.location ). Note that in KRaft a default mapping from the listener names defined by controller.listener.names to PLAINTEXT is assumed if no explicit mapping is provided and no other security protocol is in use. log.dir.failure.timeout.ms Type: long Default: 30000 (30 seconds) Valid Values: [1,... ] Importance: low Dynamic update: read-only If the broker is unable to successfully communicate to the controller that some log directory has failed for longer than this time, the broker will fail and shut down. log.message.downconversion.enable Type: boolean Default: true Importance: low Dynamic update: cluster-wide This configuration controls whether down-conversion of message formats is enabled to satisfy consume requests. When set to false , the broker will not perform down-conversion for consumers expecting an older message format. The broker responds with UNSUPPORTED_VERSION error for consume requests from such older clients.
This configuration does not apply to any message format conversion that might be required for replication to followers. metadata.max.idle.interval.ms Type: int Default: 500 Valid Values: [0,... ] Importance: low Dynamic update: read-only This configuration controls how often the active controller should write no-op records to the metadata partition. If the value is 0, no-op records are not appended to the metadata partition. The default value is 500. metric.reporters Type: list Default: "" Importance: low Dynamic update: cluster-wide A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics. metrics.num.samples Type: int Default: 2 Valid Values: [1,... ] Importance: low Dynamic update: read-only The number of samples maintained to compute metrics. metrics.recording.level Type: string Default: INFO Importance: low Dynamic update: read-only The highest recording level for metrics. metrics.sample.window.ms Type: long Default: 30000 (30 seconds) Valid Values: [1,... ] Importance: low Dynamic update: read-only The window of time a metrics sample is computed over. password.encoder.cipher.algorithm Type: string Default: AES/CBC/PKCS5Padding Importance: low Dynamic update: read-only The Cipher algorithm used for encoding dynamically configured passwords. password.encoder.iterations Type: int Default: 4096 Valid Values: [1024,... ] Importance: low Dynamic update: read-only The iteration count used for encoding dynamically configured passwords. password.encoder.key.length Type: int Default: 128 Valid Values: [8,... ] Importance: low Dynamic update: read-only The key length used for encoding dynamically configured passwords. password.encoder.keyfactory.algorithm Type: string Default: null Importance: low Dynamic update: read-only The SecretKeyFactory algorithm used for encoding dynamically configured passwords. Default is PBKDF2WithHmacSHA512 if available and PBKDF2WithHmacSHA1 otherwise. producer.id.expiration.ms Type: int Default: 86400000 (1 day) Valid Values: [1,... ] Importance: low Dynamic update: cluster-wide The time in ms that a topic partition leader will wait before expiring producer IDs. Producer IDs will not expire while a transaction associated with them is still ongoing. Note that producer IDs may expire sooner if the last write from the producer ID is deleted due to the topic's retention settings. Setting this value to the same as or higher than delivery.timeout.ms can help prevent expiration during retries and protect against message duplication, but the default should be reasonable for most use cases. quota.window.num Type: int Default: 11 Valid Values: [1,... ] Importance: low Dynamic update: read-only The number of samples to retain in memory for client quotas. quota.window.size.seconds Type: int Default: 1 Valid Values: [1,... ] Importance: low Dynamic update: read-only The time span of each sample for client quotas. remote.log.index.file.cache.total.size.bytes Type: long Default: 1073741824 (1 gibibyte) Valid Values: [1,... ] Importance: low Dynamic update: cluster-wide The total size of the space allocated to store index files fetched from remote storage in the local storage. remote.log.manager.task.interval.ms Type: long Default: 30000 (30 seconds) Valid Values: [1,...
] Importance: low Dynamic update: read-only Interval at which remote log manager runs the scheduled tasks like copy segments, and clean up remote log segments. remote.log.metadata.custom.metadata.max.bytes Type: int Default: 128 Valid Values: [0,... ] Importance: low Dynamic update: read-only The maximum size of custom metadata in bytes that the broker should accept from a remote storage plugin. If custom metadata exceeds this limit, the updated segment metadata will not be stored, the copied data will be attempted to delete, and the remote copying task for this topic-partition will stop with an error. replication.quota.window.num Type: int Default: 11 Valid Values: [1,... ] Importance: low Dynamic update: read-only The number of samples to retain in memory for replication quotas. replication.quota.window.size.seconds Type: int Default: 1 Valid Values: [1,... ] Importance: low Dynamic update: read-only The time span of each sample for replication quotas. sasl.login.connect.timeout.ms Type: int Default: null Importance: low Dynamic update: read-only The (optional) value in milliseconds for the external authentication provider connection timeout. Currently applies only to OAUTHBEARER. sasl.login.read.timeout.ms Type: int Default: null Importance: low Dynamic update: read-only The (optional) value in milliseconds for the external authentication provider read timeout. Currently applies only to OAUTHBEARER. sasl.login.retry.backoff.max.ms Type: long Default: 10000 (10 seconds) Importance: low Dynamic update: read-only The (optional) value in milliseconds for the maximum wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER. sasl.login.retry.backoff.ms Type: long Default: 100 Importance: low Dynamic update: read-only The (optional) value in milliseconds for the initial wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER. sasl.oauthbearer.clock.skew.seconds Type: int Default: 30 Importance: low Dynamic update: read-only The (optional) value in seconds to allow for differences between the time of the OAuth/OIDC identity provider and the broker. sasl.oauthbearer.expected.audience Type: list Default: null Importance: low Dynamic update: read-only The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences. The JWT will be inspected for the standard OAuth "aud" claim and if this value is set, the broker will match the value from JWT's "aud" claim to see if there is an exact match. If there is no match, the broker will reject the JWT and authentication will fail. sasl.oauthbearer.expected.issuer Type: string Default: null Importance: low Dynamic update: read-only The (optional) setting for the broker to use to verify that the JWT was created by the expected issuer. The JWT will be inspected for the standard OAuth "iss" claim and if this value is set, the broker will match it exactly against what is in the JWT's "iss" claim. 
If there is no match, the broker will reject the JWT and authentication will fail. sasl.oauthbearer.jwks.endpoint.refresh.ms Type: long Default: 3600000 (1 hour) Importance: low Dynamic update: read-only The (optional) value in milliseconds for the broker to wait between refreshing its JWKS (JSON Web Key Set) cache that contains the keys to verify the signature of the JWT. sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms Type: long Default: 10000 (10 seconds) Importance: low Dynamic update: read-only The (optional) value in milliseconds for the maximum wait between attempts to retrieve the JWKS (JSON Web Key Set) from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting. sasl.oauthbearer.jwks.endpoint.retry.backoff.ms Type: long Default: 100 Importance: low Dynamic update: read-only The (optional) value in milliseconds for the initial wait between JWKS (JSON Web Key Set) retrieval attempts from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting. sasl.oauthbearer.scope.claim.name Type: string Default: scope Importance: low Dynamic update: read-only The OAuth claim for the scope is often named "scope", but this (optional) setting can provide a different name to use for the scope included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim. sasl.oauthbearer.sub.claim.name Type: string Default: sub Importance: low Dynamic update: read-only The OAuth claim for the subject is often named "sub", but this (optional) setting can provide a different name to use for the subject included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim. security.providers Type: string Default: null Importance: low Dynamic update: read-only A list of configurable creator classes each returning a provider implementing security algorithms. These classes should implement the org.apache.kafka.common.security.auth.SecurityProviderCreator interface. ssl.allow.dn.changes Type: boolean Default: false Importance: low Dynamic update: read-only Indicates whether changes to the certificate distinguished name should be allowed during a dynamic reconfiguration of certificates or not. ssl.allow.san.changes Type: boolean Default: false Importance: low Dynamic update: read-only Indicates whether changes to the certificate subject alternative names should be allowed during a dynamic reconfiguration of certificates or not. ssl.endpoint.identification.algorithm Type: string Default: https Importance: low Dynamic update: per-broker The endpoint identification algorithm to validate server hostname using server certificate. ssl.engine.factory.class Type: class Default: null Importance: low Dynamic update: per-broker The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory. 
Alternatively, setting this to org.apache.kafka.common.security.ssl.CommonNameLoggingSslEngineFactory will log the common name of expired SSL certificates used by clients to authenticate at any of the brokers with log level INFO. Note that this will cause a tiny delay during establishment of new connections from mTLS clients to brokers due to the extra code for examining the certificate chain provided by the client. Note further that the implementation uses a custom truststore based on the standard Java truststore and thus might be considered a security risk due to not being as mature as the standard one. ssl.principal.mapping.rules Type: string Default: DEFAULT Importance: low Dynamic update: read-only A list of rules for mapping from distinguished name from the client certificate to short name. The rules are evaluated in order and the first rule that matches a principal name is used to map it to a short name. Any later rules in the list are ignored. By default, distinguished name of the X.500 certificate will be the principal. For more details on the format please see security authorization and acls . Note that this configuration is ignored if an extension of KafkaPrincipalBuilder is provided by the principal.builder.class configuration. ssl.secure.random.implementation Type: string Default: null Importance: low Dynamic update: per-broker The SecureRandom PRNG implementation to use for SSL cryptography operations. telemetry.max.bytes Type: int Default: 1048576 (1 mebibyte) Valid Values: [1,... ] Importance: low Dynamic update: read-only The maximum size (after compression if compression is used) of telemetry metrics pushed from a client to the broker. The default value is 1048576 (1 MB). transaction.abort.timed.out.transaction.cleanup.interval.ms Type: int Default: 10000 (10 seconds) Valid Values: [1,... ] Importance: low Dynamic update: read-only The interval at which to rollback transactions that have timed out. transaction.partition.verification.enable Type: boolean Default: true Importance: low Dynamic update: cluster-wide Enable verification that checks that the partition has been added to the transaction before writing transactional records to the partition. transaction.remove.expired.transaction.cleanup.interval.ms Type: int Default: 3600000 (1 hour) Valid Values: [1,... ] Importance: low Dynamic update: read-only The interval at which to remove transactions that have expired due to transactional.id.expiration.ms passing. zookeeper.ssl.cipher.suites Type: list Default: null Importance: low Dynamic update: read-only Specifies the enabled cipher suites to be used in ZooKeeper TLS negotiation (csv). Overrides any explicit value set via the zookeeper.ssl.ciphersuites system property (note the single word "ciphersuites"). The default value of null means the list of enabled cipher suites is determined by the Java runtime being used. zookeeper.ssl.crl.enable Type: boolean Default: false Importance: low Dynamic update: read-only Specifies whether to enable Certificate Revocation List in the ZooKeeper TLS protocols. Overrides any explicit value set via the zookeeper.ssl.crl system property (note the shorter name). zookeeper.ssl.enabled.protocols Type: list Default: null Importance: low Dynamic update: read-only Specifies the enabled protocol(s) in ZooKeeper TLS negotiation (csv). Overrides any explicit value set via the zookeeper.ssl.enabledProtocols system property (note the camelCase). 
The default value of null means the enabled protocol will be the value of the zookeeper.ssl.protocol configuration property. zookeeper.ssl.endpoint.identification.algorithm Type: string Default: HTTPS Importance: low Dynamic update: read-only Specifies whether to enable hostname verification in the ZooKeeper TLS negotiation process, with (case-insensitively) "https" meaning ZooKeeper hostname verification is enabled and an explicit blank value meaning it is disabled (disabling it is only recommended for testing purposes). An explicit value overrides any "true" or "false" value set via the zookeeper.ssl.hostnameVerification system property (note the different name and values; true implies https and false implies blank). zookeeper.ssl.ocsp.enable Type: boolean Default: false Importance: low Dynamic update: read-only Specifies whether to enable Online Certificate Status Protocol in the ZooKeeper TLS protocols. Overrides any explicit value set via the zookeeper.ssl.ocsp system property (note the shorter name). zookeeper.ssl.protocol Type: string Default: TLSv1.2 Importance: low Dynamic update: read-only Specifies the protocol to be used in ZooKeeper TLS negotiation. An explicit value overrides any value set via the same-named zookeeper.ssl.protocol system property.
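Pulling the zookeeper.ssl.* properties above together, a broker that connects to a TLS-enabled ZooKeeper ensemble typically sets something like the following sketch; the truststore path and password are placeholders.

# Use the Netty client socket so TLS to ZooKeeper is possible
zookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
zookeeper.ssl.client.enable=true

# Trust the certificate authority that signed the ZooKeeper server certificates
zookeeper.ssl.truststore.location=/var/private/ssl/zookeeper.truststore.jks
zookeeper.ssl.truststore.password=changeit

# Optional hardening
zookeeper.ssl.protocol=TLSv1.2
zookeeper.ssl.endpoint.identification.algorithm=HTTPS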
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/kafka_configuration_properties/broker-configuration-properties-str
Chapter 2. Exporting and downloading a manifest
Chapter 2. Exporting and downloading a manifest Export a manifest from the Hybrid Cloud Console to manage the subscriptions that it contains from within your Satellite instance. Exporting and downloading a new copy of the manifest is required whenever you add or remove subscriptions from the manifest. Prerequisites You are logged in to the Red Hat Hybrid Cloud Console. You are connected to a Red Hat Satellite Server. You have Red Hat Satellite 6 or later. You have the Subscriptions administrator role in the role-based access control (RBAC) system for the Red Hat Hybrid Cloud Console. Procedure To export and download a manifest, complete the following steps: From the Hybrid Cloud Console home page, click Services > Subscriptions and Spend > Manifests . From the Manifests page, click the name of the manifest that you want to export. Click Export manifest . Important Do not refresh the page while the manifest is exporting. Refreshing the page will disrupt the process. When the manifest is exported successfully, a notification window opens. From the notification window, click Download manifest . Note The application encodes the selected subscription certificates and creates a compressed file in .zip format. This file is the subscription manifest that you can upload into Satellite Server. The file saves to your default downloads folder. After you download the manifest, you can import it into your Satellite Server. You can then use the Satellite web UI to update the manifest and refresh it to reflect the changes. Alternatively, you can import an updated manifest that contains the changes. Additional resources For more information about importing your manifest into Satellite, see Importing a Subscription Manifest into Satellite Server in the Red Hat Satellite Content Management Guide . For more information about using Satellite to manage your Red Hat subscriptions, see Managing Subscriptions .
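If you prefer the command line to the Satellite web UI for the import and refresh steps, a hammer invocation along the following lines can be used. Treat this as a sketch rather than a definitive reference: the organization name and file path are placeholders, and the exact options can vary between Satellite versions.

# Upload the downloaded manifest .zip into a Satellite organization (illustrative values)
hammer subscription upload \
  --organization "Example Organization" \
  --file ~/Downloads/manifest_<manifest-name>.zip

# Refresh the manifest later to pick up subscription changes made in the Hybrid Cloud Console
hammer subscription refresh-manifest --organization "Example Organization"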
null
https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/creating_and_managing_manifests_for_a_connected_satellite_server/proc-exporting-downloading-manifest-satellite-connected
probe::netfilter.bridge.local_out
probe::netfilter.bridge.local_out Name probe::netfilter.bridge.local_out - Called on a bridging packet coming from a local process Synopsis netfilter.bridge.local_out Values indev Address of net_device representing input device, 0 if unknown br_msg Message age in 1/256 secs nf_drop Constant used to signify a 'drop' verdict llcproto_stp Constant used to signify Bridge Spanning Tree Protocol packet pf Protocol family -- always " bridge " br_fd Forward delay in 1/256 secs nf_queue Constant used to signify a 'queue' verdict brhdr Address of bridge header br_mac Bridge MAC address br_flags BPDU flags outdev_name Name of network device packet will be routed to (if known) nf_accept Constant used to signify an 'accept' verdict br_rid Identity of root bridge nf_stop Constant used to signify a 'stop' verdict br_type BPDU type br_max Max age in 1/256 secs protocol Packet protocol br_htime Hello time in 1/256 secs br_bid Identity of bridge br_rmac Root bridge MAC address br_prid Protocol identifier llcpdu Address of LLC Protocol Data Unit length The length of the packet buffer contents, in bytes nf_stolen Constant used to signify a 'stolen' verdict br_cost Total cost from transmitting bridge to root br_vid Protocol version identifier indev_name Name of network device packet was received on (if known) br_poid Port identifier outdev Address of net_device representing output device, 0 if unknown nf_repeat Constant used to signify a 'repeat' verdict
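For illustration only, a short SystemTap script that uses this probe point and a few of the values listed above might look like the following; it prints one line for each bridged packet that originates from a local process.

# bridge-local-out.stp -- run with: stap bridge-local-out.stp
probe netfilter.bridge.local_out {
  # indev_name and outdev_name may be empty when the device is unknown
  printf("local_out: in=%s out=%s proto=0x%x len=%d\n",
         indev_name, outdev_name, protocol, length)
}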
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-netfilter-bridge-local-out
10.6. Cluster Service Will Not Start
10.6. Cluster Service Will Not Start If a cluster-controlled service will not start, check for the following conditions. There may be a syntax error in the service configuration in the cluster.conf file. You can use the rg_test command to validate the syntax of your configuration. If there are any configuration or syntax faults, rg_test reports what the problem is. For more information on the rg_test command, see Section C.5, "Debugging and Testing Services and Resource Ordering" . If the configuration is valid, increase the resource group manager's log level and then read the messages logs to determine what is causing the service start to fail. You can increase the log level by adding the loglevel="7" parameter to the rm tag in the cluster.conf file. You will then get increased verbosity in your messages logs regarding starting, stopping, and migrating clustered services.
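As a sketch of the change described above, the rm tag in /etc/cluster/cluster.conf would look similar to the following; the parameter name is taken from this section, so verify the exact attribute name against your Red Hat Enterprise Linux 6 documentation, and the service name is a placeholder. Remember to increment the config_version attribute on the cluster tag and propagate the updated configuration after editing.

<!-- /etc/cluster/cluster.conf: raise resource group manager verbosity -->
<rm loglevel="7">
    <service name="servicename" autostart="1">
        <!-- service resources here -->
    </service>
</rm>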
[ "rg_test test /etc/cluster/cluster.conf start service servicename" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-clustservicenostart-CA
Chapter 1. OpenShift Container Platform security and compliance
Chapter 1. OpenShift Container Platform security and compliance 1.1. Security overview It is important to understand how to properly secure various aspects of your OpenShift Container Platform cluster. Container security A good starting point to understanding OpenShift Container Platform security is to review the concepts in Understanding container security . This and subsequent sections provide a high-level walkthrough of the container security measures available in OpenShift Container Platform, including solutions for the host layer, the container and orchestration layer, and the build and application layer. These sections also include information on the following topics: Why container security is important and how it compares with existing security standards. Which container security measures are provided by the host (RHCOS and RHEL) layer and which are provided by OpenShift Container Platform. How to evaluate your container content and sources for vulnerabilities. How to design your build and deployment process to proactively check container content. How to control access to containers through authentication and authorization. How networking and attached storage are secured in OpenShift Container Platform. Containerized solutions for API management and SSO. Auditing OpenShift Container Platform auditing provides a security-relevant chronological set of records documenting the sequence of activities that have affected the system by individual users, administrators, or other components of the system. Administrators can configure the audit log policy and view audit logs . Certificates Certificates are used by various components to validate access to the cluster. Administrators can replace the default ingress certificate , add API server certificates , or add a service certificate . You can also review more details about the types of certificates used by the cluster: User-provided certificates for the API server Proxy certificates Service CA certificates Node certificates Bootstrap certificates etcd certificates OLM certificates Aggregated API client certificates Machine Config Operator certificates User-provided certificates for default ingress Ingress certificates Monitoring and cluster logging Operator component certificates Control plane certificates Encrypting data You can enable etcd encryption for your cluster to provide an additional layer of data security. For example, it can help protect the loss of sensitive data if an etcd backup is exposed to the incorrect parties. Vulnerability scanning Administrators can use the Red Hat Quay Container Security Operator to run vulnerability scans and review information about detected vulnerabilities. 1.2. Compliance overview For many OpenShift Container Platform customers, regulatory readiness, or compliance, on some level is required before any systems can be put into production. That regulatory readiness can be imposed by national standards, industry standards, or the organization's corporate governance framework. Compliance checking Administrators can use the Compliance Operator to run compliance scans and recommend remediations for any issues found. The oc-compliance plugin is an OpenShift CLI ( oc ) plugin that provides a set of utilities to easily interact with the Compliance Operator. File integrity checking Administrators can use the File Integrity Operator to continually run file integrity checks on cluster nodes and provide a log of files that have been modified. 1.3. 
Additional resources Understanding authentication Configuring the internal OAuth server Understanding identity provider configuration Using RBAC to define and apply permissions Managing security context constraints
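As a brief, hedged illustration of the etcd encryption capability mentioned above: encryption is enabled by setting an encryption type on the cluster-scoped APIServer resource. The aescbc type is shown as an example; consult the etcd encryption documentation linked above for the types supported by your release, and note that the patch file name is a placeholder.

# Applied with: oc patch apiserver cluster --type merge -p "$(cat encrypt.yaml)"
spec:
  encryption:
    type: aescbc   # Secrets, ConfigMaps, and similar resources are encrypted before being written to etcd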
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/security_and_compliance/security-compliance-overview
Chapter 17. Configuring logging by using RHEL system roles
Chapter 17. Configuring logging by using RHEL system roles You can use the logging RHEL system role to configure your local and remote hosts as logging servers in an automated fashion to collect logs from many client systems. Logging solutions provide multiple ways of reading logs and multiple logging outputs. For example, a logging system can receive the following inputs: Local files systemd/journal Another logging system over the network In addition, a logging system can have the following outputs: Logs stored in the local files in the /var/log/ directory Logs sent to Elasticsearch engine Logs forwarded to another logging system With the logging RHEL system role, you can combine the inputs and outputs to fit your scenario. For example, you can configure a logging solution that stores inputs from journal in a local file, whereas inputs read from files are both forwarded to another logging system and stored in the local log files. 17.1. Filtering local log messages by using the logging RHEL system role You can use the property-based filter of the logging RHEL system role to filter your local log messages based on various conditions. As a result, you can achieve for example: Log clarity: In a high-traffic environment, logs can grow rapidly. The focus on specific messages, like errors, can help to identify problems faster. Optimized system performance: Excessive amount of logs is usually connected with system performance degradation. Selective logging for only the important events can prevent resource depletion, which enables your systems to run more efficiently. Enhanced security: Efficient filtering through security messages, like system errors and failed logins, helps to capture only the relevant logs. This is important for detecting breaches and meeting compliance standards. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Deploy the logging solution hosts: managed-node-01.example.com tasks: - name: Filter logs based on a specific value they contain ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: files_input type: basics logging_outputs: - name: files_output0 type: files property: msg property_op: contains property_value: error path: /var/log/errors.log - name: files_output1 type: files property: msg property_op: "!contains" property_value: error path: /var/log/others.log logging_flows: - name: flow0 inputs: [files_input] outputs: [files_output0, files_output1] The settings specified in the example playbook include the following: logging_inputs Defines a list of logging input dictionaries. The type: basics option covers inputs from systemd journal or Unix socket. logging_outputs Defines a list of logging output dictionaries. The type: files option supports storing logs in the local files, usually in the /var/log/ directory. The property: msg ; property: contains ; and property_value: error options specify that all logs that contain the error string are stored in the /var/log/errors.log file. The property: msg ; property: !contains ; and property_value: error options specify that all other logs are put in the /var/log/others.log file. You can replace the error value with the string by which you want to filter. 
logging_flows Defines a list of logging flow dictionaries to specify relationships between logging_inputs and logging_outputs . The inputs: [files_input] option specifies a list of inputs, from which processing of logs starts. The outputs: [files_output0, files_output1] option specifies a list of outputs, to which the logs are sent. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.logging/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification On the managed node, test the syntax of the /etc/rsyslog.conf file: On the managed node, verify that the system sends messages that contain the error string to the log: Send a test message: View the /var/log/errors.log log, for example: Where hostname is the host name of the client system. Note that the log contains the user name of the user that entered the logger command, in this case root . Additional resources /usr/share/ansible/roles/rhel-system-roles.logging/README.md file /usr/share/doc/rhel-system-roles/logging/ directory rsyslog.conf(5) and syslog(3) man pages on your system 17.2. Applying a remote logging solution by using the logging RHEL system role You can use the logging RHEL system role to configure a remote logging solution, where one or more clients take logs from the systemd-journal service and forward them to a remote server. The server receives remote input from the remote_rsyslog and remote_files configurations, and outputs the logs to local files in directories named by remote host names. As a result, you can cover use cases where you need for example: Centralized log management: Collecting, accessing, and managing log messages of multiple machines from a single storage point simplifies day-to-day monitoring and troubleshooting tasks. Also, this use case reduces the need to log into individual machines to check the log messages. Enhanced security: Storing log messages in one central place increases chances they are in a secure and tamper-proof environment. Such an environment makes it easier to detect and respond to security incidents more effectively and to meet audit requirements. Improved efficiency in log analysis: Correlating log messages from multiple systems is important for fast troubleshooting of complex problems that span multiple machines or services. That way you can quickly analyze and cross-reference events from different sources. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Define the ports in the SELinux policy of the server or client system and open the firewall for those ports. The default SELinux policy includes ports 601, 514, 6514, 10514, and 20514. To use a different port, see modify the SELinux policy on the client and server systems . 
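A sketch of the port preparation that the prerequisite above describes, using port 601 as in the example playbooks; run these commands on both the server and the client systems, adjusting the port and protocol to your environment.

# The default SELinux policy already allows 601; label a custom port like this instead
semanage port -a -t syslogd_port_t -p tcp 30514

# Open the port used in the example playbooks (601 for both TCP and UDP)
firewall-cmd --permanent --add-port=601/tcp --add-port=601/udp
firewall-cmd --reload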
Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Deploy the logging solution hosts: managed-node-01.example.com tasks: - name: Configure the server to receive remote input ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: remote_udp_input type: remote udp_ports: [ 601 ] - name: remote_tcp_input type: remote tcp_ports: [ 601 ] logging_outputs: - name: remote_files_output type: remote_files logging_flows: - name: flow_0 inputs: [remote_udp_input, remote_tcp_input] outputs: [remote_files_output] - name: Deploy the logging solution hosts: managed-node-02.example.com tasks: - name: Configure the server to output the logs to local files in directories named by remote host names ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: basic_input type: basics logging_outputs: - name: forward_output0 type: forwards severity: info target: <host1.example.com> udp_port: 601 - name: forward_output1 type: forwards facility: mail target: <host1.example.com> tcp_port: 601 logging_flows: - name: flows0 inputs: [basic_input] outputs: [forward_output0, forward_output1] [basic_input] [forward_output0, forward_output1] The settings specified in the first play of the example playbook include the following: logging_inputs Defines a list of logging input dictionaries. The type: remote option covers remote inputs from the other logging system over the network. The udp_ports: [ 601 ] option defines a list of UDP port numbers to monitor. The tcp_ports: [ 601 ] option defines a list of TCP port numbers to monitor. If both udp_ports and tcp_ports is set, udp_ports is used and tcp_ports is dropped. logging_outputs Defines a list of logging output dictionaries. The type: remote_files option makes output store logs to the local files per remote host and program name originated the logs. logging_flows Defines a list of logging flow dictionaries to specify relationships between logging_inputs and logging_outputs . The inputs: [remote_udp_input, remote_tcp_input] option specifies a list of inputs, from which processing of logs starts. The outputs: [remote_files_output] option specifies a list of outputs, to which the logs are sent. The settings specified in the second play of the example playbook include the following: logging_inputs Defines a list of logging input dictionaries. The type: basics option covers inputs from systemd journal or Unix socket. logging_outputs Defines a list of logging output dictionaries. The type: forwards option supports sending logs to the remote logging server over the network. The severity: info option refers to log messages of the informative importance. The facility: mail option refers to the type of system program that is generating the log message. The target: <host1.example.com> option specifies the hostname of the remote logging server. The udp_port: 601 / tcp_port: 601 options define the UDP/TCP ports on which the remote logging server listens. logging_flows Defines a list of logging flow dictionaries to specify relationships between logging_inputs and logging_outputs . The inputs: [basic_input] option specifies a list of inputs, from which processing of logs starts. The outputs: [forward_output0, forward_output1] option specifies a list of outputs, to which the logs are sent. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.logging/README.md file on the control node. 
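As an optional, illustrative check after you run the playbook later in this procedure, you can confirm on the server (managed-node-01.example.com) that rsyslog listens on the configured port for both protocols; the exact output depends on your system:
# Verify that rsyslogd listens on TCP and UDP port 601
ss -tulnp | grep -w 601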
Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification On both the client and the server system, test the syntax of the /etc/rsyslog.conf file: Verify that the client system sends messages to the server: On the client system, send a test message: On the server system, view the /var/log/ <host2.example.com> /messages log, for example: Where <host2.example.com> is the host name of the client system. Note that the log contains the user name of the user that entered the logger command, in this case root . Additional resources /usr/share/ansible/roles/rhel-system-roles.logging/README.md file /usr/share/doc/rhel-system-roles/logging/ directory rsyslog.conf(5) and syslog(3) manual pages 17.3. Using the logging RHEL system role with TLS Transport Layer Security (TLS) is a cryptographic protocol designed to allow secure communication over a computer network. You can use the logging RHEL system role to configure a secure transfer of log messages, where one or more clients take logs from the systemd-journal service and transfer them to a remote server while using TLS. Typically, TLS for transferring logs in a remote logging solution is used when sending sensitive data over less trusted or public networks, such as the Internet. Also, by using certificates in TLS you can ensure that the client is forwarding logs to the correct and trusted server. This helps prevent man-in-the-middle attacks. 17.3.1. Configuring client logging with TLS You can use the logging RHEL system role to configure logging on RHEL clients and transfer logs to a remote logging system using TLS encryption. This procedure creates a private key and a certificate. Next, it configures TLS on all hosts in the clients group in the Ansible inventory. The TLS protocol encrypts the message transmission for secure transfer of logs over the network. Note You do not have to call the certificate RHEL system role in the playbook to create the certificate. The logging RHEL system role calls it automatically when the logging_certificates variable is set. In order for the CA to be able to sign the created certificate, the managed nodes must be enrolled in an IdM domain. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. The managed nodes are enrolled in an IdM domain.
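Because certificate issuance relies on IdM, it can help to confirm the enrollment on each managed node before you run the playbook. The following is only an illustrative check and assumes the nodes were enrolled with ipa-client-install or realm join:
# List the domains the host is joined to; the IdM domain should be shown as configured
realm list
# Confirm that a host keytab for the IdM realm exists
klist -k /etc/krb5.keytab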
Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure remote logging solution using TLS for secure transfer of logs hosts: managed-node-01.example.com tasks: - name: Deploying files input and forwards output with certs ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_certificates: - name: logging_cert dns: ['localhost', 'www.example.com'] ca: ipa logging_pki_files: - ca_cert: /local/path/to/ca_cert.pem cert: /local/path/to/logging_cert.pem private_key: /local/path/to/logging_cert.pem logging_inputs: - name: input_name type: files input_log_path: /var/log/containers/*.log logging_outputs: - name: output_name type: forwards target: your_target_host tcp_port: 514 tls: true pki_authmode: x509/name permitted_server: 'server.example.com' logging_flows: - name: flow_name inputs: [input_name] outputs: [output_name] The settings specified in the example playbook include the following: logging_certificates The value of this parameter is passed on to certificate_requests in the certificate RHEL system role and used to create a private key and certificate. logging_pki_files Using this parameter, you can configure the paths and other settings that logging uses to find the CA, certificate, and key files used for TLS, specified with one or more of the following sub-parameters: ca_cert , ca_cert_src , cert , cert_src , private_key , private_key_src , and tls . Note If you are using logging_certificates to create the files on the managed node, do not use ca_cert_src , cert_src , and private_key_src , which are used to copy files not created by logging_certificates . ca_cert Represents the path to the CA certificate file on the managed node. Default path is /etc/pki/tls/certs/ca.pem and the file name is set by the user. cert Represents the path to the certificate file on the managed node. Default path is /etc/pki/tls/certs/server-cert.pem and the file name is set by the user. private_key Represents the path to the private key file on the managed node. Default path is /etc/pki/tls/private/server-key.pem and the file name is set by the user. ca_cert_src Represents the path to the CA certificate file on the control node which is copied to the target host to the location specified by ca_cert . Do not use this if using logging_certificates . cert_src Represents the path to a certificate file on the control node which is copied to the target host to the location specified by cert . Do not use this if using logging_certificates . private_key_src Represents the path to a private key file on the control node which is copied to the target host to the location specified by private_key . Do not use this if using logging_certificates . tls Setting this parameter to true ensures secure transfer of logs over the network. If you do not want a secure wrapper, you can set tls: false . For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.logging/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.logging/README.md file /usr/share/doc/rhel-system-roles/logging/ directory /usr/share/ansible/roles/rhel-system-roles.certificate/README.md file /usr/share/doc/rhel-system-roles/certificate/ directory Requesting certificates using RHEL system roles . 
rsyslog.conf(5) and syslog(3) manual pages 17.3.2. Configuring server logging with TLS You can use the logging RHEL system role to configure logging on RHEL servers and set them to receive logs from a remote logging system using TLS encryption. This procedure creates a private key and a certificate. , it configures TLS on all hosts in the server group in the Ansible inventory. Note You do not have to call the certificate RHEL system role in the playbook to create the certificate. The logging RHEL system role calls it automatically. In order for the CA to be able to sign the created certificate, the managed nodes must be enrolled in an IdM domain. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. The managed nodes are enrolled in an IdM domain. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure remote logging solution using TLS for secure transfer of logs hosts: managed-node-01.example.com tasks: - name: Deploying remote input and remote_files output with certs ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_certificates: - name: logging_cert dns: ['localhost', 'www.example.com'] ca: ipa logging_pki_files: - ca_cert: /local/path/to/ca_cert.pem cert: /local/path/to/logging_cert.pem private_key: /local/path/to/logging_cert.pem logging_inputs: - name: input_name type: remote tcp_ports: 514 tls: true permitted_clients: ['clients.example.com'] logging_outputs: - name: output_name type: remote_files remote_log_path: /var/log/remote/%FROMHOST%/%PROGRAMNAME:::secpath-replace%.log async_writing: true client_count: 20 io_buffer_size: 8192 logging_flows: - name: flow_name inputs: [input_name] outputs: [output_name] The settings specified in the example playbook include the following: logging_certificates The value of this parameter is passed on to certificate_requests in the certificate RHEL system role and used to create a private key and certificate. logging_pki_files Using this parameter, you can configure the paths and other settings that logging uses to find the CA, certificate, and key files used for TLS, specified with one or more of the following sub-parameters: ca_cert , ca_cert_src , cert , cert_src , private_key , private_key_src , and tls . Note If you are using logging_certificates to create the files on the managed node, do not use ca_cert_src , cert_src , and private_key_src , which are used to copy files not created by logging_certificates . ca_cert Represents the path to the CA certificate file on the managed node. Default path is /etc/pki/tls/certs/ca.pem and the file name is set by the user. cert Represents the path to the certificate file on the managed node. Default path is /etc/pki/tls/certs/server-cert.pem and the file name is set by the user. private_key Represents the path to the private key file on the managed node. Default path is /etc/pki/tls/private/server-key.pem and the file name is set by the user. ca_cert_src Represents the path to the CA certificate file on the control node which is copied to the target host to the location specified by ca_cert . Do not use this if using logging_certificates . cert_src Represents the path to a certificate file on the control node which is copied to the target host to the location specified by cert . Do not use this if using logging_certificates . 
private_key_src Represents the path to a private key file on the control node which is copied to the target host to the location specified by private_key . Do not use this if using logging_certificates . tls Setting this parameter to true ensures secure transfer of logs over the network. If you do not want a secure wrapper, you can set tls: false . For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.logging/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.logging/README.md file /usr/share/doc/rhel-system-roles/logging/ directory Requesting certificates using RHEL system roles . rsyslog.conf(5) and syslog(3) manual pages 17.4. Using the logging RHEL system roles with RELP Reliable Event Logging Protocol (RELP) is a networking protocol for data and message logging over the TCP network. It ensures reliable delivery of event messages and you can use it in environments that do not tolerate any message loss. The RELP sender transfers log entries in the form of commands and the receiver acknowledges them once they are processed. To ensure consistency, RELP stores the transaction number to each transferred command for any kind of message recovery. You can consider a remote logging system in between the RELP Client and RELP Server. The RELP Client transfers the logs to the remote logging system and the RELP Server receives all the logs sent by the remote logging system. To achieve that use case, you can use the logging RHEL system role to configure the logging system to reliably send and receive log entries. 17.4.1. Configuring client logging with RELP You can use the logging RHEL system role to configure a transfer of log messages stored locally to the remote logging system with RELP. This procedure configures RELP on all hosts in the clients group in the Ansible inventory. The RELP configuration uses Transport Layer Security (TLS) to encrypt the message transmission for secure transfer of logs over the network. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure client-side of the remote logging solution using RELP hosts: managed-node-01.example.com tasks: - name: Deploy basic input and RELP output ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: basic_input type: basics logging_outputs: - name: relp_client type: relp target: logging.server.com port: 20514 tls: true ca_cert: /etc/pki/tls/certs/ca.pem cert: /etc/pki/tls/certs/client-cert.pem private_key: /etc/pki/tls/private/client-key.pem pki_authmode: name permitted_servers: - '*.server.example.com' logging_flows: - name: example_flow inputs: [basic_input] outputs: [relp_client] The settings specified in the example playbook include the following: target This is a required parameter that specifies the host name where the remote logging system is running. port Port number the remote logging system is listening. tls Ensures secure transfer of logs over the network. If you do not want a secure wrapper you can set the tls variable to false . 
By default tls parameter is set to true while working with RELP and requires key/certificates and triplets { ca_cert , cert , private_key } and/or { ca_cert_src , cert_src , private_key_src }. If the { ca_cert_src , cert_src , private_key_src } triplet is set, the default locations /etc/pki/tls/certs and /etc/pki/tls/private are used as the destination on the managed node to transfer files from control node. In this case, the file names are identical to the original ones in the triplet If the { ca_cert , cert , private_key } triplet is set, files are expected to be on the default path before the logging configuration. If both triplets are set, files are transferred from local path from control node to specific path of the managed node. ca_cert Represents the path to CA certificate. Default path is /etc/pki/tls/certs/ca.pem and the file name is set by the user. cert Represents the path to certificate. Default path is /etc/pki/tls/certs/server-cert.pem and the file name is set by the user. private_key Represents the path to private key. Default path is /etc/pki/tls/private/server-key.pem and the file name is set by the user. ca_cert_src Represents local CA certificate file path which is copied to the managed node. If ca_cert is specified, it is copied to the location. cert_src Represents the local certificate file path which is copied to the managed node. If cert is specified, it is copied to the location. private_key_src Represents the local key file path which is copied to the managed node. If private_key is specified, it is copied to the location. pki_authmode Accepts the authentication mode as name or fingerprint . permitted_servers List of servers that will be allowed by the logging client to connect and send logs over TLS. inputs List of logging input dictionary. outputs List of logging output dictionary. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.logging/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.logging/README.md file /usr/share/doc/rhel-system-roles/logging/ directory rsyslog.conf(5) and syslog(3) manual pages 17.4.2. Configuring server logging with RELP You can use the logging RHEL system role to configure a server for receiving log messages from the remote logging system with RELP. This procedure configures RELP on all hosts in the server group in the Ansible inventory. The RELP configuration uses TLS to encrypt the message transmission for secure transfer of logs over the network. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. 
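In addition to the prerequisites above, you might want to confirm afterwards that the RELP listener is active. The following commands are only an illustrative check, assume the playbook from the next step has already been applied, and use the example RELP port 20514:
# Check that the role installed the rsyslog RELP module
rpm -q rsyslog-relp
# Confirm that rsyslogd listens on the RELP port
ss -tlnp | grep -w 20514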
Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure server-side of the remote logging solution using RELP hosts: managed-node-01.example.com tasks: - name: Deploying remote input and remote_files output ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: relp_server type: relp port: 20514 tls: true ca_cert: /etc/pki/tls/certs/ca.pem cert: /etc/pki/tls/certs/server-cert.pem private_key: /etc/pki/tls/private/server-key.pem pki_authmode: name permitted_clients: - '*example.client.com' logging_outputs: - name: remote_files_output type: remote_files logging_flows: - name: example_flow inputs: relp_server outputs: remote_files_output The settings specified in the example playbook include the following: port Port number the remote logging system is listening. tls Ensures secure transfer of logs over the network. If you do not want a secure wrapper you can set the tls variable to false . By default tls parameter is set to true while working with RELP and requires key/certificates and triplets { ca_cert , cert , private_key } and/or { ca_cert_src , cert_src , private_key_src }. If the { ca_cert_src , cert_src , private_key_src } triplet is set, the default locations /etc/pki/tls/certs and /etc/pki/tls/private are used as the destination on the managed node to transfer files from control node. In this case, the file names are identical to the original ones in the triplet If the { ca_cert , cert , private_key } triplet is set, files are expected to be on the default path before the logging configuration. If both triplets are set, files are transferred from local path from control node to specific path of the managed node. ca_cert Represents the path to CA certificate. Default path is /etc/pki/tls/certs/ca.pem and the file name is set by the user. cert Represents the path to the certificate. Default path is /etc/pki/tls/certs/server-cert.pem and the file name is set by the user. private_key Represents the path to private key. Default path is /etc/pki/tls/private/server-key.pem and the file name is set by the user. ca_cert_src Represents local CA certificate file path which is copied to the managed node. If ca_cert is specified, it is copied to the location. cert_src Represents the local certificate file path which is copied to the managed node. If cert is specified, it is copied to the location. private_key_src Represents the local key file path which is copied to the managed node. If private_key is specified, it is copied to the location. pki_authmode Accepts the authentication mode as name or fingerprint . permitted_clients List of clients that will be allowed by the logging server to connect and send logs over TLS. inputs List of logging input dictionary. outputs List of logging output dictionary. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.logging/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.logging/README.md file /usr/share/doc/rhel-system-roles/logging/ directory rsyslog.conf(5) and syslog(3) manual pages
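This section does not include a verification procedure, but a quick end-to-end check can be sketched as follows. The destination directory under /var/log/remote/ depends on the remote_files defaults on your server, so treat the path as an assumption:
# On the RELP client, send a test message
logger "RELP test message"
# On the RELP server, search the remote log directory for it
grep -r "RELP test message" /var/log/remote/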
[ "--- - name: Deploy the logging solution hosts: managed-node-01.example.com tasks: - name: Filter logs based on a specific value they contain ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: files_input type: basics logging_outputs: - name: files_output0 type: files property: msg property_op: contains property_value: error path: /var/log/errors.log - name: files_output1 type: files property: msg property_op: \"!contains\" property_value: error path: /var/log/others.log logging_flows: - name: flow0 inputs: [files_input] outputs: [files_output0, files_output1]", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "rsyslogd -N 1 rsyslogd: version 8.1911.0-6.el8, config validation run rsyslogd: End of config validation run. Bye.", "logger error", "cat /var/log/errors.log Aug 5 13:48:31 hostname root[6778]: error", "--- - name: Deploy the logging solution hosts: managed-node-01.example.com tasks: - name: Configure the server to receive remote input ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: remote_udp_input type: remote udp_ports: [ 601 ] - name: remote_tcp_input type: remote tcp_ports: [ 601 ] logging_outputs: - name: remote_files_output type: remote_files logging_flows: - name: flow_0 inputs: [remote_udp_input, remote_tcp_input] outputs: [remote_files_output] - name: Deploy the logging solution hosts: managed-node-02.example.com tasks: - name: Configure the server to output the logs to local files in directories named by remote host names ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: basic_input type: basics logging_outputs: - name: forward_output0 type: forwards severity: info target: <host1.example.com> udp_port: 601 - name: forward_output1 type: forwards facility: mail target: <host1.example.com> tcp_port: 601 logging_flows: - name: flows0 inputs: [basic_input] outputs: [forward_output0, forward_output1] [basic_input] [forward_output0, forward_output1]", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "rsyslogd -N 1 rsyslogd: version 8.1911.0-6.el8, config validation run (level 1), master config /etc/rsyslog.conf rsyslogd: End of config validation run. 
Bye.", "logger test", "cat /var/log/ <host2.example.com> /messages Aug 5 13:48:31 <host2.example.com> root[6778]: test", "--- - name: Configure remote logging solution using TLS for secure transfer of logs hosts: managed-node-01.example.com tasks: - name: Deploying files input and forwards output with certs ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_certificates: - name: logging_cert dns: ['localhost', 'www.example.com'] ca: ipa logging_pki_files: - ca_cert: /local/path/to/ca_cert.pem cert: /local/path/to/logging_cert.pem private_key: /local/path/to/logging_cert.pem logging_inputs: - name: input_name type: files input_log_path: /var/log/containers/*.log logging_outputs: - name: output_name type: forwards target: your_target_host tcp_port: 514 tls: true pki_authmode: x509/name permitted_server: 'server.example.com' logging_flows: - name: flow_name inputs: [input_name] outputs: [output_name]", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "--- - name: Configure remote logging solution using TLS for secure transfer of logs hosts: managed-node-01.example.com tasks: - name: Deploying remote input and remote_files output with certs ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_certificates: - name: logging_cert dns: ['localhost', 'www.example.com'] ca: ipa logging_pki_files: - ca_cert: /local/path/to/ca_cert.pem cert: /local/path/to/logging_cert.pem private_key: /local/path/to/logging_cert.pem logging_inputs: - name: input_name type: remote tcp_ports: 514 tls: true permitted_clients: ['clients.example.com'] logging_outputs: - name: output_name type: remote_files remote_log_path: /var/log/remote/%FROMHOST%/%PROGRAMNAME:::secpath-replace%.log async_writing: true client_count: 20 io_buffer_size: 8192 logging_flows: - name: flow_name inputs: [input_name] outputs: [output_name]", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "--- - name: Configure client-side of the remote logging solution using RELP hosts: managed-node-01.example.com tasks: - name: Deploy basic input and RELP output ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: basic_input type: basics logging_outputs: - name: relp_client type: relp target: logging.server.com port: 20514 tls: true ca_cert: /etc/pki/tls/certs/ca.pem cert: /etc/pki/tls/certs/client-cert.pem private_key: /etc/pki/tls/private/client-key.pem pki_authmode: name permitted_servers: - '*.server.example.com' logging_flows: - name: example_flow inputs: [basic_input] outputs: [relp_client]", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "--- - name: Configure server-side of the remote logging solution using RELP hosts: managed-node-01.example.com tasks: - name: Deploying remote input and remote_files output ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: relp_server type: relp port: 20514 tls: true ca_cert: /etc/pki/tls/certs/ca.pem cert: /etc/pki/tls/certs/server-cert.pem private_key: /etc/pki/tls/private/server-key.pem pki_authmode: name permitted_clients: - '*example.client.com' logging_outputs: - name: remote_files_output type: remote_files logging_flows: - name: example_flow inputs: relp_server outputs: remote_files_output", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/automating_system_administration_by_using_rhel_system_roles/configuring-logging-by-using-rhel-system-roles_automating-system-administration-by-using-rhel-system-roles
Red Hat build of MicroShift release notes
Red Hat build of MicroShift release notes Red Hat build of MicroShift 4.18 Highlights of what is new and what has changed with this MicroShift release Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html-single/red_hat_build_of_microshift_release_notes/index
Policy APIs
Policy APIs OpenShift Container Platform 4.17 Reference guide for policy APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/policy_apis/index
Chapter 16. Export and Import
Chapter 16. Export and Import Red Hat Single Sign-On can export and import its entire database. This is especially useful if you want to migrate your whole Red Hat Single Sign-On database from one environment to another or migrate to a different database (for example, from MySQL to Oracle). Export and import are triggered at server boot time, and their parameters are passed in via Java system properties. Because import and export happen at server startup, no other actions should be taken on the server or the database while they are in progress. You can export/import your database either to: Directory on local filesystem Single JSON file on your filesystem When importing using the directory strategy, note that the files need to follow the naming convention specified below. If you are importing files which were previously exported, the files already follow this convention. <REALM_NAME>-realm.json, such as "acme-roadrunner-affairs-realm.json" for the realm named "acme-roadrunner-affairs" <REALM_NAME>-users-<INDEX>.json, such as "acme-roadrunner-affairs-users-0.json" for the first users file of the realm named "acme-roadrunner-affairs" If you export to a directory, you can also specify the number of users that will be stored in each JSON file. Note If your database contains a larger number of users (500 or more), it is highly recommended to export to a directory rather than to a single file. Exporting to a single file can produce a very large file, whereas the directory provider uses a separate transaction for each "page" (file with users), which results in much better performance. The default number of users per file (and per transaction) is 50, which showed the best performance, but you can override it (see below). Exporting to a single file uses one transaction for the whole export and one for the whole import, which results in poor performance with a large number of users. To export into an unencrypted directory you can use: To export into a single JSON file you can use: Similarly, for import just use -Dkeycloak.migration.action=import instead of export . Here's an example of importing: Other available options are: -Dkeycloak.migration.realmName This property is used if you want to export just one specified realm instead of all. If not specified, all realms will be exported. -Dkeycloak.migration.usersExportStrategy This property specifies where users are exported. Possible values are: DIFFERENT_FILES - Users will be exported into different files according to the maximum number of users per file. This is the default value. SKIP - Exporting of users will be skipped completely. REALM_FILE - All users will be exported to the same file as the realm settings. (The result will be a file like "foo-realm.json" with both realm data and users.) SAME_FILE - All users will be exported to a single file separate from the realm file. (The result will be a file like "foo-realm.json" with realm data and "foo-users.json" with users.) -Dkeycloak.migration.usersPerFile This property specifies the number of users per file (and also per DB transaction). It is 50 by default. It is used only if usersExportStrategy is DIFFERENT_FILES. -Dkeycloak.migration.strategy This property is used during import. It specifies how to proceed if a realm with the same name already exists in the database you are importing into. Possible values are: IGNORE_EXISTING - Ignore importing if a realm of this name already exists.
OVERWRITE_EXISTING - Remove the existing realm and import it again with the new data from the JSON file. Specify this if you want to fully migrate one environment to another and ensure that the new environment contains the same data as the old one. When importing realm files that weren't exported before, you can use the keycloak.import option. If more than one realm file needs to be imported, specify a comma-separated list of file names. This is more appropriate than the previous approaches, because the import happens only after the master realm has been initialized. Examples: -Dkeycloak.import=/tmp/realm1.json -Dkeycloak.import=/tmp/realm1.json,/tmp/realm2.json 16.1. Admin console export/import Most resources can be imported and exported from the admin console. Export of users is not supported. Note Attributes containing secrets or private information are masked in the export file. Export files obtained via the Admin Console are therefore not appropriate for backups or data transfer between servers; only boot-time exports are appropriate for that. The files created during a "startup" export can also be used to import from the admin UI. This way, you can export from one realm and import to another realm, or export from one server and import to another. Note The admin console export/import allows just one realm per file. Warning The admin console import allows you to "overwrite" resources if you choose. Use this feature with caution, especially on a production system. The .json files produced by an Admin Console export are generally not appropriate for data import, since they contain invalid values for secrets. Warning The admin console export allows you to export clients, groups, and roles. If your realm contains a great number of any of these assets, the operation may take some time to complete, and during that time the server may not be responsive to user requests. Use this feature with caution, especially on a production system.
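As a closing illustration of the boot-time options described earlier in this chapter, the following command exports a single realm to a directory with a custom page size; the realm name myrealm and the /tmp/export path are illustrative:
bin/standalone.sh -Dkeycloak.migration.action=export -Dkeycloak.migration.provider=dir -Dkeycloak.migration.dir=/tmp/export -Dkeycloak.migration.realmName=myrealm -Dkeycloak.migration.usersExportStrategy=DIFFERENT_FILES -Dkeycloak.migration.usersPerFile=500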
[ "bin/standalone.sh -Dkeycloak.migration.action=export -Dkeycloak.migration.provider=dir -Dkeycloak.migration.dir=<DIR TO EXPORT TO>", "bin/standalone.sh -Dkeycloak.migration.action=export -Dkeycloak.migration.provider=singleFile -Dkeycloak.migration.file=<FILE TO EXPORT TO>", "bin/standalone.sh -Dkeycloak.migration.action=import -Dkeycloak.migration.provider=singleFile -Dkeycloak.migration.file=<FILE TO IMPORT> -Dkeycloak.migration.strategy=OVERWRITE_EXISTING" ]
https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.4/html/server_administration_guide/export_import
Chapter 4. Creating system images with Image Builder web console interface
Chapter 4. Creating system images with Image Builder web console interface Image Builder is a tool for creating custom system images. To use Image Builder and create your custom system images, you can use the web console interface. Note The command-line interface is the currently preferred alternative, because it offers more features. 4.1. Accessing Image Builder GUI in the RHEL 7 web console The cockpit-composer plug-in for the RHEL 7 web console enables users to manage Image Builder blueprints and composes with a graphical interface. Note that the command-line interface is currently the preferred method for controlling Image Builder. Prerequisites You must have root access to the system. Procedure 1. Open https://localhost:9090/ in a web browser on the system where Image Builder is installed. Figure 4.1. Log into the web console For more information on how to remotely access Image Builder, see managing systems using the RHEL 7 web console . 2. Log into the web console with your root username and password. 3. To display the Image Builder controls, click the Image Builder icon, which is in the upper-left corner of the window. The Image Builder view opens, listing existing blueprints. Figure 4.2. Image Builder on Web Console Related information Chapter 3, Creating system images with Image Builder command-line interface You will be able to create customized RHEL OS images using Image Builder scenario
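If the web console plug-in is not yet set up, a minimal preparation sketch follows. It assumes a RHEL 7 system where the lorax-composer backend and the cockpit-composer plug-in are not yet installed and firewalld is running; adjust to your environment:
# Install the Image Builder backend, CLI, and web console plug-in
yum install lorax-composer composer-cli cockpit-composer
# Enable and start the Image Builder and web console sockets
systemctl enable lorax-composer.socket
systemctl start lorax-composer.socket
systemctl enable cockpit.socket
systemctl start cockpit.socket
# Allow access to the web console through the firewall
firewall-cmd --permanent --add-service=cockpit
firewall-cmd --reload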
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/image_builder_guide/chap-documentation-image_builder-test_chapter_4
30.2. sudo Rules in Identity Management
30.2. sudo Rules in Identity Management Using sudo rules, you can define who can do what , where , and as whom . Who are the users allowed to use sudo . What are the commands that can be used with sudo . Where are the target hosts on which the users are allowed to use sudo . As whom is the system or other user identity which the users assume to perform tasks. 30.2.1. External Users and Hosts in sudo Rules IdM accepts external entities in sudo rules. External entities are entities that are stored outside of the IdM domain, such as users or hosts that are not part of the IdM domain. For example, you can use sudo rules to grant root access to a member of the IT group in IdM, where the root user is not a user defined in the IdM domain. Or, for another example, administrators can block access to certain hosts that are on a network but are not part of the IdM domain. 30.2.2. User Group Support for sudo Rules You can use sudo to give access to whole user groups in IdM. IdM supports both Unix and non-POSIX groups. Note that creating non-POSIX groups can cause access problems because any users in a non-POSIX group inherit non-POSIX permissions from the group. 30.2.3. Support for sudoers Options IdM supports sudoers options. For a complete list of the available sudoers options, see the sudoers (5) man page. Note that IdM does not allow white spaces or line breaks in sudoers options. Therefore, instead of supplying multiple options in a comma-separated list, add them separately. For example, to add two sudoers options from the command line: Similarly, make sure to supply long options on one line. For example, from the command line:
[ "ipa sudorule-add-option sudo_rule_name Sudo Option: first_option ipa sudorule-add-option sudo_rule_name Sudo Option: second_option", "ipa sudorule-add-option sudo_rule_name Sudo Option: env_keep=\"COLORS DISPLAY EDITOR HOSTNAME HISTSIZE INPUTRC KDEDIR LESSSECURE LS_COLORS MAIL PATH PS1 PS2 XAUTHORITY\"" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/sudo-rules
Chapter 4. Red Hat Directory Server 11.7
Chapter 4. Red Hat Directory Server 11.7 Learn about new system requirements, updates and new features, known issues, and deprecated functionality implemented in Directory Server 11.7. 4.1. Important updates and new features Learn about new features and important updates in Directory Server 11.7. Directory Server rebased to version 1.4.3.34 The 389-ds-base packages have been upgraded to upstream version 1.4.3.34. Important updates and new features in the 389-ds-base packages The Red Hat Directory Server features that are included in the 389-ds-base packages are documented in the Red Hat Enterprise Linux 8.8 Release Notes: New nsslapd-auditlog-display-attrs configuration parameter for the Directory Server audit log Directory server now supports ECDSA private keys for TLS New pamModuleIsThreadSafe configuration option is now available 4.2. Bug fixes Learn about bugs fixed in Directory Server 11.7 that have a significant impact on users. The ns-slapd binary is now linked with the thread-safe libldap_r library, no longer causing segmentation fault An upstream change in the build system introduced a regression by linking the ns-slapd binary with the non thread-safe libldap library instead of the thread-safe libldap_r . Consequently, the ns-slapd process could fail with a segmentation fault. This update fixes the problem with the build system code and the ns-slapd binary is now linked back with the thread-safe libldap_r library. As a result, the segmentation fault no longer occurs. (BZ#2268138) Directory Server now flushes the entry cache less frequently Previously, Directory Server flushed its entry cache even when it was not necessary. As a result, in certain situations, Directory Server was unresponsive and had bad performance. With this update, Directory Server flushes the entry cache only when it is necessary. (BZ#2268136) Bug fixes in the 389-ds-base packages The Red Hat Directory Server bug fixes that are included in the 389-ds-base packages are documented in the Red Hat Enterprise Linux 8.8 Release Notes: The scheduled time of the changelog compaction now works correctly 4.3. Known issues Learn about known problems and, if applicable, workarounds in Directory Server 11.7. Access log displays an error message during Directory Server installation in FIPS mode When you install Directory Server in the FIPS mode, the access log file displays the following error message: Such behavior happens because at first, Directory Server finds that TLS is not initialized and logs the error message. However, later when the dscreate utility completes TLS initialization and enables security, the error message is no longer present. (BZ#2153668) Directory Server settings that are changed outside the web console's window are not automatically visible Because of the design of the Directory Server module in the Red Hat Enterprise Linux 8 web console, the web console does not automatically display the latest settings if a user changes the configuration outside of the console's window. For example, if you change the configuration using the command line while the web console is open, the new settings are not automatically updated in the web console. This applies also if you change the configuration using the web console on a different computer. To work around the problem, manually refresh the web console in the browser if the configuration has been changed outside the console's window. 
(BZ#1654281) Configuring a referral for a suffix fails in Directory Server If you set a back-end referral in Directory Server, setting the state of the backend using the dsconf <instance_name> backend suffix set --state referral command fails with the following error: As a consequence, configuring a referral for suffixes fails. To work around the problem: Set the nsslapd-referral parameter manually: Set the back-end state: As a result, with the workaround, you can configure a referral for a suffix. (BZ#2063033) Directory Server replication fails after changing the password of the replication manager account After a password change, Directory Server does not properly update the password cache for the replication agreement. As a consequence, when you change the password for the replication manager account, the replication breaks. To work around this problem, restart the Directory Server instance. As a result, the cache is rebuilt at start-up, and the replication connection binds with the new password instead of the old one. (BZ#2101473) Known issues in the 389-ds-base package Red Hat Directory Server known issues that affect the 389-ds-base package are documented in the Red Hat Enterprise Linux 8.8 Release Notes: The default keyword for enabled ciphers in the NSS does not work in conjunction with other ciphers (BZ#1817505)
[ "[time_stamp] - WARN - slapd_do_all_nss_ssl_init - ERROR: TLS is not enabled, and the machine is in FIPS mode. Some functionality won't work correctly (for example, users with PBKDF2_SHA256 password scheme won't be able to log in). It's highly advisable to enable TLS on this instance.", "Error: 103 - 9 - 53 - Server is unwilling to perform - [] - need to set nsslapd-referral before moving to referral state", "ldapmodify -D \"cn=Directory Manager\" -W -H ldap://server.example.com dn: cn=dc\\3Dexample\\2Cdc\\3Dcom,cn=mapping tree,cn=config changetype: modify add: nsslapd-referral nsslapd-referral: ldap://remote_server:389/dc=example,dc=com", "dsconf <instance_name> backend suffix set --state referral" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/release_notes/directory-server-11.7
Chapter 1. Operating OpenShift Lightspeed
Chapter 1. Operating OpenShift Lightspeed There are different ways you can access the OpenShift Lightspeed user interface. Clicking the OpenShift Lightspeed chat window floating icon in the lower-right of the screen Using the Actions dropdown list on specific resources in the OpenShift Container Platform web console Using the Node options icon from a list of resources in the OpenShift Container Platform web console The following topics provide information about submitting questions to Red Hat OpenShift Lightspeed. 1.1. Using the chat window to ask a question This procedure explains how to use the Red Hat OpenShift Lightspeed icon to ask a question using natural language. Procedure Click the Red Hat OpenShift Lightspeed icon in the lower-right corner of the screen. This action presents the Red Hat OpenShift Lightspeed user interface. Enter a question. Click the Submit button. Lightspeed returns information based on your question. 1.2. Using the Actions dropdown list to ask a question This procedure explains how to use the Actions dropdown list to ask OpenShift Lightspeed a question. Procedure Navigate to a resource in the OpenShift Container Platform web console. For example, click Workloads Pods and then click the name of a pod. Click the Actions dropdown list and select Ask OpenShift Lightspeed . This presents the Red Hat OpenShift Lightspeed user interface. Enter a question. Click the Submit button. Lightspeed returns information based on your question. 1.3. Using the Node options icon to ask a question This procedure explains how to use the Node options to ask OpenShift Lightspeed a question. Procedure Navigate to a resource in the OpenShift Container Platform web console. For example, click Workloads Pods . Click the Node options icon in the row for a pod and select Ask OpenShift Lightspeed . This action presents the Red Hat OpenShift Lightspeed natural language interface. Enter a question. Click the Submit button. The Lightspeed natural language interface returns information based on your query. 1.4. About Lightspeed conversations OpenShift Lightspeed is designed to answer questions about OpenShift, Kubernetes, and additional OpenShift components, such as OpenShift Virtualization, OpenShift Pipelines, and OpenShift Service Mesh. OpenShift Lightspeed will not answer questions that are unrelated to the targeted topics. In some cases, you may need to rephrase an ambiguously worded question because OpenShift Lightspeed could not correctly interpret what you asked. Conversation history helps provide context that OpenShift Lightspeed references when generating answers. Using specific language helps increase the success of responses. For example, instead of asking "How do I start a virtual machine?" try asking "How do I start a virtual machine in OpenShift Virtualization?" Conversation history does not persist if you reload the console page. Reloading the console page performs the same action as clicking the New Chat button. Conversation history is also erased if OpenShift Lightspeed is restarted. 1.5. Providing feedback for a conversation OpenShift Lightspeed includes an integrated feedback system. This procedure explains how to provide feedback to Red Hat for a specific OpenShift Lightspeed question and response. Prerequisites You have installed the OpenShift Lightspeed Operator and deployed the OpenShift Lightspeed service. Procedure Click the Red Hat OpenShift Lightspeed icon in the lower-right corner of the screen. Enter a question into the Send message field: Click the Submit button. 
Lightspeed returns information. To provide feedback on a particular question and response, click the thumbs up or thumbs down button. This action presents a field that allows you to enter additional information. Click the Submit button. Your rating, any text you entered, the specific question you asked OpenShift Lightspeed, and the response are all sent to Red Hat for review. 1.6. Sample conversation overview The following examples are intended to serve as samples you can use to start a conversation with OpenShift Lightspeed. In some cases, the conversation includes an initial question and then one or more follow-up questions. Rephrasing questions or asking a more precise follow-up question can help increase the success of the reply. Some of the examples suggest specific workflows to follow in the user interface. Be sure to ask the follow-up questions without starting a new dialog. OpenShift Lightspeed uses the entire conversation as context, so follow-up questions should help refine answers. 1.6.1. Asking a general question This procedure shows how to ask OpenShift Lightspeed a general question about the OpenShift Container Platform. Procedure Click the Red Hat OpenShift Lightspeed icon in the lower-right corner of the screen. Enter the following question into the Send message field: What is an OpenShift imagestream used for? Click the Submit button. Lightspeed returns information that provides an explanation of an imagestream and details about usage. 1.6.2. Asking related questions This procedure shows how to ask OpenShift Lightspeed a series of related questions in order to obtain more highly refined information. Lightspeed uses the conversation history to help create context. Procedure Click the Red Hat OpenShift Lightspeed icon in the lower-right corner of the screen. Enter the following question into the Send message field: How are OpenShift security context constraints used? Click the Submit button. Lightspeed returns information. Enter the following question into the Send message field: Can I control who can use a particular SCC? Click the Submit button. Lightspeed returns more highly refined information that contains additional details. Enter the following question into the Send message field: Can you give me an example? Click the Submit button. Lightspeed returns sample code that you can copy and use. 1.6.3. Attaching a resource object to your question This procedure explains how to attach a resource object to provide additional context for your question. Procedure Navigate to a supported resource in the OpenShift Container Platform web console. For example, click Workloads Pods and then click the name of a pod. Click the Red Hat OpenShift Lightspeed icon in the lower-right corner of the screen. Click the plus icon in the Red Hat OpenShift Lightspeed user interface to attach a resource object. Select the resource object to attach to the question. Tip You can attach the following resources: CronJob , DaemonSet , Deployment , Job , Pod , ReplicaSet and StatefulSet from Workloads in the OpenShift Container Platform web console. Alert from Observe in the OpenShift Container Platform web console. Services , Routes , Ingresses , and NetworkPolicies from Networking in the OpenShift Container Platform web console. Enter a question. Click the Submit button. Lightspeed returns information based on your question. 1.6.4. Using Lightspeed to troubleshoot alerts This procedure shows how to use OpenShift Lightspeed to troubleshoot alerts. 
Procedure In the OpenShift Container Platform web console, select the Administrator perspective. Click Observe Alerting . Click the AlertmanagerReceiversNotConfigured alert. Copy the title and text of the warning. Click the Red Hat OpenShift Lightspeed icon in the lower-right corner of the screen. In the Red Hat OpenShift Lightspeed user interface, enter the following text: What should I do about this alert? Paste the title and text associated with the alert. Click the Submit button. Lightspeed references the alert information to provide context when generating the information it returns. 1.7. Starting a new chat conversation When you ask follow-up questions, OpenShift Lightspeed references the conversation history to provide additional context that influences the replies it generates. Whenever you initiate a new conversation with OpenShift Lightspeed you should clear the chat history. This procedure explains how to clear the chat history and start a new conversation. Procedure In the Red Hat OpenShift Lightspeed natural language interface, click New chat . This action clears the history of your conversation. Enter a question. Click the Submit button. OpenShift Lightspeed only references the new question when generating a response.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_lightspeed/1.0tp1/html/operate/ols-using-openshift-lightspeed
Chapter 30. Getting started with an ext3 file system
Chapter 30. Getting started with an ext3 file system As a system administrator, you can create, mount, resize, back up, and restore an ext3 file system. The ext3 file system is essentially an enhanced version of the ext2 file system. 30.1. Features of an ext3 file system The following are the features of an ext3 file system: Availability: After an unexpected power failure or system crash, a file system check is not required due to the journaling provided. The default journal size takes about a second to recover, depending on the speed of the hardware. Note The only supported journaling mode in ext3 is data=ordered (default). For more information, see the Red Hat Knowledgebase solution Is the EXT journaling option "data=writeback" supported in RHEL? . Data Integrity: The ext3 file system prevents loss of data integrity during an unexpected power failure or system crash. Speed: Despite writing some data more than once, ext3 has a higher throughput in most cases than ext2 because ext3's journaling optimizes hard drive head motion. Easy Transition: It is easy to migrate from ext2 to ext3 and gain the benefits of a robust journaling file system without reformatting. Additional resources ext3 man page on your system 30.2. Creating an ext3 file system As a system administrator, you can create an ext3 file system on a block device using the mkfs.ext3 command. Prerequisites A partition on your disk. For information about creating MBR or GPT partitions, see Creating a partition table on a disk with parted . Alternatively, use an LVM or MD volume. Procedure To create an ext3 file system: For a regular-partition device, an LVM volume, an MD volume, or a similar device, use the following command: Replace /dev/ block_device with the path to a block device. For example, /dev/sdb1 , /dev/disk/by-uuid/05e99ec8-def1-4a5e-8a9d-5945339ceb2a , or /dev/my-volgroup/my-lv . In general, the default options are optimal for most usage scenarios. For striped block devices (for example, RAID5 arrays), the stripe geometry can be specified at the time of file system creation. Using proper stripe geometry enhances the performance of an ext3 file system. For example, to create a file system with a 64k stride (that is, 16 x 4096) on a 4k-block file system, use the following command: In the given example: stride=value: Specifies the RAID chunk size stripe-width=value: Specifies the number of data disks in a RAID device, or the number of stripe units in the stripe. Note To specify a UUID when creating a file system: Replace UUID with the UUID you want to set: for example, 7cd65de3-e0be-41d9-b66d-96d749c02da7 . Replace /dev/ block_device with the path to an ext3 file system to have the UUID added to it: for example, /dev/sda8 . To specify a label when creating a file system: To view the created ext3 file system: Additional resources ext3 and mkfs.ext3 man pages on your system 30.3. Mounting an ext3 file system As a system administrator, you can mount an ext3 file system using the mount utility. Prerequisites An ext3 file system. For information about creating an ext3 file system, see Creating an ext3 file system . Procedure To create a mount point to mount the file system: Replace /mount/point with the directory name where the mount point of the partition must be created. To mount an ext3 file system: To mount an ext3 file system with no extra options: To mount the file system persistently, see Persistently mounting file systems .
To view the mounted file system: Additional resources mount , ext3 , and fstab man pages on your system Mounting file systems 30.4. Resizing an ext3 file system As a system administrator, you can resize an ext3 file system using the resize2fs utility. The resize2fs utility reads the size in units of file system block size, unless a suffix indicating a specific unit is used. The following suffixes indicate specific units: s (sectors) - 512 byte sectors K (kilobytes) - 1,024 bytes M (megabytes) - 1,048,576 bytes G (gigabytes) - 1,073,741,824 bytes T (terabytes) - 1,099,511,627,776 bytes Prerequisites An ext3 file system. For information about creating an ext3 file system, see Creating an ext3 file system . An underlying block device of an appropriate size to hold the file system after resizing. Procedure To resize an ext3 file system, take the following steps: To shrink or grow the size of an unmounted ext3 file system: Replace /dev/block_device with the path to the block device, for example /dev/sdb1 . Replace size with the required resize value using the s , K , M , G , and T suffixes. An ext3 file system can be grown while mounted using the resize2fs command: Note The size parameter is optional (and often redundant) when expanding. The resize2fs utility automatically expands the file system to fill the available space of the container, usually a logical volume or partition. To view the resized file system: Additional resources resize2fs , e2fsck , and ext3 man pages on your system
[ "mkfs.ext3 /dev/ block_device", "mkfs.ext3 -E stride=16,stripe-width=64 /dev/ block_device", "mkfs.ext3 -U UUID /dev/ block_device", "mkfs.ext3 -L label-name /dev/ block_device", "blkid", "mkdir /mount/point", "mount /dev/ block_device /mount/point", "df -h", "umount /dev/ block_device e2fsck -f /dev/ block_device resize2fs /dev/ block_device size", "resize2fs /mount/device size", "df -h" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_file_systems/getting-started-with-an-ext3-file-system_managing-file-systems
Chapter 4. sVirt
Chapter 4. sVirt 4.1. Introduction Since virtual machines under KVM are implemented as Linux processes, KVM leverages the standard Linux security model to provide isolation and resource controls. The Linux kernel includes SELinux (Security-Enhanced Linux), a project developed by the US National Security Agency to add mandatory access control (MAC), multi-level security (MLS) and multi-category security (MCS) through a flexible and customizable security policy. SELinux provides strict resource isolation and confinement for processes running on top of the Linux kernel, including virtual machine processes. The sVirt project builds upon SELinux to further facilitate virtual machine isolation and controlled sharing. For example, fine-grained permissions could be applied to group virtual machines together to share resources. From a security point of view, the hypervisor is a tempting target for attackers, as a compromised hypervisor could lead to the compromise of all virtual machines running on the host system. Integrating SELinux into virtualization technologies helps improve hypervisor security against malicious virtual machines trying to gain access to the host system or other virtual machines. Refer to the following image which represents isolated guests, limiting the ability for a compromised hypervisor (or guest) to launch further attacks, or to extend to another instance: Figure 4.1. Attack path isolated by SELinux Note For more information on SELinux, refer to Red Hat Enterprise Linux Security-Enhanced Linux .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_security_guide/chap-Virtualization_Security_Guide-sVirt
Chapter 7. Enabling OAuth 2.0 token-based access
Chapter 7. Enabling OAuth 2.0 token-based access Streams for Apache Kafka supports OAuth 2.0 for securing Kafka clusters by integrating with an OAuth 2.0 authorization server. Kafka brokers and clients both need to be configured to use OAuth 2.0. OAuth 2.0 enables standardized token-based authentication and authorization between applications, using a central authorization server to issue tokens that grant limited access to resources. You can define specific scopes for fine-grained access control. Scopes correspond to different levels of access to Kafka topics or operations within the cluster. OAuth 2.0 also supports single sign-on and integration with identity providers. 7.1. Configuring an OAuth 2.0 authorization server Before you can use OAuth 2.0 token-based access, you must configure an authorization server for integration with Streams for Apache Kafka. The steps are dependent on the chosen authorization server. Consult the product documentation for the authorization server for information on how to set up OAuth 2.0 access. Prepare the authorization server to work with Streams for Apache Kafka by defining OAuth 2.0 clients for Kafka and each Kafka client component of your application. In relation to the authorization server, the Kafka cluster and Kafka clients are both regarded as OAuth 2.0 clients. In general, configure OAuth 2.0 clients in the authorization server with the following client credentials enabled: Client ID (for example, kafka for the Kafka cluster) Client ID and secret as the authentication mechanism Note You only need to use a client ID and secret when using a non-public introspection endpoint of the authorization server. The credentials are not typically required when using public authorization server endpoints, as with fast local JWT token validation. 7.2. Using OAuth 2.0 token-based authentication Streams for Apache Kafka supports the use of OAuth 2.0 for token-based authentication. An OAuth 2.0 authorization server handles the granting of access and inquiries about access. Kafka clients authenticate to Kafka brokers. Brokers and clients communicate with the authorization server, as necessary, to obtain or validate access tokens. For a deployment of Streams for Apache Kafka, OAuth 2.0 integration provides the following support: Server-side OAuth 2.0 authentication for Kafka brokers Client-side OAuth 2.0 authentication for Kafka MirrorMaker, Kafka Connect, and the Kafka Bridge Streams for Apache Kafka on RHEL includes two OAuth 2.0 libraries: kafka-oauth-client Provides a custom login callback handler class named io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler . To handle the OAUTHBEARER authentication mechanism, use the login callback handler with the OAuthBearerLoginModule provided by Apache Kafka. kafka-oauth-common A helper library that provides some of the functionality needed by the kafka-oauth-client library. The provided client libraries also have dependencies on some additional third-party libraries, such as keycloak-core , jackson-databind , and slf4j-api . We recommend using a Maven project to package your client to ensure that all the dependency libraries are included. Dependency libraries might change in future versions. Additional resources OAuth 2.0 site 7.2.1. 
Configuring OAuth 2.0 authentication on listeners To secure Kafka brokers with OAuth 2.0 authentication, configure a Kafka listener to use OAuth 2.0 authentication and a client authentication mechanism in the Kafka server.properties file, and add further configuration depending on the authentication mechanism and type of token validation used in the authentication. A minimum configuration is required. You can also configure a TLS listener, where TLS is used for inter-broker communication. We recommend using OAuth 2.0 authentication together with TLS encryption. Without encryption, the connection is vulnerable to network eavesdropping and unauthorized access through token theft. When you have defined the type of authentication as OAuth 2.0, you add configuration based on the type of validation, either as fast local JWT validation or token validation using an introspection endpoint. Enabling SASL authentication mechanisms Use one or both of the following SASL mechanisms for clients to exchange credentials and establish authenticated sessions with Kafka. OAUTHBEARER Using the OAUTHBEARER authentication mechanism, credentials exchange uses a bearer token provided by an OAuth callback handler. Token provision can be configured to use the following methods: Client ID and secret (using the OAuth 2.0 client credentials mechanism ) Client ID and client assertion Long-lived access token Long-lived refresh token obtained manually OAUTHBEARER is recommended as it provides a higher level of security than PLAIN , though it can only be used by Kafka clients that support the OAUTHBEARER mechanism at the protocol level. Client credentials are never shared with Kafka. PLAIN PLAIN is a simple authentication mechanism used by all Kafka client tools. Consider using PLAIN only with Kafka clients that do not support OAUTHBEARER . Using the PLAIN authentication mechanism, credentials exchange can be configured to use the following methods: Client ID and secret (using the OAuth 2.0 client credentials mechanism ) Long-lived access token Regardless of the method used, the client must provide username and password properties to Kafka. Credentials are handled centrally behind a compliant authorization server, similar to how OAUTHBEARER authentication is used. The username extraction process depends on the authorization server configuration. Example listener configuration for the OAUTHBEARER mechanism sasl.enabled.mechanisms=OAUTHBEARER 1 listeners=CLIENT://0.0.0.0:9092 2 listener.security.protocol.map=CLIENT:SASL_PLAINTEXT 3 listener.name.client.sasl.enabled.mechanisms=OAUTHBEARER 4 sasl.mechanism.inter.broker.protocol=OAUTHBEARER 5 inter.broker.listener.name=CLIENT 6 listener.name.client.oauthbearer.sasl.server.callback.handler.class=io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler 7 listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule # ... 1 Enables the OAUTHBEARER mechanism for credentials exchange over SASL. 2 Configures a listener for client applications to connect to. The system hostname is used as an advertised hostname, which clients must resolve in order to reconnect. The listener is named CLIENT in this example. 3 Specifies the channel protocol for the listener. SASL_SSL is for TLS. SASL_PLAINTEXT is used for an unencrypted connection (no TLS), but there is risk of eavesdropping and interception at the TCP connection layer. 4 Specifies the OAUTHBEARER mechanism for the CLIENT listener. 
The client name ( CLIENT ) is usually specified in uppercase in the listeners property, in lowercase for listener.name properties ( listener.name.client ), and in lowercase when part of a listener.name. client .* property. 5 Specifies the OAUTHBEARER mechanism for inter-broker communication. 6 Specifies the listener for inter-broker communication. The specification is required for the configuration to be valid. 7 Configures OAuth 2.0 authentication on the client listener. Configuring OAuth 2.0 with properties or variables Configure OAuth 2.0 settings using Java Authentication and Authorization Service (JAAS) properties or environment variables. JAAS properties are configured in the server.properties configuration file, and passed as key-value pairs of the listener.name.<listener_name>.oauthbearer.sasl.jaas.config property. If using environment variables, you still need to provide the listener.name.<listener_name>.oauthbearer.sasl.jaas.config property in the server.properties file, but you can omit the other JAAS properties. You can use capitalized or upper-case environment variable naming conventions. The Streams for Apache Kafka OAuth 2.0 libraries use properties that start with: oauth. to configure authentication strimzi. to configure OAuth 2.0 authorization Configuring fast local JWT token validation Fast local JWT token validation involves checking a JWT token signature locally to ensure that the token meets the following criteria: Contains a typ (type) or token_type header claim value of Bearer to indicate it is an access token Is currently valid and not expired Has an issuer that matches a validIssuerURI You specify a validIssuerURI attribute when you configure the listener, so that any tokens not issued by the authorization server are rejected. The authorization server does not need to be contacted during fast local JWT token validation. You activate fast local JWT token validation by specifying a jwksEndpointUri attribute, the endpoint exposed by the OAuth 2.0 authorization server. The endpoint contains the public keys used to validate signed JWT tokens, which are sent as credentials by Kafka clients. All communication with the authorization server should be performed using TLS encryption. You can configure a certificate truststore and point to the truststore file. You might want to configure a userNameClaim to properly extract a username from the JWT token. If required, you can use a JsonPath expression like "['user.info'].['user.id']" to retrieve the username from nested JSON attributes within a token. If you want to use Kafka ACL authorization, identify the user by their username during authentication. (The sub claim in JWT tokens is typically a unique ID, not a username.) Example configuration for fast local JWT token validation # ... listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ 1 oauth.valid.issuer.uri="https://<auth_server_address>/<issuer-context>" \ 2 oauth.jwks.endpoint.uri="https://<oauth_server_address>/<path_to_jwks_endpoint>" \ 3 oauth.jwks.refresh.seconds="300" \ 4 oauth.jwks.refresh.min.pause.seconds="1" \ 5 oauth.jwks.expiry.seconds="360" \ 6 oauth.username.claim="preferred_username" \ 7 oauth.ssl.truststore.location="<path_to_truststore_p12_file>" \ 8 oauth.ssl.truststore.password="<truststore_password>" \ 9 oauth.ssl.truststore.type="PKCS12" ; 10 listener.name.client.oauthbearer.connections.max.reauth.ms=3600000 11 1 Configures the CLIENT listener for OAuth 2.0. 
Connectivity with the authorization server should use secure HTTPS connections. 2 A valid issuer URI. Only access tokens issued by this issuer will be accepted. (Always required.) 3 The JWKS endpoint URL. 4 The period between endpoint refreshes (default 300). 5 The minimum pause in seconds between consecutive attempts to refresh JWKS public keys. When an unknown signing key is encountered, the JWKS keys refresh is scheduled outside the regular periodic schedule with at least the specified pause since the last refresh attempt. The refreshing of keys follows the rule of exponential backoff, retrying on unsuccessful refreshes with ever increasing pause, until it reaches oauth.jwks.refresh.seconds . The default value is 1. 6 The duration the JWKs certificates are considered valid before they expire. Default is 360 seconds. If you specify a longer time, consider the risk of allowing access to revoked certificates. 7 The token claim (or key) that contains the actual user name in the token. The user name is the principal used to identify the user. The value will depend on the authentication flow and the authorization server used. If required, you can use a JsonPath expression like "['user.info'].['user.id']" to retrieve the username from nested JSON attributes within a token. 8 The location of the truststore used in the TLS configuration. 9 Password to access the truststore. 10 The truststore type in PKCS #12 format. 11 (Optional) Enforces session expiry when a token expires, and also activates the Kafka re-authentication mechanism . If the specified value is less than the time left for the access token to expire, then the client will have to re-authenticate before the actual token expiry. By default, the session does not expire when the access token expires, and the client does not attempt re-authentication. Configuring token validation using an introspection endpoint Token validation using an OAuth 2.0 introspection endpoint treats a received access token as opaque. The Kafka broker sends an access token to the introspection endpoint, which responds with the token information necessary for validation. Importantly, it returns up-to-date information if the specific access token is valid, and also information about when the token expires. To configure OAuth 2.0 introspection-based validation, you specify an introspection endpoint URI rather than the JWKs endpoint URI specified for fast local JWT token validation. Depending on the authorization server, you typically have to specify a client ID and client secret , because the introspection endpoint is usually protected. Example token validation configuration using an introspection endpoint # ... listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ oauth.introspection.endpoint.uri="https://<oauth_server_address>/<introspection_endpoint>" \ 1 oauth.client.id="kafka-broker" \ 2 oauth.client.secret="kafka-broker-secret" \ 3 oauth.ssl.truststore.location="<path_to_truststore_p12_file>" \ 4 oauth.ssl.truststore.password="<truststore_password>" \ 5 oauth.ssl.truststore.type="PKCS12" \ 6 oauth.username.claim="preferred_username" ; 7 1 URI of the token introspection endpoint. 2 Client ID of the Kafka broker. 3 Secret for the Kafka broker. 4 The location of the truststore used in the TLS configuration. 5 Password to access the truststore. 6 The truststore type in PKCS #12 format. 7 The token claim (or key) that contains the actual user name in the token. 
The user name is the principal used to identify the user. The value will depend on the authentication flow and the authorization server used. If required, you can use a JsonPath expression like "['user.info'].['user.id']" to retrieve the username from nested JSON attributes within a token. Authenticating brokers to the authorization server protected endpoints Usually, the certificates endpoint of the authorization server ( oauth.jwks.endpoint.uri ) is publicly accessible, while the introspection endpoint ( oauth.introspection.endpoint.uri ) is protected. However, this may vary depending on the authorization server configuration. The Kafka broker can authenticate to the authorization server's protected endpoints in one of two ways using HTTP authentication schemes: HTTP Basic authentication uses a client ID and secret. HTTP Bearer authentication uses a bearer token. To configure HTTP Basic authentication, set the following properties: oauth.client.id oauth.client.secret For HTTP Bearer authentication, set one of the following properties: oauth.server.bearer.token.location to specify the file path on disk containing the bearer token. oauth.server.bearer.token to specify the bearer token in clear text. Including additional configuration options Specify additional settings depending on the authentication requirements and the authorization server you are using. Some of these properties apply only to certain authentication mechanisms or when used in combination with other properties. For example, when using OAuth over PLAIN , access tokens are passed as password property values with or without a $accessToken: prefix. If you configure a token endpoint ( oauth.token.endpoint.uri ) in the listener configuration, you need the prefix. If you don't configure a token endpoint in the listener configuration, you don't need the prefix. The Kafka broker interprets the password as a raw access token. If the password is set as the access token, the username must be set to the same principal name that the Kafka broker obtains from the access token. You can specify username extraction options in your listener using the oauth.username.claim , oauth.username.prefix , oauth.fallback.username.claim , oauth.fallback.username.prefix , and oauth.userinfo.endpoint.uri properties. The username extraction process also depends on your authorization server; in particular, how it maps client IDs to account names. Note The PLAIN mechanism does not support password grant authentication. Use either client credentials (client ID + secret) or an access token for authentication. Example additional configuration settings listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ # ... 
oauth.token.endpoint.uri="https://<auth_server_address>/<path_to_token_endpoint>" \ 1 oauth.custom.claim.check="@.custom == 'custom-value'" \ 2 oauth.scope="<scope>" \ 3 oauth.check.audience="true" \ 4 oauth.audience="<audience>" \ 5 oauth.client.id="kafka-broker" \ 6 oauth.client.secret="kafka-broker-secret" \ 7 oauth.connect.timeout.seconds=60 \ 8 oauth.read.timeout.seconds=60 \ 9 oauth.http.retries=2 \ 10 oauth.http.retry.pause.millis=300 \ 11 oauth.groups.claim="$.groups" \ 12 oauth.groups.claim.delimiter="," \ 13 oauth.include.accept.header="false" \ 14 oauth.check.issuer=false \ 15 oauth.username.prefix="user-account-" \ 16 oauth.fallback.username.claim="client_id" \ 17 oauth.fallback.username.prefix="service-account-" \ 18 oauth.valid.token.type="bearer" \ 19 oauth.userinfo.endpoint.uri="https://<auth_server_address>/<path_to_userinfo_endpoint>" ; 20 1 The OAuth 2.0 token endpoint URL to your authorization server. For production, always use https:// urls. Required when KeycloakAuthorizer is used, or an OAuth 2.0 enabled listener is used for inter-broker communication. 2 (Optional) Custom claim checking . A JsonPath filter query that applies additional custom rules to the JWT access token during validation. If the access token does not contain the necessary data, it is rejected. When using the introspection endpoint method, the custom check is applied to the introspection endpoint response JSON. 3 (Optional) A scope parameter passed to the token endpoint. A scope is used when obtaining an access token for inter-broker authentication. It is also used in the name of a client for OAuth 2.0 over PLAIN client authentication using a clientId and secret . This only affects the ability to obtain the token, and the content of the token, depending on the authorization server. It does not affect token validation rules by the listener. 4 (Optional) Audience checking . If your authorization server provides an aud (audience) claim, and you want to enforce an audience check, set oauth.check.audience to true . Audience checks identify the intended recipients of tokens. As a result, the Kafka broker will reject tokens that do not have its clientId in their aud claims. Default is false . 5 (Optional) An audience parameter passed to the token endpoint. An audience is used when obtaining an access token for inter-broker authentication. It is also used in the name of a client for OAuth 2.0 over PLAIN client authentication using a clientId and secret . This only affects the ability to obtain the token, and the content of the token, depending on the authorization server. It does not affect token validation rules by the listener. 6 The configured client ID of the Kafka broker, which is the same for all brokers. This is the client registered with the authorization server as kafka-broker . Required when an introspection endpoint is used for token validation, or when KeycloakAuthorizer is used. 7 The configured secret for the Kafka broker, which is the same for all brokers. When the broker must authenticate to the authorization server, either a client secret, access token or a refresh token has to be specified. 8 (Optional) The connect timeout in seconds when connecting to the authorization server. The default value is 60. 9 (Optional) The read timeout in seconds when connecting to the authorization server. The default value is 60. 10 The maximum number of times to retry a failed HTTP request to the authorization server. The default value is 0, meaning that no retries are performed. 
To use this option effectively, consider reducing the timeout times for the oauth.connect.timeout.seconds and oauth.read.timeout.seconds options. However, note that retries may prevent the current worker thread from being available to other requests, and if too many requests stall, it could make the Kafka broker unresponsive. 11 The time to wait before attempting another retry of a failed HTTP request to the authorization server. By default, this time is set to zero, meaning that no pause is applied. This is because many issues that cause failed requests are per-request network glitches or proxy issues that can be resolved quickly. However, if your authorization server is under stress or experiencing high traffic, you may want to set this option to a value of 100 ms or more to reduce the load on the server and increase the likelihood of successful retries. 12 A JsonPath query used to extract groups information from JWT token or introspection endpoint response. Not set by default. This can be used by a custom authorizer to make authorization decisions based on user groups. 13 A delimiter used to parse groups information when returned as a single delimited string. The default value is ',' (comma). 14 (Optional) Sets oauth.include.accept.header to false to remove the Accept header from requests. You can use this setting if including the header is causing issues when communicating with the authorization server. 15 If your authorization server does not provide an iss claim, it is not possible to perform an issuer check. In this situation, set oauth.check.issuer to false and do not specify a oauth.valid.issuer.uri . Default is true . 16 The prefix used when constructing the user ID. This only takes effect if oauth.username.claim is configured. 17 An authorization server may not provide a single attribute to identify both regular users and clients. When a client authenticates in its own name, the server might provide a client ID attribute. When a user authenticates using a username and password, to obtain a refresh token or an access token, the server might provide a username attribute in addition to a client ID. Use this fallback option to specify the username claim (attribute) to use if a primary user ID attribute is not available. If required, you can use a JsonPath expression like "['client.info'].['client.id']" to retrieve the fallback username from nested JSON attributes within a token. 18 In situations where oauth.fallback.username.claim is applicable, it may also be necessary to prevent name collisions between the values of the username claim, and those of the fallback username claim. Consider a situation where a client called producer exists, but also a regular user called producer exists. In order to differentiate between the two, you can use this property to add a prefix to the user ID of the client. 19 (Only applicable when using oauth.introspection.endpoint.uri ) Depending on the authorization server you are using, the introspection endpoint may or may not return the token type attribute, or it may contain different values. You can specify a valid token type value that the response from the introspection endpoint has to contain. 20 (Only applicable when using oauth.introspection.endpoint.uri ) The authorization server may be configured or implemented in such a way to not provide any identifiable information in an introspection endpoint response. In order to obtain the user ID, you can configure the URI of the userinfo endpoint as a fallback. 
The oauth.username.claim , oauth.username.prefix , oauth.fallback.username.claim , and oauth.fallback.username.prefix settings are also applied to the response of the userinfo endpoint. Configuring listeners for inter-broker communication The following example uses the OAUTHBEARER mechanism for fast token validation in a minimum configuration where inter-broker communication goes through the same listener as application clients. The oauth.client.id , oauth.client.secret , and oauth.token.endpoint.uri properties relate to inter-broker communication. Example inter-broker configuration using the OAUTHBEARER mechanism sasl.enabled.mechanisms=OAUTHBEARER listeners=CLIENT://0.0.0.0:9092 listener.security.protocol.map=CLIENT:SASL_PLAINTEXT listener.name.client.sasl.enabled.mechanisms=OAUTHBEARER sasl.mechanism.inter.broker.protocol=OAUTHBEARER inter.broker.listener.name=CLIENT listener.name.client.oauthbearer.sasl.server.callback.handler.class=io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ 1 oauth.valid.issuer.uri="https://<auth_server_address>/<issuer-context>" \ oauth.jwks.endpoint.uri="https://<oauth_server_address>/<path_to_jwks_endpoint>" \ oauth.username.claim="preferred_username" \ oauth.client.id="kafka-broker" \ 2 oauth.client.secret="kafka-secret" \ 3 oauth.token.endpoint.uri="https://<oauth_server_address>/<token_endpoint>" ; 4 listener.name.client.oauthbearer.sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler 5 listener.name.client.oauthbearer.connections.max.reauth.ms=3600000 1 Configures authentication settings for client and inter-broker communication. 2 Client ID of the Kafka broker, which is the same for all brokers. This is the client registered with the authorization server as kafka-broker . 3 Secret for the Kafka broker, which is the same for all brokers. 4 The OAuth 2.0 token endpoint URL to your authorization server. For production, always use https:// urls. 5 Enables (and is only required for) OAuth 2.0 authentication for inter-broker communication. The following example shows a minimum configuration for a TLS listener used for inter-broker communication. 
Example inter-broker configuration with TLS sasl.enabled.mechanisms=OAUTHBEARER listeners=REPLICATION://kafka:9091,CLIENT://kafka:9092 1 listener.security.protocol.map=REPLICATION:SSL,CLIENT:SASL_PLAINTEXT 2 listener.name.client.sasl.enabled.mechanisms=OAUTHBEARER inter.broker.listener.name=REPLICATION listener.name.replication.ssl.keystore.password=<keystore_password> 3 listener.name.replication.ssl.truststore.password=<truststore_password> listener.name.replication.ssl.keystore.type=JKS listener.name.replication.ssl.truststore.type=JKS listener.name.replication.ssl.secure.random.implementation=SHA1PRNG 4 listener.name.replication.ssl.endpoint.identification.algorithm=HTTPS 5 listener.name.replication.ssl.keystore.location=<path_to_keystore> 6 listener.name.replication.ssl.truststore.location=<path_to_truststore> 7 listener.name.replication.ssl.client.auth=required 8 listener.name.client.oauthbearer.sasl.server.callback.handler.class=io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ oauth.valid.issuer.uri="https://<auth_server_address>/<issuer-context>" \ oauth.jwks.endpoint.uri="https://<oauth_server_address>/<path_to_jwks_endpoint>" \ oauth.username.claim="preferred_username" ; 1 Separate configurations are required for inter-broker communication and client applications. 2 Configures the REPLICATION listener to use TLS, and the CLIENT listener to use SASL over an unencrypted channel. The client could use an encrypted channel ( SASL_SSL ) in a production environment. 3 The ssl. properties define the TLS configuration. 4 Random number generator implementation. If not set, the Java platform SDK default is used. 5 Hostname verification. If set to an empty string, the hostname verification is turned off. If not set, the default value is HTTPS , which enforces hostname verification for server certificates. 6 Path to the keystore for the listener. 7 Path to the truststore for the listener. 8 Specifies that clients of the REPLICATION listener have to authenticate with a client certificate when establishing a TLS connection (used for inter-broker connectivity). The following example uses the PLAIN mechanism for fast token validation in a minimum configuration where inter-broker communication goes through the same listener as application clients. 
Example inter-broker configuration using the PLAIN mechanism listeners=CLIENT://0.0.0.0:9092 listener.security.protocol.map=CLIENT:SASL_PLAINTEXT listener.name.client.sasl.enabled.mechanisms=OAUTHBEARER,PLAIN sasl.mechanism.inter.broker.protocol=OAUTHBEARER inter.broker.listener.name=CLIENT listener.name.client.oauthbearer.sasl.server.callback.handler.class=io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ oauth.valid.issuer.uri="https://<auth_server_address>/<issuer-context>" \ oauth.jwks.endpoint.uri="https://<auth_server>/<path_to_jwks_endpoint>" \ oauth.username.claim="preferred_username" \ oauth.client.id="kafka-broker" \ oauth.client.secret="kafka-secret" \ oauth.token.endpoint.uri="https://<oauth_server_address>/<token_endpoint>" ; listener.name.client.oauthbearer.sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler 1 listener.name.client.plain.sasl.server.callback.handler.class=io.strimzi.kafka.oauth.server.plain.JaasServerOauthOverPlainValidatorCallbackHandler 2 listener.name.client.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \ 3 oauth.valid.issuer.uri="https://<auth_server_address>/<issuer-context>" \ oauth.jwks.endpoint.uri="https://<oauth_server_address>/<path_to_jwks_endpoint>" \ oauth.username.claim="preferred_username" \ oauth.token.endpoint.uri="https://<oauth_server_address>/<token_endpoint>" ; 4 listener.name.client.oauthbearer.connections.max.reauth.ms=3600000 1 Enables OAuth 2.0 authentication for inter-broker communication. 2 Configures the server callback handler for PLAIN authentication. 3 Configures authentication settings for client communication using PLAIN authentication. oauth.token.endpoint.uri is an optional property that enables OAuth 2.0 over PLAIN using the OAuth 2.0 client credentials mechanism . 4 The OAuth 2.0 token endpoint URL to your authorization server. If specified, clients can authenticate over PLAIN by passing an access token as the password using a $accessToken: prefix. 7.2.2. Configuring OAuth 2.0 on client applications To configure OAuth 2.0 on client applications, you must specify the following: SASL (Simple Authentication and Security Layer) security protocols SASL mechanisms A JAAS (Java Authentication and Authorization Service) module Authentication properties to access the authorization server Configuring SASL protocols Specify SASL protocols in the client configuration: SASL_SSL for authentication over TLS encrypted connections SASL_PLAINTEXT for authentication over unencrypted connections Use SASL_SSL for production and SASL_PLAINTEXT for local development only. When using SASL_SSL , additional ssl.truststore configuration is needed. The truststore configuration is required for secure connection ( https:// ) to the OAuth 2.0 authorization server. To verify the OAuth 2.0 authorization server, add the CA certificate for the authorization server to the truststore in your client configuration. You can configure a truststore in PEM or PKCS #12 format. 
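If you do not already have a PKCS #12 truststore containing the authorization server's CA certificate, you can create one with the keytool utility or programmatically. The following Java fragment is a minimal sketch only: the certificate path /tmp/sso-ca.crt, the output path /tmp/oauth-truststore.p12, the alias, and the changeit password are placeholder assumptions that you would replace with your own values.

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.security.KeyStore;
import java.security.cert.Certificate;
import java.security.cert.CertificateFactory;

public class CreateOAuthTruststore {
    public static void main(String[] args) throws Exception {
        // Read the authorization server's CA certificate (path is an assumption)
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        Certificate caCert;
        try (FileInputStream in = new FileInputStream("/tmp/sso-ca.crt")) {
            caCert = cf.generateCertificate(in);
        }
        // Create an empty PKCS #12 truststore and add the CA certificate to it
        KeyStore truststore = KeyStore.getInstance("PKCS12");
        truststore.load(null, null);
        truststore.setCertificateEntry("oauth-server-ca", caCert);
        // Write the file referenced by the ssl.truststore.* and oauth.ssl.truststore.* options
        try (FileOutputStream out = new FileOutputStream("/tmp/oauth-truststore.p12")) {
            truststore.store(out, "changeit".toCharArray());
        }
    }
}

The resulting file can then be referenced from the ssl.truststore.location and oauth.ssl.truststore.location properties shown in the client configuration examples later in this section.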
Configuring SASL authentication mechanisms Specify SASL mechanisms in the client configuration: OAUTHBEARER for credentials exchange using a bearer token PLAIN to pass client credentials (clientId + secret) or an access token Configuring a JAAS module Specify a JAAS module that implements the SASL authentication mechanism as a sasl.jaas.config property value: org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule implements the OAUTHBEARER mechanism org.apache.kafka.common.security.plain.PlainLoginModule implements the PLAIN mechanism Note For the OAUTHBEARER mechanism, Streams for Apache Kafka provides a callback handler for clients that use Kafka Client Java libraries to enable credentials exchange. For clients in other languages, custom code may be required to obtain the access token. For the PLAIN mechanism, Streams for Apache Kafka provides server-side callbacks to enable credentials exchange. To be able to use the OAUTHBEARER mechanism, you must also add the custom io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler class as the callback handler. JaasClientOauthLoginCallbackHandler handles OAuth callbacks to the authorization server for access tokens during client login. This enables automatic token renewal, ensuring continuous authentication without user intervention. Additionally, it handles login credentials for clients using the OAuth 2.0 password grant method. Configuring authentication properties Configure the client to use credentials or access tokens for OAuth 2.0 authentication. Using client credentials Using client credentials involves configuring the client with the necessary credentials (client ID and secret, or client ID and client assertion) to obtain a valid access token from an authorization server. This is the simplest mechanism. Using access tokens Using access tokens, the client is configured with a valid long-lived access token or refresh token obtained from an authorization server. Using access tokens adds more complexity because there is an additional dependency on authorization server tools. If you are using long-lived access tokens, you may need to configure the client in the authorization server to increase the maximum lifetime of the token. The only information ever sent to Kafka is the access token. The credentials used to obtain the token are never sent to Kafka. When a client obtains an access token, no further communication with the authorization server is needed. SASL authentication properties support the following authentication methods: OAuth 2.0 client credentials Access token or Service account token Refresh token OAuth 2.0 password grant (deprecated) Add the authentication properties as JAAS configuration ( sasl.jaas.config and sasl.login.callback.handler.class ). If the client application is not configured with an access token directly, the client exchanges one of the following sets of credentials for an access token during Kafka session initiation: Client ID and secret Client ID and client assertion Client ID, refresh token, and (optionally) a secret Username and password, with client ID and (optionally) a secret Note You can also specify authentication properties as environment variables, or as Java system properties. For Java system properties, you can set them using setProperty and pass them on the command line using the -D option. 
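As a minimal sketch of the alternatives described above, the following Java fragment builds the client configuration programmatically: one set of OAuth options is supplied as Java system properties (the equivalent of passing them with the -D option on the command line), and the remaining options are assembled into the sasl.jaas.config value for the client credentials method. The token endpoint URL, client ID, client secret, truststore path, and changeit password are placeholders, not values taken from the product.

import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.common.config.SaslConfigs;

public class OAuthClientConfig {
    public static Properties build() {
        // Equivalent to -Doauth.ssl.truststore.location=... and related -D options on the command line
        System.setProperty("oauth.ssl.truststore.location", "/tmp/oauth-truststore.p12");
        System.setProperty("oauth.ssl.truststore.password", "changeit");
        System.setProperty("oauth.ssl.truststore.type", "PKCS12");

        Properties props = new Properties();
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
        props.put(SaslConfigs.SASL_MECHANISM, "OAUTHBEARER");
        // JAAS configuration for the client credentials method; values in angle brackets are placeholders
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
            "org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required"
            + " oauth.token.endpoint.uri=\"https://<auth_server_address>/<path_to_token_endpoint>\""
            + " oauth.client.id=\"<client_id>\""
            + " oauth.client.secret=\"<client_secret>\" ;");
        props.put(SaslConfigs.SASL_LOGIN_CALLBACK_HANDLER_CLASS,
            "io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler");
        return props;
    }
}

The ssl.truststore.* properties for the connection to the Kafka cluster itself would still be added, as in the properties file examples that follow.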
Example client credentials configuration using the client secret security.protocol=SASL_SSL 1 sasl.mechanism=OAUTHBEARER 2 ssl.truststore.location=/tmp/truststore.p12 3 ssl.truststore.password=$STOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ oauth.token.endpoint.uri="<token_endpoint_url>" \ 4 oauth.client.id="<client_id>" \ 5 oauth.client.secret="<client_secret>" \ 6 oauth.ssl.truststore.location="/tmp/oauth-truststore.p12" \ 7 oauth.ssl.truststore.password="$STOREPASS" \ 8 oauth.ssl.truststore.type="PKCS12" \ 9 oauth.scope="<scope>" \ 10 oauth.audience="<audience>" ; 11 sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler 1 SASL_SSL security protocol for TLS-encrypted connections. Use SASL_PLAINTEXT over unencrypted connections for local development only. 2 The SASL mechanism specified as OAUTHBEARER or PLAIN . 3 The truststore configuration for secure access to the Kafka cluster. 4 URI of the authorization server token endpoint. 5 Client ID, which is the name used when creating the client in the authorization server. 6 Client secret created when creating the client in the authorization server. 7 The location contains the public key certificate ( truststore.p12 ) for the authorization server. 8 The password for accessing the truststore. 9 The truststore type. 10 (Optional) The scope for requesting the token from the token endpoint. An authorization server may require a client to specify the scope. 11 (Optional) The audience for requesting the token from the token endpoint. An authorization server may require a client to specify the audience. Example client credentials configuration using the client assertion security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=$STOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ oauth.token.endpoint.uri="<token_endpoint_url>" \ oauth.client.id="<client_id>" \ oauth.client.assertion.location="<path_to_client_assertion_token_file>" \ 1 oauth.client.assertion.type="urn:ietf:params:oauth:client-assertion-type:jwt-bearer" \ 2 oauth.ssl.truststore.location="/tmp/oauth-truststore.p12" \ oauth.ssl.truststore.password="$STOREPASS" \ oauth.ssl.truststore.type="PKCS12" \ oauth.scope="<scope>" \ oauth.audience="<audience>" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler 1 Path to the client assertion file used for authenticating the client. This file is a private key file as an alternative to the client secret. Alternatively, use the oauth.client.assertion option to specify the client assertion value in clear text. 2 (Optional) Sometimes you may need to specify the client assertion type. If not specified, the default value is urn:ietf:params:oauth:client-assertion-type:jwt-bearer . 
Example password grants configuration security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=$STOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ oauth.token.endpoint.uri="<token_endpoint_url>" \ oauth.client.id="<client_id>" \ 1 oauth.client.secret="<client_secret>" \ 2 oauth.password.grant.username="<username>" \ 3 oauth.password.grant.password="<password>" \ 4 oauth.ssl.truststore.location="/tmp/oauth-truststore.p12" \ oauth.ssl.truststore.password="$STOREPASS" \ oauth.ssl.truststore.type="PKCS12" \ oauth.scope="<scope>" \ oauth.audience="<audience>" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler 1 Client ID, which is the name used when creating the client in the authorization server. 2 (Optional) Client secret created when creating the client in the authorization server. 3 Username for password grant authentication. OAuth password grant configuration (username and password) uses the OAuth 2.0 password grant method. To use password grants, create a user account for a client on your authorization server with limited permissions. The account should act like a service account. Use in environments where user accounts are required for authentication, but consider using a refresh token first. 4 Password for password grant authentication. Note SASL PLAIN does not support passing a username and password (password grants) using the OAuth 2.0 password grant method. Example access token configuration security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=$STOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ oauth.access.token="<access_token>" ; 1 sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler 1 Long-lived access token for Kafka clients. Alternatively, oauth.access.token.location can be used to specify the file that contains the access token.
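Whichever of these example configurations you use, including the service account and refresh token examples that follow, the resulting properties file is consumed by a producer or consumer in the same way. The following is a minimal producer sketch, assuming a client.properties file containing one of the configurations in this section, a placeholder bootstrap address, and a topic named my-topic that the authenticated principal is authorized to write to.

import java.io.FileReader;
import java.nio.charset.StandardCharsets;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class OAuthProducerExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // client.properties holds one of the OAuth client configurations shown in this section
        try (FileReader reader = new FileReader("client.properties", StandardCharsets.UTF_8)) {
            props.load(reader);
        }
        props.put("bootstrap.servers", "<kafka_bootstrap_address>:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The first send triggers authentication against the OAuth-enabled listener
            producer.send(new ProducerRecord<>("my-topic", "key", "hello")).get();
        }
    }
}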
Example OpenShift service account token configuration security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=$STOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ oauth.access.token.location="/var/run/secrets/kubernetes.io/serviceaccount/token"; 1 sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler 1 Location of the service account token on the filesystem (assuming that the client is deployed as an OpenShift pod) Example refresh token configuration security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=$STOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ oauth.token.endpoint.uri="<token_endpoint_url>" \ oauth.client.id="<client_id>" \ 1 oauth.client.secret="<client_secret>" \ 2 oauth.refresh.token="<refresh_token>" \ 3 oauth.ssl.truststore.location="/tmp/oauth-truststore.p12" \ oauth.ssl.truststore.password="$STOREPASS" \ oauth.ssl.truststore.type="PKCS12" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler 1 Client ID, which is the name used when creating the client in the authorization server. 2 (Optional) Client secret created when creating the client in the authorization server. 3 Long-lived refresh token for Kafka clients. SASL extensions for custom OAUTHBEARER implementations If your Kafka broker uses a custom OAUTHBEARER implementation, you may need to pass additional SASL extension options. These extensions can include attributes or information required as client context by the authorization server. The options are passed as key-value pairs and are sent to the Kafka broker when a new session is started. Pass SASL extension values using oauth.sasl.extension. as a key prefix. Example configuration to pass SASL extension values oauth.sasl.extension.key1="value1" oauth.sasl.extension.key2="value2" 7.2.3. OAuth 2.0 client authentication flows OAuth 2.0 authentication flows depend on the underlying Kafka client and Kafka broker configuration. The flows must also be supported by the authorization server used. The Kafka broker listener configuration determines how clients authenticate using an access token. The client can pass a client ID and secret to request an access token. If a listener is configured to use PLAIN authentication, the client can authenticate with a client ID and secret or username and access token. These values are passed as the username and password properties of the PLAIN mechanism. Listener configuration supports the following token validation options: You can use fast local token validation based on JWT signature checking and local token introspection, without contacting an authorization server. The authorization server provides a JWKS endpoint with public certificates that are used to validate signatures on the tokens. You can use a call to a token introspection endpoint provided by an authorization server. Each time a new Kafka broker connection is established, the broker passes the access token received from the client to the authorization server. The Kafka broker checks the response to confirm whether the token is valid. Note An authorization server might only allow the use of opaque access tokens, which means that local token validation is not possible. 
Kafka client credentials can also be configured for the following types of authentication: Direct local access using a previously generated long-lived access token Contact with the authorization server for a new access token to be issued (using a client ID and credentials, or a refresh token, or a username and a password) 7.2.3.1. Example client authentication flows using the SASL OAUTHBEARER mechanism You can use the following communication flows for Kafka authentication using the SASL OAUTHBEARER mechanism. Client using client ID and credentials, with broker delegating validation to authorization server The Kafka client requests an access token from the authorization server using a client ID and credentials, and optionally a refresh token. Alternatively, the client may authenticate using a username and a password. The authorization server generates a new access token. The Kafka client authenticates with the Kafka broker using the SASL OAUTHBEARER mechanism to pass the access token. The Kafka broker validates the access token by calling a token introspection endpoint on the authorization server using its own client ID and secret. A Kafka client session is established if the token is valid. Client using client ID and credentials, with broker performing fast local token validation The Kafka client authenticates with the authorization server from the token endpoint, using a client ID and credentials, and optionally a refresh token. Alternatively, the client may authenticate using a username and a password. The authorization server generates a new access token. The Kafka client authenticates with the Kafka broker using the SASL OAUTHBEARER mechanism to pass the access token. The Kafka broker validates the access token locally using a JWT token signature check, and local token introspection. Client using long-lived access token, with broker delegating validation to authorization server The Kafka client authenticates with the Kafka broker using the SASL OAUTHBEARER mechanism to pass the long-lived access token. The Kafka broker validates the access token by calling a token introspection endpoint on the authorization server, using its own client ID and secret. A Kafka client session is established if the token is valid. Client using long-lived access token, with broker performing fast local validation The Kafka client authenticates with the Kafka broker using the SASL OAUTHBEARER mechanism to pass the long-lived access token. The Kafka broker validates the access token locally using a JWT token signature check and local token introspection. Warning Fast local JWT token signature validation is suitable only for short-lived tokens as there is no check with the authorization server if a token has been revoked. Token expiration is written into the token, but revocation can happen at any time, so cannot be accounted for without contacting the authorization server. Any issued token would be considered valid until it expires. 7.2.3.2. Example client authentication flows using the SASL PLAIN mechanism You can use the following communication flows for Kafka authentication using the OAuth PLAIN mechanism. Client using a client ID and secret, with the broker obtaining the access token for the client The Kafka client passes a clientId as a username and a secret as a password. The Kafka broker uses a token endpoint to pass the clientId and secret to the authorization server. The authorization server returns a fresh access token or an error if the client credentials are not valid. 
The Kafka broker validates the token in one of the following ways: If a token introspection endpoint is specified, the Kafka broker validates the access token by calling the endpoint on the authorization server. A session is established if the token validation is successful. If local token introspection is used, a request is not made to the authorization server. The Kafka broker validates the access token locally using a JWT token signature check. Client using a long-lived access token without a client ID and secret The Kafka client passes a username and password. The password provides the value of an access token that was obtained manually and configured before running the client. The password is passed with or without a $accessToken: string prefix depending on whether or not the Kafka broker listener is configured with a token endpoint for authentication. If the token endpoint is configured, the password should be prefixed by $accessToken: to let the broker know that the password parameter contains an access token rather than a client secret. The Kafka broker interprets the username as the account username. If the token endpoint is not configured on the Kafka broker listener (enforcing a no-client-credentials mode ), the password should provide the access token without the prefix. The Kafka broker interprets the username as the account username. In this mode, the client doesn't use a client ID and secret, and the password parameter is always interpreted as a raw access token. The Kafka broker validates the token in one of the following ways: If a token introspection endpoint is specified, the Kafka broker validates the access token by calling the endpoint on the authorization server. A session is established if token validation is successful. If local token introspection is used, there is no request made to the authorization server. The Kafka broker validates the access token locally using a JWT token signature check. 7.2.4. Re-authenticating sessions You can configure OAuth listeners to use Kafka session re-authentication for OAuth 2.0 sessions between Kafka clients and Kafka brokers. This mechanism enforces the expiry of an authenticated session between the client and the broker after a defined period of time. When a session expires, the client immediately starts a new session by reusing the existing connection rather than dropping it. Session re-authentication is disabled by default. To enable it, set a time value for the connections.max.reauth.ms property in the server.properties file. For an example configuration, see Section 7.2.1, "Configuring OAuth 2.0 authentication on listeners" . Session re-authentication must be supported by the Kafka client libraries used by the client. Session re-authentication can be used with fast local JWT or introspection endpoint token validation. Client re-authentication When the broker's authenticated session expires, the client must re-authenticate to the existing session by sending a new, valid access token to the broker, without dropping the connection. If token validation is successful, a new client session is started using the existing connection. If the client fails to re-authenticate, the broker will close the connection if further attempts are made to send or receive messages. Java clients that use Kafka client library 2.2 or later automatically re-authenticate if the re-authentication mechanism is enabled on the broker. Session re-authentication also applies to refresh tokens, if used. 
When the session expires, the client refreshes the access token by using its refresh token. The client then uses the new access token to re-authenticate over the existing connection. Session expiry for OAUTHBEARER and PLAIN When session re-authentication is configured, session expiry works differently for OAUTHBEARER and PLAIN authentication. For OAUTHBEARER and PLAIN, using the client ID and secret method: The broker's authenticated session will expire at the configured connections.max.reauth.ms . The session will expire earlier if the access token expires before the configured time. For PLAIN using the long-lived access token method: The broker's authenticated session will expire at the configured connections.max.reauth.ms . Re-authentication will fail if the access token expires before the configured time. Although session re-authentication is attempted, PLAIN has no mechanism for refreshing tokens. If connections.max.reauth.ms is not configured, OAUTHBEARER and PLAIN clients can remain connected to brokers indefinitely, without needing to re-authenticate. Authenticated sessions do not end with access token expiry. However, this can be considered when configuring authorization, for example, by using Keycloak authorization or installing a custom authorizer. 7.2.5. Example: Enabling OAuth 2.0 authentication This example shows how to configure client access to a Kafka cluster using OAuth 2.0 authentication. The procedures describe the configuration required to set up OAuth 2.0 authentication on Kafka listeners and Kafka Java clients. 7.2.5.1. Configuring OAuth 2.0 support for Kafka brokers This procedure describes how to configure Kafka brokers so that the broker listeners are enabled to use OAuth 2.0 authentication using an authorization server. We advise use of OAuth 2.0 over an encrypted interface through configuration of TLS listeners. Plain listeners are not recommended. Configure the Kafka brokers using properties that support your chosen authorization server, and the type of authorization you are implementing. Prerequisites Streams for Apache Kafka is installed on each host , and the configuration files are available. An OAuth 2.0 authorization server is deployed. Procedure Configure the Kafka broker listener configuration in the server.properties file. For example, using the OAUTHBEARER mechanism: sasl.enabled.mechanisms=OAUTHBEARER listeners=CLIENT://0.0.0.0:9092 listener.security.protocol.map=CLIENT:SASL_PLAINTEXT listener.name.client.sasl.enabled.mechanisms=OAUTHBEARER sasl.mechanism.inter.broker.protocol=OAUTHBEARER inter.broker.listener.name=CLIENT listener.name.client.oauthbearer.sasl.server.callback.handler.class=io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required ; listener.name.client.oauthbearer.sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler Configure broker connection settings as part of the listener.name.client.oauthbearer.sasl.jaas.config . Configuring fast local JWT token validation Configuring token validation using an introspection endpoint Including additional configuration options If required, configure access to the authorization server. This step is normally required for a production environment, unless a technology like service mesh is used to configure secure channels outside containers. Provide a custom truststore for connecting to a secured authorization server. 
SSL is always required for access to the authorization server. Set properties to configure the truststore. For example: listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ # ... oauth.client.id="kafka-broker" \ oauth.client.secret="kafka-broker-secret" \ oauth.ssl.truststore.location="<path_to_truststore_p12_file>" \ oauth.ssl.truststore.password="<truststore_password>" \ oauth.ssl.truststore.type="PKCS12" ; If the certificate hostname does not match the access URL hostname, you can turn off certificate hostname validation: oauth.ssl.endpoint.identification.algorithm="" The check ensures that the client connection to the authorization server is authentic. You may wish to turn off the validation in a non-production environment. What to do next Configure your Kafka clients to use OAuth 2.0 7.2.5.2. Setting up OAuth 2.0 on Kafka Java clients Configure Kafka producer and consumer APIs to use OAuth 2.0 for interaction with Kafka brokers. Add a callback plugin to your client pom.xml file, then configure your client for OAuth 2.0. How you configure the authentication properties depends on the authentication method you are using to access the OAuth 2.0 authorization server. In this procedure, the properties are specified in a properties file, then loaded into the client configuration. Prerequisites Streams for Apache Kafka and Kafka are running An OAuth 2.0 authorization server is deployed and configured for OAuth access to Kafka brokers Kafka brokers are configured for OAuth 2.0 Procedure Add the client library with OAuth 2.0 support to the pom.xml file for the Kafka client: <dependency> <groupId>io.strimzi</groupId> <artifactId>kafka-oauth-client</artifactId> <version>0.15.0.redhat-00012</version> </dependency> Configure the client depending on the OAuth 2.0 authentication method: Example client credentials configuration using the client secret Example password grants configuration Example access token configuration Example refresh token configuration For example, specify the properties for the authentication method in a client.properties file. Input the client properties for OAuth 2.0 authentication into the Java client code. Example showing input of client properties Properties props = new Properties(); try (FileReader reader = new FileReader("client.properties", StandardCharsets.UTF_8)) { props.load(reader); } Verify that the Kafka client can access the Kafka brokers. 7.3. Using OAuth 2.0 token-based authorization Streams for Apache Kafka supports the use of OAuth 2.0 token-based authorization through Red Hat build of Keycloak Authorization Services , which lets you manage security policies and permissions centrally. Security policies and permissions defined in Red Hat build of Keycloak grant access to Kafka resources. Users and clients are matched against policies that permit access to perform specific actions on Kafka brokers. Kafka allows all users full access to brokers by default, but also provides the AclAuthorizer and StandardAuthorizer plugins to configure authorization based on Access Control Lists (ACLs). The ACL rules managed by these plugins are used to grant or deny access to resources based on username , and these rules are stored within the Kafka cluster itself. However, OAuth 2.0 token-based authorization with Red Hat build of Keycloak offers far greater flexibility on how you wish to implement access control to Kafka brokers.
In addition, you can configure your Kafka brokers to use OAuth 2.0 authorization and ACLs. 7.3.1. Example: Enabling OAuth 2.0 authorization This procedure describes how to configure Kafka brokers to use OAuth 2.0 authorization using Red Hat build of Keycloak Authorization Services. Red Hat build of Keycloak server Authorization Services REST endpoints extend token-based authentication with Red Hat build of Keycloak by applying defined security policies on a particular user, and providing a list of permissions granted on different resources for that user. Policies use roles and groups to match permissions to users. OAuth 2.0 authorization enforces permissions locally based on the received list of grants for the user from Red Hat build of Keycloak Authorization Services. A Red Hat build of Keycloak authorizer ( KeycloakAuthorizer ) is provided with Streams for Apache Kafka. The authorizer fetches a list of granted permissions from the authorization server as needed, and enforces authorization locally on Kafka, making rapid authorization decisions for each client request. Before you begin Consider the access you require or want to limit for certain users. You can use a combination of Red Hat build of Keycloak groups , roles , clients , and users to configure access in Red Hat build of Keycloak. Typically, groups are used to match users based on organizational departments or geographical locations, and roles are used to match users based on their function. With Red Hat build of Keycloak, you can store users and groups in LDAP, whereas clients and roles cannot be stored this way. Storage and access to user data may be a factor in how you choose to configure authorization policies. Note Super users always have unconstrained access to a Kafka broker regardless of the authorization implemented on the Kafka broker. Prerequisites Streams for Apache Kafka must be configured to use OAuth 2.0 with Red Hat build of Keycloak token-based authentication . You use the same Red Hat build of Keycloak endpoint when you set up authorization. You need to understand how to manage policies and permissions for Red Hat build of Keycloak Authorization Services, as described in the Red Hat build of Keycloak documentation . Procedure Access the Red Hat build of Keycloak Admin Console or use the Red Hat build of Keycloak Admin CLI to enable Authorization Services for the OAuth 2.0 client for Kafka you created when setting up OAuth 2.0 authentication. Use Authorization Services to define resources, authorization scopes, policies, and permissions for the client. Bind the permissions to users and clients by assigning them roles and groups. Configure the Kafka brokers to use Red Hat build of Keycloak authorization. Add the following to the Kafka server.properties configuration file to install the authorizer in Kafka: authorizer.class.name=io.strimzi.kafka.oauth.server.authorizer.KeycloakAuthorizer principal.builder.class=io.strimzi.kafka.oauth.server.OAuthKafkaPrincipalBuilder Add configuration for the Kafka brokers to access the authorization server and Authorization Services. Here we show example configuration added as additional properties to server.properties , but you can also define them as environment variables using capitalized or upper-case naming conventions. strimzi.authorization.token.endpoint.uri="https://<auth_server_address>/auth/realms/REALM-NAME/protocol/openid-connect/token" 1 strimzi.authorization.client.id="kafka" 2 1 The OAuth 2.0 token endpoint URL to Red Hat build of Keycloak.
For production, always use https:// URLs. 2 The client ID of the OAuth 2.0 client definition in Red Hat build of Keycloak that has Authorization Services enabled. Typically, kafka is used as the ID. (Optional) Add configuration for specific Kafka clusters. For example: strimzi.authorization.kafka.cluster.name="kafka-cluster" 1 1 The name of a specific Kafka cluster. Names are used to target permissions, making it possible to manage multiple clusters within the same Red Hat build of Keycloak realm. The default value is kafka-cluster . (Optional) Delegate to simple authorization: strimzi.authorization.delegate.to.kafka.acl="true" 1 1 Delegate authorization to Kafka AclAuthorizer if access is denied by Red Hat build of Keycloak Authorization Services policies. The default is false . (Optional) Add configuration for TLS connection to the authorization server. For example: strimzi.authorization.ssl.truststore.location=<path_to_truststore> 1 strimzi.authorization.ssl.truststore.password=<my_truststore_password> 2 strimzi.authorization.ssl.truststore.type=JKS 3 strimzi.authorization.ssl.secure.random.implementation=SHA1PRNG 4 strimzi.authorization.ssl.endpoint.identification.algorithm=HTTPS 5 1 The path to the truststore that contains the certificates. 2 The password for the truststore. 3 The truststore type. If not set, the default Java keystore type is used. 4 Random number generator implementation. If not set, the Java platform SDK default is used. 5 Hostname verification. If set to an empty string, hostname verification is turned off. If not set, the default value is HTTPS , which enforces hostname verification for server certificates. (Optional) Configure the refresh of grants from the authorization server. The grants refresh job works by enumerating the active tokens and requesting the latest grants for each. For example: strimzi.authorization.grants.refresh.period.seconds="120" 1 strimzi.authorization.grants.refresh.pool.size="10" 2 strimzi.authorization.grants.max.idle.time.seconds="300" 3 strimzi.authorization.grants.gc.period.seconds="300" 4 strimzi.authorization.reuse.grants="false" 5 1 Specifies how often the list of grants from the authorization server is refreshed (once per minute by default). To turn grants refresh off for debugging purposes, set to "0" . 2 Specifies the size of the thread pool (the degree of parallelism) used by the grants refresh job. The default value is "5" . 3 The time, in seconds, after which an idle grant in the cache can be evicted. The default value is 300. 4 The time, in seconds, between consecutive runs of a job that cleans stale grants from the cache. The default value is 300. 5 Controls whether the latest grants are fetched for a new session. When disabled, grants are retrieved from Red Hat build of Keycloak and cached for the user. The default value is true . (Optional) Configure network timeouts when communicating with the authorization server. For example: strimzi.authorization.connect.timeout.seconds="60" 1 strimzi.authorization.read.timeout.seconds="60" 2 strimzi.authorization.http.retries="2" 3 1 The connect timeout in seconds when connecting to the Red Hat build of Keycloak token endpoint. The default value is 60 . 2 The read timeout in seconds when connecting to the Red Hat build of Keycloak token endpoint. The default value is 60 . 3 The maximum number of times to retry (without pausing) a failed HTTP request to the authorization server. The default value is 0 , meaning that no retries are performed.
To use this option effectively, consider reducing the timeouts for the strimzi.authorization.connect.timeout.seconds and strimzi.authorization.read.timeout.seconds options. However, note that retries may prevent the current worker thread from being available to other requests, and if too many requests stall, it could make Kafka unresponsive. (Optional) Enable OAuth 2.0 metrics for token validation and authorization: oauth.enable.metrics="true" 1 1 Controls whether to enable or disable OAuth metrics. The default value is false . (Optional) Remove the Accept header from requests: oauth.include.accept.header="false" 1 1 Set to false if including the header is causing issues when communicating with the authorization server. The default value is true . Verify the configured permissions by accessing Kafka brokers as clients or users with specific roles, ensuring that they have the necessary access and are denied access that they should not have.
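For example, a quick check from the command line might use the standard Kafka CLI tools with a client configuration that authenticates through OAuth 2.0, as described earlier for Java clients. This is a minimal sketch; the file name client.properties, the broker address, and the topic name are assumptions for illustration:
# List topics as an OAuth-authenticated client; only topics the user is authorized to see should be returned
bin/kafka-topics.sh --bootstrap-server <broker_host>:9092 --command-config client.properties --list
# Try to produce to a topic the user is not authorized for; the broker should reject the request
bin/kafka-console-producer.sh --bootstrap-server <broker_host>:9092 --topic <restricted_topic> --producer.config client.properties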
[ "sasl.enabled.mechanisms=OAUTHBEARER 1 listeners=CLIENT://0.0.0.0:9092 2 listener.security.protocol.map=CLIENT:SASL_PLAINTEXT 3 listener.name.client.sasl.enabled.mechanisms=OAUTHBEARER 4 sasl.mechanism.inter.broker.protocol=OAUTHBEARER 5 inter.broker.listener.name=CLIENT 6 listener.name.client.oauthbearer.sasl.server.callback.handler.class=io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler 7 listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule #", "listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \\ 1 oauth.valid.issuer.uri=\"https://<auth_server_address>/<issuer-context>\" \\ 2 oauth.jwks.endpoint.uri=\"https://<oauth_server_address>/<path_to_jwks_endpoint>\" \\ 3 oauth.jwks.refresh.seconds=\"300\" \\ 4 oauth.jwks.refresh.min.pause.seconds=\"1\" \\ 5 oauth.jwks.expiry.seconds=\"360\" \\ 6 oauth.username.claim=\"preferred_username\" \\ 7 oauth.ssl.truststore.location=\"<path_to_truststore_p12_file>\" \\ 8 oauth.ssl.truststore.password=\"<truststore_password>\" \\ 9 oauth.ssl.truststore.type=\"PKCS12\" ; 10 listener.name.client.oauthbearer.connections.max.reauth.ms=3600000 11", "listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.introspection.endpoint.uri=\"https://<oauth_server_address>/<introspection_endpoint>\" \\ 1 oauth.client.id=\"kafka-broker\" \\ 2 oauth.client.secret=\"kafka-broker-secret\" \\ 3 oauth.ssl.truststore.location=\"<path_to_truststore_p12_file>\" \\ 4 oauth.ssl.truststore.password=\"<truststore_password>\" \\ 5 oauth.ssl.truststore.type=\"PKCS12\" \\ 6 oauth.username.claim=\"preferred_username\" ; 7", "listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required # oauth.token.endpoint.uri=\"https://<auth_server_address>/<path_to_token_endpoint>\" \\ 1 oauth.custom.claim.check=\"@.custom == 'custom-value'\" \\ 2 oauth.scope=\"<scope>\" \\ 3 oauth.check.audience=\"true\" \\ 4 oauth.audience=\"<audience>\" \\ 5 oauth.client.id=\"kafka-broker\" \\ 6 oauth.client.secret=\"kafka-broker-secret\" \\ 7 oauth.connect.timeout.seconds=60 \\ 8 oauth.read.timeout.seconds=60 \\ 9 oauth.http.retries=2 \\ 10 oauth.http.retry.pause.millis=300 \\ 11 oauth.groups.claim=\"USD.groups\" \\ 12 oauth.groups.claim.delimiter=\",\" \\ 13 oauth.include.accept.header=\"false\" ; 14 oauth.check.issuer=false \\ 15 oauth.username.prefix=\"user-account-\" \\ 16 oauth.fallback.username.claim=\"client_id\" \\ 17 oauth.fallback.username.prefix=\"service-account-\" \\ 18 oauth.valid.token.type=\"bearer\" \\ 19 oauth.userinfo.endpoint.uri=\"https://<auth_server_address>/<path_to_userinfo_endpoint>\" ; 20", "sasl.enabled.mechanisms=OAUTHBEARER listeners=CLIENT://0.0.0.0:9092 listener.security.protocol.map=CLIENT:SASL_PLAINTEXT listener.name.client.sasl.enabled.mechanisms=OAUTHBEARER sasl.mechanism.inter.broker.protocol=OAUTHBEARER inter.broker.listener.name=CLIENT listener.name.client.oauthbearer.sasl.server.callback.handler.class=io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \\ 1 oauth.valid.issuer.uri=\"https://<auth_server_address>/<issuer-context>\" oauth.jwks.endpoint.uri=\"https://<oauth_server_address>/<path_to_jwks_endpoint>\" 
oauth.username.claim=\"preferred_username\" oauth.client.id=\"kafka-broker\" \\ 2 oauth.client.secret=\"kafka-secret\" \\ 3 oauth.token.endpoint.uri=\"https://<oauth_server_address>/<token_endpoint>\" ; 4 listener.name.client.oauthbearer.sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler 5 listener.name.client.oauthbearer.connections.max.reauth.ms=3600000", "sasl.enabled.mechanisms=OAUTHBEARER listeners=REPLICATION://kafka:9091,CLIENT://kafka:9092 1 listener.security.protocol.map=REPLICATION:SSL,CLIENT:SASL_PLAINTEXT 2 listener.name.client.sasl.enabled.mechanisms=OAUTHBEARER inter.broker.listener.name=REPLICATION listener.name.replication.ssl.keystore.password=<keystore_password> 3 listener.name.replication.ssl.truststore.password=<truststore_password> listener.name.replication.ssl.keystore.type=JKS listener.name.replication.ssl.truststore.type=JKS listener.name.replication.ssl.secure.random.implementation=SHA1PRNG 4 listener.name.replication.ssl.endpoint.identification.algorithm=HTTPS 5 listener.name.replication.ssl.keystore.location=<path_to_keystore> 6 listener.name.replication.ssl.truststore.location=<path_to_truststore> 7 listener.name.replication.ssl.client.auth=required 8 listener.name.client.oauthbearer.sasl.server.callback.handler.class=io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.valid.issuer.uri=\"https://<auth_server_address>/<issuer-context>\" oauth.jwks.endpoint.uri=\"https://<oauth_server_address>/<path_to_jwks_endpoint>\" oauth.username.claim=\"preferred_username\" ;", "listeners=CLIENT://0.0.0.0:9092 listener.security.protocol.map=CLIENT:SASL_PLAINTEXT listener.name.client.sasl.enabled.mechanisms=OAUTHBEARER,PLAIN sasl.mechanism.inter.broker.protocol=OAUTHBEARER inter.broker.listener.name=CLIENT listener.name.client.oauthbearer.sasl.server.callback.handler.class=io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.valid.issuer.uri=\"https:<auth_server_address>/<issuer-context>\" oauth.jwks.endpoint.uri=\"https://<auth_server>/<path_to_jwks_endpoint>\" oauth.username.claim=\"preferred_username\" oauth.client.id=\"kafka-broker\" oauth.client.secret=\"kafka-secret\" oauth.token.endpoint.uri=\"https://<oauth_server_address>/<token_endpoint>\" ; listener.name.client.oauthbearer.sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler 1 listener.name.client.plain.sasl.server.callback.handler.class=io.strimzi.kafka.oauth.server.plain.JaasServerOauthOverPlainValidatorCallbackHandler 2 listener.name.client.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \\ 3 oauth.valid.issuer.uri=\"https://<auth_server_address>/<issuer-context>\" oauth.jwks.endpoint.uri=\"https://<oauth_server_address>/<path_to_jwks_endpoint>\" oauth.username.claim=\"preferred_username\" oauth.token.endpoint.uri=\"https://<oauth_server_address>/<token_endpoint>\" ; 4 listener.name.client.oauthbearer.connections.max.reauth.ms=3600000", "security.protocol=SASL_SSL 1 sasl.mechanism=OAUTHBEARER 2 ssl.truststore.location=/tmp/truststore.p12 3 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 
sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.token.endpoint.uri=\"<token_endpoint_url>\" \\ 4 oauth.client.id=\"<client_id>\" \\ 5 oauth.client.secret=\"<client_secret>\" \\ 6 oauth.ssl.truststore.location=\"/tmp/oauth-truststore.p12\" \\ 7 oauth.ssl.truststore.password=\"USDSTOREPASS\" \\ 8 oauth.ssl.truststore.type=\"PKCS12\" \\ 9 oauth.scope=\"<scope>\" \\ 10 oauth.audience=\"<audience>\" ; 11 sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler", "security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.token.endpoint.uri=\"<token_endpoint_url>\" oauth.client.id=\"<client_id>\" oauth.client.assertion.location=\"<path_to_client_assertion_token_file>\" \\ 1 oauth.client.assertion.type=\"urn:ietf:params:oauth:client-assertion-type:jwt-bearer\" \\ 2 oauth.ssl.truststore.location=\"/tmp/oauth-truststore.p12\" oauth.ssl.truststore.password=\"USDSTOREPASS\" oauth.ssl.truststore.type=\"PKCS12\" oauth.scope=\"<scope>\" oauth.audience=\"<audience>\" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler", "security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.token.endpoint.uri=\"<token_endpoint_url>\" oauth.client.id=\"<client_id>\" \\ 1 oauth.client.secret=\"<client_secret>\" \\ 2 oauth.password.grant.username=\"<username>\" \\ 3 oauth.password.grant.password=\"<password>\" \\ 4 oauth.ssl.truststore.location=\"/tmp/oauth-truststore.p12\" oauth.ssl.truststore.password=\"USDSTOREPASS\" oauth.ssl.truststore.type=\"PKCS12\" oauth.scope=\"<scope>\" oauth.audience=\"<audience>\" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler", "security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.access.token=\"<access_token>\" ; 1 sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler", "security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.access.token.location=\"/var/run/secrets/kubernetes.io/serviceaccount/token\"; 1 sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler", "security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.token.endpoint.uri=\"<token_endpoint_url>\" oauth.client.id=\"<client_id>\" \\ 1 oauth.client.secret=\"<client_secret>\" \\ 2 oauth.refresh.token=\"<refresh_token>\" \\ 3 oauth.ssl.truststore.location=\"/tmp/oauth-truststore.p12\" oauth.ssl.truststore.password=\"USDSTOREPASS\" 
oauth.ssl.truststore.type=\"PKCS12\" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler", "oauth.sasl.extension.key1=\"value1\" oauth.sasl.extension.key2=\"value2\"", "sasl.enabled.mechanisms=OAUTHBEARER listeners=CLIENT://0.0.0.0:9092 listener.security.protocol.map=CLIENT:SASL_PLAINTEXT listener.name.client.sasl.enabled.mechanisms=OAUTHBEARER sasl.mechanism.inter.broker.protocol=OAUTHBEARER inter.broker.listener.name=CLIENT listener.name.client.oauthbearer.sasl.server.callback.handler.class=io.strimzi.kafka.oauth.server.JaasServerOauthValidatorCallbackHandler listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required ; listener.name.client.oauthbearer.sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler", "listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required # oauth.client.id=\"kafka-broker\" oauth.client.secret=\"kafka-broker-secret\" oauth.ssl.truststore.location=\"<path_to_truststore_p12_file>\" oauth.ssl.truststore.password=\"<truststore_password>\" oauth.ssl.truststore.type=\"PKCS12\" ;", "oauth.ssl.endpoint.identification.algorithm=\"\"", "<dependency> <groupId>io.strimzi</groupId> <artifactId>kafka-oauth-client</artifactId> <version>0.15.0.redhat-00012</version> </dependency>", "Properties props = new Properties(); try (FileReader reader = new FileReader(\"client.properties\", StandardCharsets.UTF_8)) { props.load(reader); }", "authorizer.class.name=io.strimzi.kafka.oauth.server.authorizer.KeycloakAuthorizer principal.builder.class=io.strimzi.kafka.oauth.server.OAuthKafkaPrincipalBuilder", "strimzi.authorization.token.endpoint.uri=\"https://<auth_server_address>/auth/realms/REALM-NAME/protocol/openid-connect/token\" 1 strimzi.authorization.client.id=\"kafka\" 2", "strimzi.authorization.kafka.cluster.name=\"kafka-cluster\" 1", "strimzi.authorization.delegate.to.kafka.acl=\"true\" 1", "strimzi.authorization.ssl.truststore.location=<path_to_truststore> 1 strimzi.authorization.ssl.truststore.password=<my_truststore_password> 2 strimzi.authorization.ssl.truststore.type=JKS 3 strimzi.authorization.ssl.secure.random.implementation=SHA1PRNG 4 strimzi.authorization.ssl.endpoint.identification.algorithm=HTTPS 5", "strimzi.authorization.grants.refresh.period.seconds=\"120\" 1 strimzi.authorization.grants.refresh.pool.size=\"10\" 2 strimzi.authorization.grants.max.idle.time.seconds=\"300\" 3 strimzi.authorization.grants.gc.period.seconds=\"300\" 4 strimzi.authorization.reuse.grants=\"false\" 5", "strimzi.authorization.connect.timeout.seconds=\"60\" 1 strimzi.authorization.read.timeout.seconds=\"60\" 2 strimzi.authorization.http.retries=\"2\" 3", "oauth.enable.metrics=\"true\" 1", "oauth.include.accept.header=\"false\" 1" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/using_streams_for_apache_kafka_on_rhel_in_kraft_mode/assembly-oauth-security-str
7.2. What is LVM2?
7.2. What is LVM2? LVM version 2, or LVM2, is the default for Red Hat Enterprise Linux; it uses the device mapper driver contained in the 2.6 kernel. LVM2, which is almost completely compatible with the earlier LVM1 version, can be upgraded from versions of Red Hat Enterprise Linux running the 2.4 kernel. Although upgrading from LVM1 to LVM2 is usually seamless, refer to Section 7.3, "Additional Resources" for further details on more complex requirements and upgrading scenarios.
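For example, you can confirm which LVM release and device mapper driver a system is using by querying the tools directly:
# Report the LVM tool, library, and device mapper driver versions
lvm version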
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Logical_Volume_Manager_LVM-What_is_LVM2
Chapter 5. Removing a system from a group
Chapter 5. Removing a system from a group If you no longer want a system in a group in the Insights Inventory application, you can remove it. For example, if a system no longer needs to be managed with other systems of a group, you can remove it. Prerequisites You have a Red Hat Hybrid Cloud Console account. You have already grouped systems registered with the Insights Inventory application. Procedure Access Red Hat Hybrid Cloud Console platform and log in. From the console dashboard, navigate to Red Hat Insights > RHEL > Inventory > Groups . On the Groups page, use the Filter by name search box to find the group you want to modify, and then click the name of the group. Select the systems you want to remove. In the Group toolbar, click the Actions for group details menu, which is three vertical dots. Click Remove from group . Then, confirm and click Remove .
null
https://docs.redhat.com/en/documentation/edge_management/1-latest/html/working_with_systems_in_the_insights_inventory_application/proc-rhem-remove-system
3.2. Allowed Key Algorithms and Their Sizes
3.2. Allowed Key Algorithms and Their Sizes Red Hat Certificate System supports the following key algorithms and sizes if they are provided by the underlying PKCS #11 module. Allowed RSA key sizes: 2048 bits or greater Allowed EC curves or equivalent as defined in the FIPS PUB 186-4 standard: nistp256 nistp384 nistp521
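As an illustration only, and not as part of the Certificate System setup procedure, keys that meet these requirements can be generated with OpenSSL. Note that OpenSSL names the nistp256, nistp384, and nistp521 curves prime256v1, secp384r1, and secp521r1 respectively:
# Generate a 2048-bit RSA key (the minimum allowed size)
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out rsa-2048.key
# Generate an EC key on the P-256 (nistp256) curve
openssl ecparam -name prime256v1 -genkey -noout -out ec-nistp256.key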
null
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/allowed_key_algorithms_and_their_sizes
Chapter 4. Creating and building an application using the CLI
Chapter 4. Creating and building an application using the CLI 4.1. Before you begin Review About the OpenShift CLI . You must be able to access a running instance of OpenShift Container Platform. If you do not have access, contact your cluster administrator. You must have the OpenShift CLI ( oc ) downloaded and installed . 4.2. Logging in to the CLI You can log in to the OpenShift CLI ( oc ) to access and manage your cluster. Prerequisites You must have access to an OpenShift Container Platform cluster. You must have installed the OpenShift CLI ( oc ). Procedure Log into OpenShift Container Platform from the CLI using your username and password or with an OAuth token: With username and password: USD oc login -u=<username> -p=<password> --server=<your-openshift-server> --insecure-skip-tls-verify With an OAuth token: USD oc login <https://api.your-openshift-server.com> --token=<tokenID> You can now create a project or issue other commands for managing your cluster. Additional resources oc login oc logout 4.3. Creating a new project A project enables a community of users to organize and manage their content in isolation. Projects are OpenShift Container Platform extensions to Kubernetes namespaces. Projects have additional features that enable user self-provisioning. Users must receive access to projects from administrators. Cluster administrators can allow developers to create their own projects. In most cases, users automatically have access to their own projects. Each project has its own set of objects, policies, constraints, and service accounts. Prerequisites You must have access to an OpenShift Container Platform cluster. You must have installed the OpenShift CLI ( oc ). Procedure To create a new project, enter the following command: USD oc new-project user-getting-started --display-name="Getting Started with OpenShift" Example output Now using project "user-getting-started" on server "https://openshift.example.com:6443". Additional resources oc new-project 4.4. Granting view permissions OpenShift Container Platform automatically creates a few special service accounts in every project. The default service account takes responsibility for running the pods. OpenShift Container Platform uses and injects this service account into every pod that launches. The following procedure creates a RoleBinding object for the default ServiceAccount object. The service account communicates with the OpenShift Container Platform API to learn about pods, services, and resources within the project. Prerequisites You must have access to an OpenShift Container Platform cluster. You must have installed the OpenShift CLI ( oc ). You have a deployed image. You must have cluster-admin or project-admin privileges. Procedure To add the view role to the default service account in the user-getting-started project , enter the following command: USD oc adm policy add-role-to-user view -z default -n user-getting-started Additional resources Understanding authentication RBAC overview oc policy add-role-to-user 4.5. Deploying your first image The simplest way to deploy an application in OpenShift Container Platform is to run an existing container image. The following procedure deploys a front-end component of an application called national-parks-app . The web application displays an interactive map. The map displays the location of major national parks across the world. Prerequisites You must have access to an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). 
Procedure To deploy an application, enter the following command: USD oc new-app quay.io/openshiftroadshow/parksmap:latest --name=parksmap -l 'app=national-parks-app,component=parksmap,role=frontend,app.kubernetes.io/part-of=national-parks-app' Example output --> Found container image 0c2f55f (12 months old) from quay.io for "quay.io/openshiftroadshow/parksmap:latest" * An image stream tag will be created as "parksmap:latest" that will track this image --> Creating resources with label app=national-parks-app,app.kubernetes.io/part-of=national-parks-app,component=parksmap,role=frontend ... imagestream.image.openshift.io "parksmap" created deployment.apps "parksmap" created service "parksmap" created --> Success Additional resources oc new-app 4.5.1. Creating a route External clients can access applications running on OpenShift Container Platform through the routing layer; the data object behind that is a route . The default OpenShift Container Platform router (HAProxy) uses the HTTP header of the incoming request to determine where to proxy the connection. Optionally, you can define security, such as TLS, for the route. Prerequisites You must have access to an OpenShift Container Platform cluster. You must have installed the OpenShift CLI ( oc ). You have a deployed image. You must have cluster-admin or project-admin privileges. Procedure To retrieve the created application service, enter the following command: USD oc get service Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE parksmap ClusterIP <your-cluster-IP> <123.456.789> 8080/TCP 8m29s To create a route, enter the following command: USD oc create route edge parksmap --service=parksmap Example output route.route.openshift.io/parksmap created To retrieve the created application route, enter the following command: USD oc get route Example output NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD parksmap parksmap-user-getting-started.apps.cluster.example.com parksmap 8080-tcp edge None Additional resources oc create route edge oc get 4.5.2. Examining the pod OpenShift Container Platform leverages the Kubernetes concept of a pod, which is one or more containers deployed together on one host, and the smallest compute unit that can be defined, deployed, and managed. Pods are the rough equivalent of a machine instance, physical or virtual, to a container. You can view the pods in your cluster and determine the health of those pods and the cluster as a whole. Prerequisites You must have access to an OpenShift Container Platform cluster. You must have installed the OpenShift CLI ( oc ). You have a deployed image.
Procedure To list all pods with node names, enter the following command: USD oc get pods Example output NAME READY STATUS RESTARTS AGE parksmap-5f9579955-6sng8 1/1 Running 0 77s To list all pod details, enter the following command: USD oc describe pods Example output Name: parksmap-848bd4954b-5pvcc Namespace: user-getting-started Priority: 0 Node: ci-ln-fr1rt92-72292-4fzf9-worker-a-g9g7c/10.0.128.4 Start Time: Sun, 13 Feb 2022 14:14:14 -0500 Labels: app=national-parks-app app.kubernetes.io/part-of=national-parks-app component=parksmap deployment=parksmap pod-template-hash=848bd4954b role=frontend Annotations: k8s.v1.cni.cncf.io/network-status: [{ "name": "openshift-sdn", "interface": "eth0", "ips": [ "10.131.0.14" ], "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: [{ "name": "openshift-sdn", "interface": "eth0", "ips": [ "10.131.0.14" ], "default": true, "dns": {} }] openshift.io/generated-by: OpenShiftNewApp openshift.io/scc: restricted Status: Running IP: 10.131.0.14 IPs: IP: 10.131.0.14 Controlled By: ReplicaSet/parksmap-848bd4954b Containers: parksmap: Container ID: cri-o://4b2625d4f61861e33cc95ad6d455915ea8ff6b75e17650538cc33c1e3e26aeb8 Image: quay.io/openshiftroadshow/parksmap@sha256:89d1e324846cb431df9039e1a7fd0ed2ba0c51aafbae73f2abd70a83d5fa173b Image ID: quay.io/openshiftroadshow/parksmap@sha256:89d1e324846cb431df9039e1a7fd0ed2ba0c51aafbae73f2abd70a83d5fa173b Port: 8080/TCP Host Port: 0/TCP State: Running Started: Sun, 13 Feb 2022 14:14:25 -0500 Ready: True Restart Count: 0 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6f844 (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-6f844: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true ConfigMapName: openshift-service-ca.crt ConfigMapOptional: <nil> QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 46s default-scheduler Successfully assigned user-getting-started/parksmap-848bd4954b-5pvcc to ci-ln-fr1rt92-72292-4fzf9-worker-a-g9g7c Normal AddedInterface 44s multus Add eth0 [10.131.0.14/23] from openshift-sdn Normal Pulling 44s kubelet Pulling image "quay.io/openshiftroadshow/parksmap@sha256:89d1e324846cb431df9039e1a7fd0ed2ba0c51aafbae73f2abd70a83d5fa173b" Normal Pulled 35s kubelet Successfully pulled image "quay.io/openshiftroadshow/parksmap@sha256:89d1e324846cb431df9039e1a7fd0ed2ba0c51aafbae73f2abd70a83d5fa173b" in 9.49243308s Normal Created 35s kubelet Created container parksmap Normal Started 35s kubelet Started container parksmap Additional resources oc describe oc get oc label Viewing pods Viewing pod logs 4.5.3. Scaling the application In Kubernetes, a Deployment object defines how an application deploys. In most cases, users use Pod , Service , ReplicaSets , and Deployment resources together. In most cases, OpenShift Container Platform creates the resources for you. When you deploy the national-parks-app image, a deployment resource is created. In this example, only one Pod is deployed. The following procedure scales the national-parks-image to use two instances. Prerequisites You must have access to an OpenShift Container Platform cluster. 
You must have installed the OpenShift CLI ( oc ). You have a deployed image. Procedure To scale your application from one pod instance to two pod instances, enter the following command: USD oc scale --current-replicas=1 --replicas=2 deployment/parksmap Example output deployment.apps/parksmap scaled Verification To ensure that your application scaled properly, enter the following command: USD oc get pods Example output NAME READY STATUS RESTARTS AGE parksmap-5f9579955-6sng8 1/1 Running 0 7m39s parksmap-5f9579955-8tgft 1/1 Running 0 24s To scale your application back down to one pod instance, enter the following command: USD oc scale --current-replicas=2 --replicas=1 deployment/parksmap Additional resources oc scale 4.6. Deploying a Python application The following procedure deploys a back-end service for the parksmap application. The Python application performs 2D geo-spatial queries against a MongoDB database to locate and return map coordinates of all national parks in the world. The deployed back-end service is nationalparks . Prerequisites You must have access to an OpenShift Container Platform cluster. You must have installed the OpenShift CLI ( oc ). You have a deployed image. Procedure To create a new Python application, enter the following command: USD oc new-app python~https://github.com/openshift-roadshow/nationalparks-py.git --name nationalparks -l 'app=national-parks-app,component=nationalparks,role=backend,app.kubernetes.io/part-of=national-parks-app,app.kubernetes.io/name=python' --allow-missing-images=true Example output --> Found image 0406f6c (13 days old) in image stream "openshift/python" under tag "3.9-ubi8" for "python" Python 3.9 ---------- Python 3.9 available as container is a base platform for building and running various Python 3.9 applications and frameworks. Python is an easy to learn, powerful programming language. It has efficient high-level data structures and a simple but effective approach to object-oriented programming. Python's elegant syntax and dynamic typing, together with its interpreted nature, make it an ideal language for scripting and rapid application development in many areas on most platforms. Tags: builder, python, python39, python-39, rh-python39 * A source build using source code from https://github.com/openshift-roadshow/nationalparks-py.git will be created * The resulting image will be pushed to image stream tag "nationalparks:latest" * Use 'oc start-build' to trigger a new build --> Creating resources with label app=national-parks-app,app.kubernetes.io/name=python,app.kubernetes.io/part-of=national-parks-app,component=nationalparks,role=backend ... imagestream.image.openshift.io "nationalparks" created buildconfig.build.openshift.io "nationalparks" created deployment.apps "nationalparks" created service "nationalparks" created --> Success To create a route to expose your application, nationalparks , enter the following command: USD oc create route edge nationalparks --service=nationalparks Example output route.route.openshift.io/parksmap created To retrieve the created application route, enter the following command: USD oc get route Example output NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nationalparks nationalparks-user-getting-started.apps.cluster.example.com nationalparks 8080-tcp edge None parksmap parksmap-user-getting-started.apps.cluster.example.com parksmap 8080-tcp edge None Additional resources oc new-app 4.7. 
Connecting to a database Deploy and connect a MongoDB database where the national-parks-app application stores location information. Once you mark the national-parks-app application as a backend for the map visualization tool, the parksmap deployment uses the OpenShift Container Platform discovery mechanism to display the map automatically. Prerequisites You must have access to an OpenShift Container Platform cluster. You must have installed the OpenShift CLI ( oc ). You have a deployed image. Procedure To connect to a database, enter the following command: USD oc new-app quay.io/centos7/mongodb-36-centos7 --name mongodb-nationalparks -e MONGODB_USER=mongodb -e MONGODB_PASSWORD=mongodb -e MONGODB_DATABASE=mongodb -e MONGODB_ADMIN_PASSWORD=mongodb -l 'app.kubernetes.io/part-of=national-parks-app,app.kubernetes.io/name=mongodb' Example output --> Found container image dc18f52 (8 months old) from quay.io for "quay.io/centos7/mongodb-36-centos7" MongoDB 3.6 ----------- MongoDB (from humongous) is a free and open-source cross-platform document-oriented database program. Classified as a NoSQL database program, MongoDB uses JSON-like documents with schemas. This container image contains programs to run mongod server. Tags: database, mongodb, rh-mongodb36 * An image stream tag will be created as "mongodb-nationalparks:latest" that will track this image --> Creating resources with label app.kubernetes.io/name=mongodb,app.kubernetes.io/part-of=national-parks-app ... imagestream.image.openshift.io "mongodb-nationalparks" created deployment.apps "mongodb-nationalparks" created service "mongodb-nationalparks" created --> Success Additional resources oc new-project 4.7.1. Creating a secret The Secret object provides a mechanism to hold sensitive information such as passwords, OpenShift Container Platform client configuration files, private source repository credentials, and so on. Secrets decouple sensitive content from the pods. You can mount secrets into containers using a volume plugin, or the system can use secrets to perform actions on behalf of a pod. The following procedure adds the secret nationalparks-mongodb-parameters and mounts it to the nationalparks workload. Prerequisites You must have access to an OpenShift Container Platform cluster. You must have installed the OpenShift CLI ( oc ). You have a deployed image.
Procedure To create a secret, enter the following command: USD oc create secret generic nationalparks-mongodb-parameters --from-literal=DATABASE_SERVICE_NAME=mongodb-nationalparks --from-literal=MONGODB_USER=mongodb --from-literal=MONGODB_PASSWORD=mongodb --from-literal=MONGODB_DATABASE=mongodb --from-literal=MONGODB_ADMIN_PASSWORD=mongodb Example output secret/nationalparks-mongodb-parameters created To update the environment variable to attach the mongodb secret to the nationalparks workload, enter the following command: USD oc set env --from=secret/nationalparks-mongodb-parameters deploy/nationalparks Example output deployment.apps/nationalparks updated To show the status of the nationalparks deployment, enter the following command: USD oc rollout status deployment nationalparks Example output deployment "nationalparks" successfully rolled out To show the status of the mongodb-nationalparks deployment, enter the following command: USD oc rollout status deployment mongodb-nationalparks Example output deployment "nationalparks" successfully rolled out deployment "mongodb-nationalparks" successfully rolled out Additional resources oc create secret generic oc set env oc rollout status 4.7.2. Loading data and displaying the national parks map You deployed the parksmap and nationalparks applications and then deployed the mongodb-nationalparks database. However, no data has been loaded into the database. Prerequisites You must have access to an OpenShift Container Platform cluster. You must have installed the OpenShift CLI ( oc ). You have a deployed image. Procedure To load national parks data, enter the following command: USD oc exec USD(oc get pods -l component=nationalparks | tail -n 1 | awk '{print USD1;}') -- curl -s http://localhost:8080/ws/data/load Example output "Items inserted in database: 2893" To verify that your data is loaded properly, enter the following command: USD oc exec USD(oc get pods -l component=nationalparks | tail -n 1 | awk '{print USD1;}') -- curl -s http://localhost:8080/ws/data/all Example output (trimmed) , {"id": "Great Zimbabwe", "latitude": "-20.2674635", "longitude": "30.9337986", "name": "Great Zimbabwe"}] To add labels to the route, enter the following command: USD oc label route nationalparks type=parksmap-backend Example output route.route.openshift.io/nationalparks labeled To retrieve your routes to view your map, enter the following command: USD oc get routes Example output NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nationalparks nationalparks-user-getting-started.apps.cluster.example.com nationalparks 8080-tcp edge None parksmap parksmap-user-getting-started.apps.cluster.example.com parksmap 8080-tcp edge None Copy and paste the HOST/PORT path you retrieved above into your web browser. Your browser should display a map of the national parks across the world. Figure 4.1. National parks across the world Additional resources oc exec oc label oc get
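As a command-line alternative to the browser check, you can query the exposed routes directly with curl. This is a sketch that reuses the example hostnames from this chapter; substitute the HOST/PORT values returned by oc get routes in your cluster:
# The front-end route should return the parksmap web page (HTTP 200)
curl -sk -o /dev/null -w "%{http_code}\n" https://parksmap-user-getting-started.apps.cluster.example.com/
# The back-end route exposes the same data endpoint that was queried earlier with oc exec
curl -sk https://nationalparks-user-getting-started.apps.cluster.example.com/ws/data/all | head -c 200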
[ "oc login -u=<username> -p=<password> --server=<your-openshift-server> --insecure-skip-tls-verify", "oc login <https://api.your-openshift-server.com> --token=<tokenID>", "oc new-project user-getting-started --display-name=\"Getting Started with OpenShift\"", "Now using project \"user-getting-started\" on server \"https://openshift.example.com:6443\".", "oc adm policy add-role-to-user view -z default -n user-getting-started", "oc new-app quay.io/openshiftroadshow/parksmap:latest --name=parksmap -l 'app=national-parks-app,component=parksmap,role=frontend,app.kubernetes.io/part-of=national-parks-app'", "--> Found container image 0c2f55f (12 months old) from quay.io for \"quay.io/openshiftroadshow/parksmap:latest\" * An image stream tag will be created as \"parksmap:latest\" that will track this image --> Creating resources with label app=national-parks-app,app.kubernetes.io/part-of=national-parks-app,component=parksmap,role=frontend imagestream.image.openshift.io \"parksmap\" created deployment.apps \"parksmap\" created service \"parksmap\" created --> Success", "oc get service", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE parksmap ClusterIP <your-cluster-IP> <123.456.789> 8080/TCP 8m29s", "oc create route edge parksmap --service=parksmap", "route.route.openshift.io/parksmap created", "oc get route", "NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD parksmap parksmap-user-getting-started.apps.cluster.example.com parksmap 8080-tcp edge None", "oc get pods", "NAME READY STATUS RESTARTS AGE parksmap-5f9579955-6sng8 1/1 Running 0 77s", "oc describe pods", "Name: parksmap-848bd4954b-5pvcc Namespace: user-getting-started Priority: 0 Node: ci-ln-fr1rt92-72292-4fzf9-worker-a-g9g7c/10.0.128.4 Start Time: Sun, 13 Feb 2022 14:14:14 -0500 Labels: app=national-parks-app app.kubernetes.io/part-of=national-parks-app component=parksmap deployment=parksmap pod-template-hash=848bd4954b role=frontend Annotations: k8s.v1.cni.cncf.io/network-status: [{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.131.0.14\" ], \"default\": true, \"dns\": {} }] k8s.v1.cni.cncf.io/networks-status: [{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.131.0.14\" ], \"default\": true, \"dns\": {} }] openshift.io/generated-by: OpenShiftNewApp openshift.io/scc: restricted Status: Running IP: 10.131.0.14 IPs: IP: 10.131.0.14 Controlled By: ReplicaSet/parksmap-848bd4954b Containers: parksmap: Container ID: cri-o://4b2625d4f61861e33cc95ad6d455915ea8ff6b75e17650538cc33c1e3e26aeb8 Image: quay.io/openshiftroadshow/parksmap@sha256:89d1e324846cb431df9039e1a7fd0ed2ba0c51aafbae73f2abd70a83d5fa173b Image ID: quay.io/openshiftroadshow/parksmap@sha256:89d1e324846cb431df9039e1a7fd0ed2ba0c51aafbae73f2abd70a83d5fa173b Port: 8080/TCP Host Port: 0/TCP State: Running Started: Sun, 13 Feb 2022 14:14:25 -0500 Ready: True Restart Count: 0 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6f844 (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-6f844: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true ConfigMapName: openshift-service-ca.crt ConfigMapOptional: <nil> QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age 
From Message ---- ------ ---- ---- ------- Normal Scheduled 46s default-scheduler Successfully assigned user-getting-started/parksmap-848bd4954b-5pvcc to ci-ln-fr1rt92-72292-4fzf9-worker-a-g9g7c Normal AddedInterface 44s multus Add eth0 [10.131.0.14/23] from openshift-sdn Normal Pulling 44s kubelet Pulling image \"quay.io/openshiftroadshow/parksmap@sha256:89d1e324846cb431df9039e1a7fd0ed2ba0c51aafbae73f2abd70a83d5fa173b\" Normal Pulled 35s kubelet Successfully pulled image \"quay.io/openshiftroadshow/parksmap@sha256:89d1e324846cb431df9039e1a7fd0ed2ba0c51aafbae73f2abd70a83d5fa173b\" in 9.49243308s Normal Created 35s kubelet Created container parksmap Normal Started 35s kubelet Started container parksmap", "oc scale --current-replicas=1 --replicas=2 deployment/parksmap", "deployment.apps/parksmap scaled", "oc get pods", "NAME READY STATUS RESTARTS AGE parksmap-5f9579955-6sng8 1/1 Running 0 7m39s parksmap-5f9579955-8tgft 1/1 Running 0 24s", "oc scale --current-replicas=2 --replicas=1 deployment/parksmap", "oc new-app python~https://github.com/openshift-roadshow/nationalparks-py.git --name nationalparks -l 'app=national-parks-app,component=nationalparks,role=backend,app.kubernetes.io/part-of=national-parks-app,app.kubernetes.io/name=python' --allow-missing-images=true", "--> Found image 0406f6c (13 days old) in image stream \"openshift/python\" under tag \"3.9-ubi8\" for \"python\" Python 3.9 ---------- Python 3.9 available as container is a base platform for building and running various Python 3.9 applications and frameworks. Python is an easy to learn, powerful programming language. It has efficient high-level data structures and a simple but effective approach to object-oriented programming. Python's elegant syntax and dynamic typing, together with its interpreted nature, make it an ideal language for scripting and rapid application development in many areas on most platforms. Tags: builder, python, python39, python-39, rh-python39 * A source build using source code from https://github.com/openshift-roadshow/nationalparks-py.git will be created * The resulting image will be pushed to image stream tag \"nationalparks:latest\" * Use 'oc start-build' to trigger a new build --> Creating resources with label app=national-parks-app,app.kubernetes.io/name=python,app.kubernetes.io/part-of=national-parks-app,component=nationalparks,role=backend imagestream.image.openshift.io \"nationalparks\" created buildconfig.build.openshift.io \"nationalparks\" created deployment.apps \"nationalparks\" created service \"nationalparks\" created --> Success", "oc create route edge nationalparks --service=nationalparks", "route.route.openshift.io/parksmap created", "oc get route", "NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nationalparks nationalparks-user-getting-started.apps.cluster.example.com nationalparks 8080-tcp edge None parksmap parksmap-user-getting-started.apps.cluster.example.com parksmap 8080-tcp edge None", "oc new-app quay.io/centos7/mongodb-36-centos7 --name mongodb-nationalparks -e MONGODB_USER=mongodb -e MONGODB_PASSWORD=mongodb -e MONGODB_DATABASE=mongodb -e MONGODB_ADMIN_PASSWORD=mongodb -l 'app.kubernetes.io/part-of=national-parks-app,app.kubernetes.io/name=mongodb'", "--> Found container image dc18f52 (8 months old) from quay.io for \"quay.io/centos7/mongodb-36-centos7\" MongoDB 3.6 ----------- MongoDB (from humongous) is a free and open-source cross-platform document-oriented database program. Classified as a NoSQL database program, MongoDB uses JSON-like documents with schemas. 
This container image contains programs to run mongod server. Tags: database, mongodb, rh-mongodb36 * An image stream tag will be created as \"mongodb-nationalparks:latest\" that will track this image --> Creating resources with label app.kubernetes.io/name=mongodb,app.kubernetes.io/part-of=national-parks-app imagestream.image.openshift.io \"mongodb-nationalparks\" created deployment.apps \"mongodb-nationalparks\" created service \"mongodb-nationalparks\" created --> Success", "oc create secret generic nationalparks-mongodb-parameters --from-literal=DATABASE_SERVICE_NAME=mongodb-nationalparks --from-literal=MONGODB_USER=mongodb --from-literal=MONGODB_PASSWORD=mongodb --from-literal=MONGODB_DATABASE=mongodb --from-literal=MONGODB_ADMIN_PASSWORD=mongodb", "secret/nationalparks-mongodb-parameters created", "oc set env --from=secret/nationalparks-mongodb-parameters deploy/nationalparks", "deployment.apps/nationalparks updated", "oc rollout status deployment nationalparks", "deployment \"nationalparks\" successfully rolled out", "oc rollout status deployment mongodb-nationalparks", "deployment \"nationalparks\" successfully rolled out deployment \"mongodb-nationalparks\" successfully rolled out", "oc exec USD(oc get pods -l component=nationalparks | tail -n 1 | awk '{print USD1;}') -- curl -s http://localhost:8080/ws/data/load", "\"Items inserted in database: 2893\"", "oc exec USD(oc get pods -l component=nationalparks | tail -n 1 | awk '{print USD1;}') -- curl -s http://localhost:8080/ws/data/all", ", {\"id\": \"Great Zimbabwe\", \"latitude\": \"-20.2674635\", \"longitude\": \"30.9337986\", \"name\": \"Great Zimbabwe\"}]", "oc label route nationalparks type=parksmap-backend", "route.route.openshift.io/nationalparks labeled", "oc get routes", "NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nationalparks nationalparks-user-getting-started.apps.cluster.example.com nationalparks 8080-tcp edge None parksmap parksmap-user-getting-started.apps.cluster.example.com parksmap 8080-tcp edge None" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/getting_started/openshift-cli
2.24. RHEA-2011:0709 - new package: perl-IO-Tty
2.24. RHEA-2011:0709 - new package: perl-IO-Tty A new perl-IO-Tty package is now available for Red Hat Enterprise Linux 6. The perl-IO-Tty package provides the IO::Tty and IO::Pty Perl modules that allow for the creation of a pseudo-tty (Berkeley Unix networking device). This package makes it possible to determine the correct terminal width even for terminals with fewer than 50 columns. This new package adds the IO::Tty and IO::Pty Perl modules to Red Hat Enterprise Linux 6. (BZ# 669405 ) All users of pseudo-tty devices are advised to install this new package.
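For example, a quick way to install the package and confirm that the modules are available (a sanity check, not part of the advisory itself):
# Install the new package from the Red Hat Enterprise Linux 6 repositories
yum install perl-IO-Tty
# Confirm that the IO::Tty and IO::Pty modules load correctly
perl -MIO::Tty -MIO::Pty -e 'print "IO::Tty and IO::Pty loaded successfully\n"'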
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.1_technical_notes/perl-io-tty_new
Chapter 5. Disaster recovery with stretch cluster for OpenShift Data Foundation
Chapter 5. Disaster recovery with stretch cluster for OpenShift Data Foundation Red Hat OpenShift Data Foundation deployment can be stretched between two different geographical locations to provide the storage infrastructure with disaster recovery capabilities. When faced with a disaster, such as one of the two locations being partially or completely unavailable, OpenShift Data Foundation deployed on the OpenShift Container Platform deployment must be able to survive. This solution is available only for data centers spanning a metropolitan area, with specific latency requirements between the servers of the infrastructure. Note Currently, you can deploy the stretch cluster solution where latencies do not exceed 5 milliseconds (ms) round-trip time (RTT) between the OpenShift Container Platform nodes in different locations, with a maximum RTT of 10 ms. Contact Red Hat Customer Support if you are planning to deploy with higher latencies. The following diagram shows the simplest deployment for a stretched cluster: OpenShift nodes and OpenShift Data Foundation daemons In the diagram, the OpenShift Data Foundation monitor pod deployed in the Arbiter zone has a built-in tolerance for the master nodes. The diagram shows the master nodes in each Data Zone, which are required for a highly available OpenShift Container Platform control plane. Also, it is important that the OpenShift Container Platform nodes in one of the zones have network connectivity with the OpenShift Container Platform nodes in the other two zones. 5.1. Requirements for enabling stretch cluster Ensure you have addressed OpenShift Container Platform requirements for deployments spanning multiple sites. For more information, see knowledgebase article on cluster deployments spanning multiple sites . Ensure that you have at least three OpenShift Container Platform master nodes in three different zones. One master node in each of the three zones. Ensure that you have at least four OpenShift Container Platform worker nodes evenly distributed across the two Data Zones. For stretch cluster on bare metal, use the SSD drive as the root drive for OpenShift Container Platform master nodes. Ensure that each node is pre-labeled with its zone label. For more information, see the Applying topology zone labels to OpenShift Container Platform nodes section. The stretch cluster solution is designed for deployments where latencies do not exceed 5 ms between zones. Contact Red Hat Customer Support if you are planning to deploy with higher latencies. Note Flexible scaling and Arbiter cannot both be enabled at the same time as they have conflicting scaling logic. With Flexible scaling, you can add one node at a time to your OpenShift Data Foundation cluster. In an Arbiter cluster, however, you need to add at least one node in each of the two data zones. 5.2. Applying topology zone labels to OpenShift Container Platform nodes During a site outage, the zone that has the arbiter function makes use of the arbiter label. These labels are arbitrary and must be unique for the three locations. For example, you can label the nodes as follows: To apply the labels to the node: <NODENAME> Is the name of the node <LABEL> Is the topology zone label To validate the labels using the example labels for the three zones: <LABEL> Is the topology zone label Alternatively, you can run a single command to see all the nodes with its zone.
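For example, assuming the commonly used topology.kubernetes.io/zone label key and example zone values of arbiter, datacenter1, and datacenter2, the commands might look like the following:
# Apply a zone label to a node (repeat for each node with its zone value)
oc label node <NODENAME> topology.kubernetes.io/zone=<LABEL> --overwrite
# Validate the labels for one of the zones, for example datacenter1
oc get nodes -l topology.kubernetes.io/zone=datacenter1 -o name
# Alternatively, list all nodes together with their zone label
oc get nodes -L topology.kubernetes.io/zone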
The stretch cluster topology zone labels are now applied to the appropriate OpenShift Container Platform nodes to define the three locations. Next step Install the storage operators from OpenShift Container Platform OperatorHub . 5.3. Installing Local Storage Operator Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators, and click on it. Set the following options on the Install Operator page: Update channel as either 4.13 or stable . Installation mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Update approval as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 5.4. Installing Red Hat OpenShift Data Foundation Operator You can install the Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and Operator installation permissions. You must have at least four worker nodes evenly distributed across two data centers in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see Planning your deployment . Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command in the command-line interface to specify a blank node selector for the openshift-storage namespace (create the openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation chapter in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to search for the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.13 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps Verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Next steps Create an OpenShift Data Foundation cluster . 5.5. Creating OpenShift Data Foundation cluster Prerequisites Ensure that you have met all the requirements in the Requirements for enabling stretch cluster section. 
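Before starting the wizard in the following procedure, you can optionally confirm from the command line that both operators installed successfully and that the worker nodes are spread across the two data zones. This is only a hedged convenience check; the exact ClusterServiceVersion names and versions vary by installation:

```
# The CSVs should report a Succeeded phase (names and versions differ per cluster).
oc get csv -n openshift-local-storage
oc get csv -n openshift-storage

# The worker nodes should be evenly distributed across datacenter1 and datacenter2.
oc get nodes -l node-role.kubernetes.io/worker -L topology.kubernetes.io/zone
```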
Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, select the Create a new StorageClass using the local storage devices option. Click Next . Important You are prompted to install the Local Storage Operator if it is not already installed. Click Install , and follow the procedure as described in Installing Local Storage Operator . In the Create local volume set page, provide the following information: Enter a name for the LocalVolumeSet and the StorageClass . By default, the local volume set name appears for the storage class name. You can change the name. Choose one of the following: Disks on all nodes Uses the available disks that match the selected filters on all the nodes. Disks on selected nodes Uses the available disks that match the selected filters only on selected nodes. Important If the nodes selected do not match the OpenShift Data Foundation cluster requirement of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide. Select SSD or NVMe to build a supported configuration. You can select HDDs for unsupported test installations. Expand the Advanced section and set the following options: Volume Mode Block is selected by default. Device Type Select one or more device types from the dropdown list. Disk Size Set a minimum size of 100GB for the device and the maximum available size of the device that needs to be included. Maximum Disks Limit This indicates the maximum number of PVs that can be created on a node. If this field is left empty, then PVs are created for all the available disks on the matching nodes. Click Next . A pop-up to confirm the creation of the LocalVolumeSet is displayed. Click Yes to continue. In the Capacity and nodes page, configure the following: Select the Enable arbiter checkbox if you want to use the stretch cluster. This option is available only when all the prerequisites for arbiter are fulfilled and the selected nodes are populated. For more information, see Arbiter stretch cluster requirements in Requirements for enabling stretch cluster . Select the arbiter zone from the dropdown list. Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up. The Selected nodes list shows the nodes based on the storage class. Click Next . Optional: In the Security and network page, configure the following based on your requirement: Select the Enable encryption checkbox to encrypt block and file storage. Choose one or both of the following Encryption level : Cluster-wide encryption Encrypts the entire cluster (block and file). StorageClass encryption Creates encrypted persistent volume (block only) using encryption enabled storage class. Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. Key Management Service Provider is set to Vault by default. Enter Vault Service Name , host Address of Vault server ('https:// <hostname or ip> '), Port number and Token . Expand Advanced Settings to enter the additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in the Backend Path that is dedicated and unique to OpenShift Data Foundation. 
Optional: Enter the TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Choose one of the following: Default (SDN) If you are using a single network. Custom (Multus) If you are using multiple network interfaces. Select a Public Network Interface from the dropdown. Select a Cluster Network Interface from the dropdown. Note If you are using only one additional network interface, select the single NetworkAttachmentDefinition , that is, ocs-public-cluster for the Public Network Interface, and leave the Cluster Network Interface blank. Click Next . In the Review and create page, review the configuration details. To modify any configuration settings, click Back to go back to the configuration page. Click Create StorageSystem . For cluster-wide encryption with Key Management System (KMS), if you have used the Vault Key/Value (KV) secret engine API, version 2, then you need to edit the configmap . In the OpenShift Web Console, navigate to Workloads ConfigMaps . To view the KMS connection details, click ocs-kms-connection-details . Edit the configmap. Click Action menu (...) Edit ConfigMap . Set the VAULT_BACKEND parameter to v2 . Click Save . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources . Verify that the Status of StorageCluster is Ready and has a green tick mark next to it. For arbiter mode of deployment: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources ocs-storagecluster . In the YAML tab, search for the arbiter key in the spec section and ensure enable is set to true . To verify that all the components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation installation . 5.6. Verifying OpenShift Data Foundation deployment To verify that OpenShift Data Foundation is deployed correctly: Verify the state of the pods . Verify that the OpenShift Data Foundation cluster is healthy . Verify that the Multicloud Object Gateway is healthy . Verify that the OpenShift Data Foundation specific storage classes exist . 5.6.1. Verifying the state of the pods Procedure Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. For more information about the expected number of pods for each component and how it varies depending on the number of nodes, see Table 5.1, "Pods corresponding to OpenShift Data Foundation cluster" . Click the Running and Completed tabs to verify that the following pods are in Running and Completed state: Table 5.1. 
Pods corresponding to OpenShift Data Foundation cluster Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any worker node) ocs-metrics-exporter-* (1 pod on any worker node) odf-operator-controller-manager-* (1 pod on any worker node) odf-console-* (1 pod on any worker node) csi-addons-controller-manager-* (1 pod on any worker node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any worker node) Multicloud Object Gateway noobaa-operator-* (1 pod on any worker node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) MON rook-ceph-mon-* (5 pods are distributed across 3 zones, 2 per data-center zones and 1 in arbiter zone) MGR rook-ceph-mgr-* (2 pods on any storage node) MDS rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods are distributed across 2 data-center zones) RGW rook-ceph-rgw-ocs-storagecluster-cephobjectstore-* (2 pods are distributed across 2 data-center zones) CSI cephfs csi-cephfsplugin-* (1 pod on each worker node) csi-cephfsplugin-provisioner-* (2 pods distributed across worker nodes) rbd csi-rbdplugin-* (1 pod on each worker node) csi-rbdplugin-provisioner-* (2 pods distributed across worker nodes) rook-ceph-crashcollector rook-ceph-crashcollector-* (1 pod on each storage node and 1 pod in arbiter zone) OSD rook-ceph-osd-* (1 pod for each device) rook-ceph-osd-prepare-ocs-deviceset-* (1 pod for each device) 5.6.2. Verifying the OpenShift Data Foundation cluster is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 5.6.3. Verifying the Multicloud Object Gateway is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation . 5.6.4. Verifying that the specific storage classes exist Procedure Click Storage Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs openshift-storage.noobaa.io ocs-storagecluster-ceph-rgw 5.7. Install Zone Aware Sample Application Deploy a zone aware sample application to validate whether an OpenShift Data Foundation, stretch cluster setup is configured correctly. Important With latency between the data zones, one can expect to see performance degradation compared to an OpenShift cluster with low latency between nodes and zones (for example, all nodes in the same location). 
How much the performance degrades depends on the latency between the zones and on how the application uses the storage (for example, heavy write traffic). Ensure that you test the critical applications with the stretch cluster configuration to ensure sufficient application performance for the required service levels. A ReadWriteMany (RWX) Persistent Volume Claim (PVC) is created using the ocs-storagecluster-cephfs storage class. Multiple pods use the newly created RWX PVC at the same time. The application used is called File Uploader. The following steps demonstrate how an application is spread across topology zones so that it is still available in the event of a site outage: Note This demonstration is possible since this application shares the same RWX volume for storing files. It works for persistent data access as well because Red Hat OpenShift Data Foundation is configured as a stretched cluster with zone awareness and high availability. Create a new project. Deploy the example PHP application called file-uploader. Example Output: View the build log and wait until the application is deployed. Example Output: The command prompt returns out of the tail mode once you see Push successful . Note The new-app command deploys the application directly from the git repository and does not use the OpenShift template, hence the OpenShift route resource is not created by default. You need to create the route manually. Scaling the application Scale the application to four replicas and expose its services to make the application zone aware and available. You should have four file-uploader pods in a few minutes. Repeat the above command until there are four file-uploader pods in the Running status. Create a PVC and attach it to the application. This command: Creates a PVC. Updates the application deployment to include a volume definition. Updates the application deployment to attach a volume mount at the specified mount-path. Creates a new deployment with the four application pods. Check the result of adding the volume. Example Output: Notice the ACCESS MODE is set to RWX. All four file-uploader pods are using the same RWX volume. Without this access mode, OpenShift does not attempt to attach multiple pods to the same Persistent Volume (PV) reliably. If you attempt to scale up deployments that are using a ReadWriteOnce (RWO) PV, the pods may get colocated on the same node. 5.7.1. Modify Deployment to be Zone Aware Currently, the file-uploader Deployment is not zone aware and can schedule all the pods in the same zone. In this case, if there is a site outage then the application is unavailable. For more information, see Controlling pod placement by using pod topology spread constraints . Add the pod placement rule in the application deployment configuration to make the application zone aware. Run the following command, and review the output: Example Output: Edit the deployment to use the topology zone labels. Add the following new lines between the Start and End markers (shown in the output in the previous step): Example output: Scale down the deployment to zero pods and then back to four pods. This is needed because the deployment changed in terms of pod placement. Scaling down to zero pods Example output: Scaling up to four pods Example output: Verify that the four pods are spread across the four nodes in the datacenter1 and datacenter2 zones. Example output: Search for the zone labels used. Example output: Use the file-uploader web application in your browser to upload new files. Find the route that is created. 
Example Output: Point your browser to the web application using the route from the previous step. The web application lists all the uploaded files and offers the ability to upload new ones as well as download the existing data. Right now, there is nothing. Select an arbitrary file from your local machine and upload it to the application. Click Choose file to select an arbitrary file. Click Upload . Figure 5.1. A simple PHP-based file upload tool Click List uploaded files to see the list of all currently uploaded files. Note The OpenShift Container Platform image registry, ingress routing, and monitoring services are not zone aware. 5.8. Recovering OpenShift Data Foundation stretch cluster Given that the stretch cluster disaster recovery solution provides resiliency in the face of a complete or partial site outage, it is important to understand the different methods of recovery for applications and their storage. How the application is architected determines how soon it becomes available again on the active zone. There are different methods of recovery for applications and their storage depending on the site outage. The recovery time depends on the application architecture. The different methods of recovery are as follows: Recovery for zone-aware HA applications with RWX storage . Recovery for HA applications with RWX storage . Recovery for applications with RWO storage . Recovery for StatefulSet pods . 5.8.1. Understanding zone failure For the purpose of this section, zone failure is considered as a failure where all OpenShift Container Platform master and worker nodes in a zone are no longer communicating with the resources in the second data zone (for example, powered down nodes). If communication between the data zones is still partially working (intermittently up or down), the cluster, storage, and network admins should disconnect the communication path between the data zones for recovery to succeed. Important After you install the sample application, power off the OpenShift Container Platform nodes (at least the nodes with OpenShift Data Foundation devices) to test the failure of a data zone and to validate that your file-uploader application is available and that you can upload new files. 5.8.2. Recovery for zone-aware HA applications with RWX storage Applications that are deployed with topologyKey: topology.kubernetes.io/zone , have one or more replicas scheduled in each data zone, and are using shared storage, that is, a ReadWriteMany (RWX) CephFS volume, terminate themselves in the failed zone after a few minutes, and new pods are rolled in and remain stuck in the Pending state until the zones are recovered. An example of this type of application is detailed in the Install Zone Aware Sample Application section. Important During zone recovery, if application pods go into the CrashLoopBackOff (CLBO) state with a permission denied error while mounting the CephFS volume, restart the nodes where the pods are scheduled. Wait for some time and then check if the pods are running again. 5.8.3. Recovery for HA applications with RWX storage Applications that use topologyKey: kubernetes.io/hostname or no topology configuration have no protection against all of the application replicas being in the same zone. Note This can happen even with podAntiAffinity and topologyKey: kubernetes.io/hostname in the Pod spec because this anti-affinity rule is host-based and not zone-based. 
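For contrast with the host-based rule described in the note above, the following is a hypothetical sketch of how such a deployment could be given a zone-scoped spread rule instead. It reuses the file-uploader deployment and its deployment: file-uploader label from this chapter purely as placeholders, and is equivalent in effect to the oc edit change shown earlier:

```
# Hypothetical sketch: add a zone-scoped topology spread constraint with a JSON merge patch.
# Replace the deployment name, namespace, and label selector with your own application's values.
oc patch deployment file-uploader -n my-shared-storage --type=merge -p '
spec:
  template:
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            deployment: file-uploader
'
```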
If this happens and all replicas are located in the zone that fails, the application using ReadWriteMany (RWX) storage takes 6-8 minutes to recover on the active zone. This pause is for the OpenShift Container Platform nodes in the failed zone to become NotReady (60 seconds) and then for the default pod eviction timeout to expire (300 seconds). 5.8.4. Recovering applications with RWO storage Applications that use ReadWriteOnce (RWO) storage have a known behavior described in this Kubernetes issue . Because of this issue, if there is a data zone failure, any application pods in that zone mounting RWO volumes (for example, cephrbd based volumes) are stuck with Terminating status after 6-8 minutes and are not re-created on the active zone without manual intervention. Check the OpenShift Container Platform nodes with a status of NotReady . There may be an issue that prevents the nodes from communicating with the OpenShift control plane. However, the nodes may still be performing I/O operations against Persistent Volumes (PVs). If two pods are concurrently writing to the same RWO volume, there is a risk of data corruption. Ensure that processes on the NotReady node are either terminated or blocked until they are terminated. Example solutions: Use an out-of-band management system to power off a node, with confirmation, to ensure process termination. Withdraw a network route that is used by nodes at a failed site to communicate with storage. Note Before restoring service to the failed zone or nodes, confirm that all the pods with PVs have terminated successfully. To get the Terminating pods to recreate on the active zone, you can either force delete the pod or delete the finalizer on the associated PV. Once one of these two actions is completed, the application pod should recreate on the active zone and successfully mount its RWO storage. Force deleting the pod Force deletions do not wait for confirmation from the kubelet that the pod has been terminated. <PODNAME> Is the name of the pod <NAMESPACE> Is the project namespace Deleting the finalizer on the associated PV Find the associated PV for the Persistent Volume Claim (PVC) that is mounted by the Terminating pod and delete the finalizer using the oc patch command. <PV_NAME> Is the name of the PV An easy way to find the associated PV is to describe the Terminating pod. If you see a multi-attach warning, it should have the PV names in the warning (for example, pvc-0595a8d2-683f-443b-aee0-6e547f5f5a7c ). <PODNAME> Is the name of the pod <NAMESPACE> Is the project namespace Example output: 5.8.5. Recovery for StatefulSet pods Pods that are part of a StatefulSet have a similar issue to pods mounting ReadWriteOnce (RWO) volumes. More information is referenced in the Kubernetes resource StatefulSet considerations . To get the pods that are part of a StatefulSet to re-create on the active zone after 6-8 minutes, you need to force delete the pod with the same requirements (that is, OpenShift Container Platform node powered off or communication disconnected) as pods with RWO volumes.
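The per-pod commands for these two options are shown below. As an additional, hedged helper sketch that loops over all stuck pods at once, the namespace is a placeholder taken from the sample application in this chapter, and the finalizer patch must only be run after you have confirmed that no node in the failed zone can still write to the volume:

```
NS=my-shared-storage   # placeholder namespace; use the project that owns the stuck pods

# Force delete every pod that is stuck in Terminating (does not wait for kubelet confirmation).
for pod in $(oc get pods -n "$NS" --no-headers | awk '$3 == "Terminating" {print $1}'); do
  oc delete pod "$pod" -n "$NS" --grace-period=0 --force
done

# Alternatively, clear the finalizer on the associated PV once it is safe to do so.
# Take the PV name from the Multi-Attach warning printed by `oc describe pod`.
PV_NAME=pvc-0595a8d2-683f-443b-aee0-6e547f5f5a7c   # example value from the output above; replace with your PV
oc patch pv "$PV_NAME" -p '{"metadata":{"finalizers":[]}}' --type=merge
```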
[ "topology.kubernetes.io/zone=arbiter for Master0 topology.kubernetes.io/zone=datacenter1 for Master1, Worker1, Worker2 topology.kubernetes.io/zone=datacenter2 for Master2, Worker3, Worker4", "oc label node <NODENAME> topology.kubernetes.io/zone= <LABEL>", "oc get nodes -l topology.kubernetes.io/zone= <LABEL> -o name", "oc get nodes -L topology.kubernetes.io/zone", "oc annotate namespace openshift-storage openshift.io/node-selector=", "kind: ConfigMap apiVersion: v1 metadata: name: ocs-kms-connection-details [...] data: KMS_PROVIDER: vault KMS_SERVICE_NAME: vault [...] VAULT_BACKEND: v2 [...]", "spec: arbiter: enable: true [..] nodeTopologies: arbiterLocation: arbiter #arbiter zone storageDeviceSets: - config: {} count: 1 [..] replica: 4 status: conditions: [..] failureDomain: zone", "oc new-project my-shared-storage", "oc new-app openshift/php:7.3-ubi8~https://github.com/christianh814/openshift-php-upload-demo --name=file-uploader", "Found image 4f2dcc0 (9 days old) in image stream \"openshift/php\" under tag \"7.2-ubi8\" for \"openshift/php:7.2- ubi8\" Apache 2.4 with PHP 7.2 ----------------------- PHP 7.2 available as container is a base platform for building and running various PHP 7.2 applications and frameworks. PHP is an HTML-embedded scripting language. PHP attempts to make it easy for developers to write dynamically generated web pages. PHP also offers built-in database integration for several commercial and non-commercial database management systems, so writing a database-enabled webpage with PHP is fairly simple. The most common use of PHP coding is probably as a replacement for CGI scripts. Tags: builder, php, php72, php-72 * A source build using source code from https://github.com/christianh814/openshift-php-upload-demo will be cr eated * The resulting image will be pushed to image stream tag \"file-uploader:latest\" * Use 'oc start-build' to trigger a new build --> Creating resources imagestream.image.openshift.io \"file-uploader\" created buildconfig.build.openshift.io \"file-uploader\" created deployment.apps \"file-uploader\" created service \"file-uploader\" created --> Success Build scheduled, use 'oc logs -f buildconfig/file-uploader' to track its progress. Application is not exposed. You can expose services to the outside world by executing one or more of the commands below: 'oc expose service/file-uploader' Run 'oc status' to view your app.", "oc logs -f bc/file-uploader -n my-shared-storage", "Cloning \"https://github.com/christianh814/openshift-php-upload-demo\" [...] 
Generating dockerfile with builder image image-registry.openshift-image-regis try.svc:5000/openshift/php@sha256:d97466f33999951739a76bce922ab17088885db610c 0e05b593844b41d5494ea STEP 1: FROM image-registry.openshift-image-registry.svc:5000/openshift/php@s ha256:d97466f33999951739a76bce922ab17088885db610c0e05b593844b41d5494ea STEP 2: LABEL \"io.openshift.build.commit.author\"=\"Christian Hernandez <christ [email protected]>\" \"io.openshift.build.commit.date\"=\"Sun Oct 1 1 7:15:09 2017 -0700\" \"io.openshift.build.commit.id\"=\"288eda3dff43b02f7f7 b6b6b6f93396ffdf34cb2\" \"io.openshift.build.commit.ref\"=\"master\" \" io.openshift.build.commit.message\"=\"trying to modularize\" \"io.openshift .build.source-location\"=\"https://github.com/christianh814/openshift-php-uploa d-demo\" \"io.openshift.build.image\"=\"image-registry.openshift-image-regi stry.svc:5000/openshift/php@sha256:d97466f33999951739a76bce922ab17088885db610 c0e05b593844b41d5494ea\" STEP 3: ENV OPENSHIFT_BUILD_NAME=\"file-uploader-1\" OPENSHIFT_BUILD_NAMESP ACE=\"my-shared-storage\" OPENSHIFT_BUILD_SOURCE=\"https://github.com/christ ianh814/openshift-php-upload-demo\" OPENSHIFT_BUILD_COMMIT=\"288eda3dff43b0 2f7f7b6b6b6f93396ffdf34cb2\" STEP 4: USER root STEP 5: COPY upload/src /tmp/src STEP 6: RUN chown -R 1001:0 /tmp/src STEP 7: USER 1001 STEP 8: RUN /usr/libexec/s2i/assemble ---> Installing application source => sourcing 20-copy-config.sh ---> 17:24:39 Processing additional arbitrary httpd configuration provide d by s2i => sourcing 00-documentroot.conf => sourcing 50-mpm-tuning.conf => sourcing 40-ssl-certs.sh STEP 9: CMD /usr/libexec/s2i/run STEP 10: COMMIT temp.builder.openshift.io/my-shared-storage/file-uploader-1:3 b83e447 Getting image source signatures [...]", "oc expose svc/file-uploader -n my-shared-storage", "oc scale --replicas=4 deploy/file-uploader -n my-shared-storage", "oc get pods -o wide -n my-shared-storage", "oc set volume deploy/file-uploader --add --name=my-shared-storage -t pvc --claim-mode=ReadWriteMany --claim-size=10Gi --claim-name=my-shared-storage --claim-class=ocs-storagecluster-cephfs --mount-path=/opt/app-root/src/uploaded -n my-shared-storage", "oc get pvc -n my-shared-storage", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE my-shared-storage Bound pvc-5402cc8a-e874-4d7e-af76-1eb05bd2e7c7 10Gi RWX ocs-storagecluster-cephfs 52s", "oc get deployment file-uploader -o yaml -n my-shared-storage | less", "[...] spec: progressDeadlineSeconds: 600 replicas: 4 revisionHistoryLimit: 10 selector: matchLabels: deployment: file-uploader strategy: rollingUpdate: maxSurge: 25% maxUnavailable: 25% type: RollingUpdate template: metadata: annotations: openshift.io/generated-by: OpenShiftNewApp creationTimestamp: null labels: deployment: file-uploader spec: # <-- Start inserted lines after here containers: # <-- End inserted lines before here - image: image-registry.openshift-image-registry.svc:5000/my-shared-storage/file-uploader@sha256:a458ea62f990e431ad7d5f84c89e2fa27bdebdd5e29c5418c70c56eb81f0a26b imagePullPolicy: IfNotPresent name: file-uploader [...]", "oc edit deployment file-uploader -n my-shared-storage", "[...] 
spec: topologySpreadConstraints: - labelSelector: matchLabels: deployment: file-uploader maxSkew: 1 topologyKey: topology.kubernetes.io/zone whenUnsatisfiable: DoNotSchedule - labelSelector: matchLabels: deployment: file-uploader maxSkew: 1 topologyKey: kubernetes.io/hostname whenUnsatisfiable: ScheduleAnyway nodeSelector: node-role.kubernetes.io/worker: \"\" containers: [...]", "deployment.apps/file-uploader edited", "oc scale deployment file-uploader --replicas=0 -n my-shared-storage", "deployment.apps/file-uploader scaled", "oc scale deployment file-uploader --replicas=4 -n my-shared-storage", "deployment.apps/file-uploader scaled", "oc get pods -o wide -n my-shared-storage | egrep '^file-uploader'| grep -v build | awk '{print USD7}' | sort | uniq -c", "1 perf1-mz8bt-worker-d2hdm 1 perf1-mz8bt-worker-k68rv 1 perf1-mz8bt-worker-ntkp8 1 perf1-mz8bt-worker-qpwsr", "oc get nodes -L topology.kubernetes.io/zone | grep datacenter | grep -v master", "perf1-mz8bt-worker-d2hdm Ready worker 35d v1.20.0+5fbfd19 datacenter1 perf1-mz8bt-worker-k68rv Ready worker 35d v1.20.0+5fbfd19 datacenter1 perf1-mz8bt-worker-ntkp8 Ready worker 35d v1.20.0+5fbfd19 datacenter2 perf1-mz8bt-worker-qpwsr Ready worker 35d v1.20.0+5fbfd19 datacenter2", "oc get route file-uploader -n my-shared-storage -o jsonpath --template=\"http://{.spec.host}{'\\n'}\"", "http://file-uploader-my-shared-storage.apps.cluster-ocs4-abdf.ocs4-abdf.sandbox744.opentlc.com", "oc delete pod <PODNAME> --grace-period=0 --force --namespace <NAMESPACE>", "oc patch -n openshift-storage pv/ <PV_NAME> -p '{\"metadata\":{\"finalizers\":[]}}' --type=merge", "oc describe pod <PODNAME> --namespace <NAMESPACE>", "[...] Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 4m5s default-scheduler Successfully assigned openshift-storage/noobaa-db-pg-0 to perf1-mz8bt-worker-d2hdm Warning FailedAttachVolume 4m5s attachdetach-controller Multi-Attach error for volume \"pvc-0595a8d2-683f-443b-aee0-6e547f5f5a7c\" Volume is already exclusively attached to one node and can't be attached to another" ]
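In addition to the commands above, a hedged one-liner can be convenient while running the zone failure test, mapping each file-uploader pod to the zone of the node it runs on (it assumes the my-shared-storage project and the deployment: file-uploader label used in this chapter):

```
# Print the node and zone label for every node that currently hosts a file-uploader pod.
for node in $(oc get pods -n my-shared-storage -l deployment=file-uploader -o jsonpath='{.items[*].spec.nodeName}'); do
  oc get node "$node" -L topology.kubernetes.io/zone --no-headers
done
```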
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/configuring_openshift_data_foundation_disaster_recovery_for_openshift_workloads/introduction-to-stretch-cluster-disaster-recovery_stretch-cluster
Chapter 12. SelfSubjectAccessReview [authorization.k8s.io/v1]
Chapter 12. SelfSubjectAccessReview [authorization.k8s.io/v1] Description SelfSubjectAccessReview checks whether or not the current user can perform an action. Not filling in a spec.namespace means "in all namespaces". Self is a special case, because users should always be able to check whether they can perform an action. Type object Required spec 12.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object SelfSubjectAccessReviewSpec is a description of the access request. Exactly one of ResourceAuthorizationAttributes and NonResourceAuthorizationAttributes must be set status object SubjectAccessReviewStatus 12.1.1. .spec Description SelfSubjectAccessReviewSpec is a description of the access request. Exactly one of ResourceAuthorizationAttributes and NonResourceAuthorizationAttributes must be set Type object Property Type Description nonResourceAttributes object NonResourceAttributes includes the authorization attributes available for non-resource requests to the Authorizer interface resourceAttributes object ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface 12.1.2. .spec.nonResourceAttributes Description NonResourceAttributes includes the authorization attributes available for non-resource requests to the Authorizer interface Type object Property Type Description path string Path is the URL path of the request verb string Verb is the standard HTTP verb 12.1.3. .spec.resourceAttributes Description ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface Type object Property Type Description group string Group is the API Group of the Resource. "*" means all. name string Name is the name of the resource being requested for a "get" or deleted for a "delete". "" (empty) means all. namespace string Namespace is the namespace of the action being requested. Currently, there is no distinction between no namespace and all namespaces "" (empty) is defaulted for LocalSubjectAccessReviews "" (empty) is empty for cluster-scoped resources "" (empty) means "all" for namespace scoped resources from a SubjectAccessReview or SelfSubjectAccessReview resource string Resource is one of the existing resource types. "*" means all. subresource string Subresource is one of the existing resource types. "" means none. verb string Verb is a kubernetes resource API verb, like: get, list, watch, create, update, delete, proxy. "*" means all. version string Version is the API Version of the Resource. "*" means all. 12.1.4. .status Description SubjectAccessReviewStatus Type object Required allowed Property Type Description allowed boolean Allowed is required. True if the action would be allowed, false otherwise. denied boolean Denied is optional. 
True if the action would be denied, otherwise false. If both allowed is false and denied is false, then the authorizer has no opinion on whether to authorize the action. Denied may not be true if Allowed is true. evaluationError string EvaluationError is an indication that some error occurred during the authorization check. It is entirely possible to get an error and be able to continue determine authorization status in spite of it. For instance, RBAC can be missing a role, but enough roles are still present and bound to reason about the request. reason string Reason is optional. It indicates why a request was allowed or denied. 12.2. API endpoints The following API endpoints are available: /apis/authorization.k8s.io/v1/selfsubjectaccessreviews POST : create a SelfSubjectAccessReview 12.2.1. /apis/authorization.k8s.io/v1/selfsubjectaccessreviews Table 12.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create a SelfSubjectAccessReview Table 12.2. Body parameters Parameter Type Description body SelfSubjectAccessReview schema Table 12.3. HTTP responses HTTP code Reponse body 200 - OK SelfSubjectAccessReview schema 201 - Created SelfSubjectAccessReview schema 202 - Accepted SelfSubjectAccessReview schema 401 - Unauthorized Empty
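As a short, hedged illustration of this API in practice, the standard CLI helper oc auth can-i (or kubectl auth can-i) submits a SelfSubjectAccessReview on your behalf, and the same request can also be created directly from a manifest; the verb, resource, and namespace below are placeholders:

```
# The CLI helper creates a SelfSubjectAccessReview under the covers and prints yes/no.
oc auth can-i create deployments --namespace default

# Equivalent raw request against the endpoint described above; the returned object
# carries the result in status.allowed.
oc create -f - -o yaml <<'EOF'
apiVersion: authorization.k8s.io/v1
kind: SelfSubjectAccessReview
spec:
  resourceAttributes:
    group: apps
    resource: deployments
    verb: create
    namespace: default
EOF
```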
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/authorization_apis/selfsubjectaccessreview-authorization-k8s-io-v1
Chapter 5. Load Balancing, Scheduling, and Migration
Chapter 5. Load Balancing, Scheduling, and Migration 5.1. Load Balancing, Scheduling, and Migration Individual hosts have finite hardware resources, and are susceptible to failure. To mitigate against failure and resource exhaustion, hosts are grouped into clusters, which are essentially a grouping of shared resources. A Red Hat Virtualization environment responds to changes in demand for host resources using load balancing policy, scheduling, and migration. The Manager is able to ensure that no single host in a cluster is responsible for all of the virtual machines in that cluster. Conversely, the Manager is able to recognize an underutilized host, and migrate all virtual machines off of it, allowing an administrator to shut down that host to save power. Available resources are checked as a result of three events: Virtual machine start - Resources are checked to determine on which host a virtual machine will start. Virtual machine migration - Resources are checked in order to determine an appropriate target host. Time elapses - Resources are checked at a regular interval to determine whether individual host load is in compliance with cluster load balancing policy. The Manager responds to changes in available resources by using the load balancing policy for a cluster to schedule the migration of virtual machines from one host in a cluster to another. The relationship between load balancing policy, scheduling, and virtual machine migration are discussed in the following sections.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/technical_reference/chap-load_balancing_scheduling_and_migration
32.5. Using sadump on Fujitsu PRIMEQUEST systems
32.5. Using sadump on Fujitsu PRIMEQUEST systems On Fujitsu PRIMEQUEST systems, you can also enable the stand-alone dump ( sadump ) functionality provided by the hardware. This utility operates by combining kdump , which is a standard part of Red Hat Enterprise Linux, and the additional sadump BIOS-based function provided by Fujitsu. Note For the purpose of ensuring a dump is captured in the event of an unexpected reboot, Fujitsu recommends that sadump is always enabled on PRIMEQUEST hardware. The sadump utility is usually invoked when a kdump cannot be processed because Red Hat Enterprise Linux has become unresponsive. These conditions can include: Red Hat Enterprise Linux panic or hang before kdump starts An error while kdump is working How to use sadump To use sadump , complete the following steps: Install the following packages according to the kernel version in use: Configure UEFI for sadump For more details, see the FUJITSU Server PRIMEQUEST 2000 Series Installation Manual. Configure Red Hat Enterprise Linux for sadump For more details, see Section 32.5.1, "Configure Red Hat Enterprise Linux for sadump" . Start sadump For more details, see the FUJITSU Server PRIMEQUEST 2000 Series Installation Manual. Check the memory dump For more details, see Section 32.5.2, "Check the memory dump" . 32.5.1. Configure Red Hat Enterprise Linux for sadump Install and configure kdump as described in Section 32.1, "Installing the kdump Service" and Section 32.2, "Configuring the kdump Service" . Ensure that kdump starts as expected for sadump : Configure Red Hat Enterprise Linux to not reboot after a kernel panic: By default, Red Hat Enterprise Linux reboots automatically after a kernel panic, which prevents sadump from starting. To avoid this behavior, configure the /etc/sysctl.conf file as follows: Configure Red Hat Enterprise Linux to start kdump by Nonmaskable Interrupt (NMI): In the procedure for starting sadump , kdump must first be started by NMI. Configure /etc/sysctl.conf as follows: Ensure that kdump behaves correctly for sadump : Configure Red Hat Enterprise Linux to stop after kdump : By default, Red Hat Enterprise Linux reboots automatically when kdump fails, which prevents sadump from starting. To avoid this behavior, configure the /etc/kdump.conf file as follows: or Configure Red Hat Enterprise Linux to start sadump : Configure /etc/kdump.conf to not block the System Management Interrupt (SMI) and thus to enable sadump to start:
[ "yum install kernel-debuginfo kernel-debuginfo-common", "kernel.panic=0", "kernel.unknown_nmi_panic=1", "default halt", "default shell", "blacklist kvm-intel" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-ppc-sadump-configure
Chapter 7. Red Hat Quay Application Programming Interface (API)
Chapter 7. Red Hat Quay Application Programming Interface (API) This API allows you to perform many of the operations required to work with Red Hat Quay repositories, users, and organizations. 7.1. Authorization oauth2_implicit Scopes The following scopes are used to control access to the API endpoints: Scope Description repo:read This application will be able to view and pull all repositories visible to the granting user or robot account repo:write This application will be able to view, push and pull to all repositories to which the granting user or robot account has write access repo:admin This application will have administrator access to all repositories to which the granting user or robot account has access repo:create This application will be able to create repositories in to any namespaces that the granting user or robot account is allowed to create repositories user:read This application will be able to read user information such as username and email address. org:admin This application will be able to administer your organizations including creating robots, creating teams, adjusting team membership, and changing billing settings. You should have absolute trust in the requesting application before granting this permission. super:user This application will be able to administer your installation including managing users, managing organizations and other features found in the superuser panel. You should have absolute trust in the requesting application before granting this permission. user:admin This application will be able to administer your account including creating robots and granting them permissions to your repositories. You should have absolute trust in the requesting application before granting this permission. 7.2. appspecifictokens Manages app specific tokens for the current user. 7.2.1. createAppToken Create a new app specific token for user. POST /api/v1/user/apptoken Authorizations: oauth2_implicit ( user:admin ) Request body schema (application/json) Description of a new token. Name Description Schema title required Friendly name to help identify the token string Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST \ -H "Authorization: Bearer <access_token>" \ -H "Content-Type: application/json" \ -d '{ "title": "MyAppToken" }' \ "http://quay-server.example.com/api/v1/user/apptoken" 7.2.2. listAppTokens Lists the app specific tokens for the user. GET /api/v1/user/apptoken Authorizations: oauth2_implicit ( user:admin ) Query parameters Type Name Description Schema query expiring optional If true, only returns those tokens expiring soon boolean Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <access_token>" \ "http://quay-server.example.com/api/v1/user/apptoken" 7.2.3. getAppToken Returns a specific app token for the user. 
GET /api/v1/user/apptoken/{token_uuid} Authorizations: oauth2_implicit ( user:admin ) Path parameters Type Name Description Schema path token_uuid required The uuid of the app specific token string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <access_token>" \ "http://quay-server.example.com/api/v1/user/apptoken/<token_uuid>" 7.2.4. revokeAppToken Revokes a specific app token for the user. DELETE /api/v1/user/apptoken/{token_uuid} Authorizations: oauth2_implicit ( user:admin ) Path parameters Type Name Description Schema path token_uuid required The uuid of the app specific token string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X DELETE \ -H "Authorization: Bearer <access_token>" \ "http://quay-server.example.com/api/v1/user/apptoken/<token_uuid>" 7.3. build Create, list, cancel and get status/logs of repository builds. 7.3.1. getRepoBuildStatus Return the status for the builds specified by the build uuids. GET /api/v1/repository/{repository}/build/{build_uuid}/status Authorizations: oauth2_implicit ( repo:read ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string path build_uuid required The UUID of the build string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.3.2. getRepoBuildLogs Return the build logs for the build specified by the build uuid. GET /api/v1/repository/{repository}/build/{build_uuid}/logs Authorizations: oauth2_implicit ( repo:read ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string path build_uuid required The UUID of the build string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.3.3. getRepoBuild Returns information about a build. GET /api/v1/repository/{repository}/build/{build_uuid} Authorizations: oauth2_implicit ( repo:read ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string path build_uuid required The UUID of the build string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.3.4. cancelRepoBuild Cancels a repository build. DELETE /api/v1/repository/{repository}/build/{build_uuid} Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string path build_uuid required The UUID of the build string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.3.5. requestRepoBuild Request that a repository be built and pushed from the specified input. 
POST /api/v1/repository/{repository}/build/ Authorizations: oauth2_implicit ( repo:write ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string Request body schema (application/json) Description of a new repository build. Name Description Schema file_id optional The file id that was generated when the build spec was uploaded string archive_url optional The URL of the .tar.gz to build. Must start with "http" or "https". string subdirectory optional Subdirectory in which the Dockerfile can be found. You can only specify this or dockerfile_path string dockerfile_path optional Path to a dockerfile. You can only specify this or subdirectory. string context optional Pass in the context for the dockerfile. This is optional. string pull_robot optional Username of a Quay robot account to use as pull credentials string tags optional The tags to which the built images will be pushed. If none specified, "latest" is used. array of string non-empty unique Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.3.6. getRepoBuilds Get the list of repository builds. GET /api/v1/repository/{repository}/build/ Authorizations: oauth2_implicit ( repo:read ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string Query parameters Type Name Description Schema query since optional Returns all builds since the given unix timecode integer query limit optional The maximum number of builds to return integer Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.4. discovery API discovery information. 7.4.1. discovery List all of the API endpoints available in the swagger API format. GET /api/v1/discovery Authorizations: Query parameters Type Name Description Schema query internal optional Whether to include internal APIs. boolean Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET "https://<quay-server.example.com>/api/v1/discovery?query=true" \ -H "Authorization: Bearer <access_token>" 7.5. error Error details API. 7.5.1. getErrorDescription Get a detailed description of the error. GET /api/v1/error/{error_type} Authorizations: Path parameters Type Name Description Schema path error_type required The error code identifying the type of error. string Responses HTTP Code Description Schema 200 Successful invocation ApiErrorDescription 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET "https://<quay-server.example.com>/api/v1/error/<error_type>" \ -H "Authorization: Bearer <access_token>" 7.6. globalmessages Messages API. 7.6.1. createGlobalMessage Create a message. 
POST /api/v1/messages Authorizations: oauth2_implicit ( super:user ) Request body schema (application/json) Create a new message Name Description Schema message required A single message object Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST "https://<quay-server.example.com>/api/v1/messages" \ -H "Authorization: Bearer <access_token>" \ -H "Content-Type: application/json" \ -d '{ "message": { "content": "Hi", "media_type": "text/plain", "severity": "info" } }' 7.6.2. getGlobalMessages Return a super users messages. GET /api/v1/messages Authorizations: Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET "https://<quay-server.example.com>/api/v1/messages" \ -H "Authorization: Bearer <access_token>" 7.6.3. deleteGlobalMessage Delete a message. DELETE /api/v1/message/{uuid} Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path uuid required string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X DELETE "https://<quay-server.example.com>/api/v1/message/<uuid>" \ -H "Authorization: Bearer <access_token>" 7.7. logs Access usage logs for organizations or repositories. 7.7.1. getAggregateUserLogs Returns the aggregated logs for the current user. GET /api/v1/user/aggregatelogs Authorizations: oauth2_implicit ( user:admin ) Query parameters Type Name Description Schema query performer optional Username for which to filter logs. string query endtime optional Latest time for logs. Format: "%m/%d/%Y" in UTC. string query starttime optional Earliest time for logs. Format: "%m/%d/%Y" in UTC. string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ -H "Accept: application/json" \ "<quay-server.example.com>/api/v1/user/aggregatelogs?performer=<username>&starttime=<MM/DD/YYYY>&endtime=<MM/DD/YYYY>" 7.7.2. exportUserLogs Returns the aggregated logs for the current user. POST /api/v1/user/exportlogs Authorizations: oauth2_implicit ( user:admin ) Query parameters Type Name Description Schema query endtime optional Latest time for logs. Format: "%m/%d/%Y" in UTC. string query starttime optional Earliest time for logs. Format: "%m/%d/%Y" in UTC. string Request body schema (application/json) Configuration for an export logs operation Name Description Schema callback_url optional The callback URL to invoke with a link to the exported logs string callback_email optional The e-mail address at which to e-mail a link to the exported logs string Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ -H "Content-Type: application/json" \ -H "Accept: application/json" \ -d '{ "starttime": "<MM/DD/YYYY>", "endtime": "<MM/DD/YYYY>", "callback_email": "[email protected]" }' \ "http://<quay-server.example.com>/api/v1/user/exportlogs" 7.7.3. 
listUserLogs List the logs for the current user. GET /api/v1/user/logs Authorizations: oauth2_implicit ( user:admin ) Query parameters Type Name Description Schema query next_page optional The page token for the page string query performer optional Username for which to filter logs. string query endtime optional Latest time for logs. Format: "%m/%d/%Y" in UTC. string query starttime optional Earliest time for logs. Format: "%m/%d/%Y" in UTC. string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET -H "Authorization: Bearer <bearer_token>" -H "Accept: application/json" "<quay-server.example.com>/api/v1/user/logs" 7.7.4. getAggregateOrgLogs Gets the aggregated logs for the specified organization. GET /api/v1/organization/{orgname}/aggregatelogs Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path orgname required The name of the organization string Query parameters Type Name Description Schema query performer optional Username for which to filter logs. string query endtime optional Latest time for logs. Format: "%m/%d/%Y" in UTC. string query starttime optional Earliest time for logs. Format: "%m/%d/%Y" in UTC. string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ -H "Accept: application/json" \ "<quay-server.example.com>/api/v1/organization/{orgname}/aggregatelogs" 7.7.5. exportOrgLogs Exports the logs for the specified organization. POST /api/v1/organization/{orgname}/exportlogs Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path orgname required The name of the organization string Query parameters Type Name Description Schema query endtime optional Latest time for logs. Format: "%m/%d/%Y" in UTC. string query starttime optional Earliest time for logs. Format: "%m/%d/%Y" in UTC. string Request body schema (application/json) Configuration for an export logs operation Name Description Schema callback_url optional The callback URL to invoke with a link to the exported logs string callback_email optional The e-mail address at which to e-mail a link to the exported logs string Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ -H "Content-Type: application/json" \ -H "Accept: application/json" \ -d '{ "starttime": "<MM/DD/YYYY>", "endtime": "<MM/DD/YYYY>", "callback_email": "[email protected]" }' \ "http://<quay-server.example.com>/api/v1/organization/{orgname}/exportlogs" 7.7.6. listOrgLogs List the logs for the specified organization. GET /api/v1/organization/{orgname}/logs Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path orgname required The name of the organization string Query parameters Type Name Description Schema query next_page optional The page token for the page string query performer optional Username for which to filter logs. string query endtime optional Latest time for logs. Format: "%m/%d/%Y" in UTC. string query starttime optional Earliest time for logs. Format: "%m/%d/%Y" in UTC. 
string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ -H "Accept: application/json" \ "http://<quay-server.example.com>/api/v1/organization/{orgname}/logs" 7.7.7. getAggregateRepoLogs Returns the aggregated logs for the specified repository. GET /api/v1/repository/{repository}/aggregatelogs Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string Query parameters Type Name Description Schema query endtime optional Latest time for logs. Format: "%m/%d/%Y" in UTC. string query starttime optional Earliest time for logs. Format: "%m/%d/%Y" in UTC. string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ -H "Accept: application/json" \ "<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/aggregatelogs?starttime=<MM/DD/YYYY>&endtime=<MM/DD/YYYY>" 7.7.8. exportRepoLogs Queues an export of the logs for the specified repository. POST /api/v1/repository/{repository}/exportlogs Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string Query parameters Type Name Description Schema query endtime optional Latest time for logs. Format: "%m/%d/%Y" in UTC. string query starttime optional Earliest time for logs. Format: "%m/%d/%Y" in UTC. string Request body schema (application/json) Configuration for an export logs operation Name Description Schema callback_url optional The callback URL to invoke with a link to the exported logs string callback_email optional The e-mail address at which to e-mail a link to the exported logs string Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ -H "Content-Type: application/json" \ -H "Accept: application/json" \ -d '{ "starttime": "<MM/DD/YYYY>", "endtime": "<MM/DD/YYYY>", "callback_url": "http://your-callback-url.example.com" }' \ "http://<quay-server.example.com>/api/v1/repository/{repository}/exportlogs" 7.7.9. listRepoLogs List the logs for the specified repository. GET /api/v1/repository/{repository}/logs Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string Query parameters Type Name Description Schema query next_page optional The page token for the page string query endtime optional Latest time for logs. Format: "%m/%d/%Y" in UTC. string query starttime optional Earliest time for logs. Format: "%m/%d/%Y" in UTC. string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ -H "Accept: application/json" \ "http://<quay-server.example.com>/api/v1/repository/{repository}/logs" 7.8.
manifest Manage the manifests of a repository. 7.8.1. getManifestLabel Retrieves the label with the specific ID under the manifest. GET /api/v1/repository/{repository}/manifest/{manifestref}/labels/{labelid} Authorizations: oauth2_implicit ( repo:read ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string path manifestref required The digest of the manifest string path labelid required The ID of the label string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ -H "Accept: application/json" \ https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>/labels/<label_id> 7.8.2. deleteManifestLabel Deletes an existing label from a manifest. DELETE /api/v1/repository/{repository}/manifest/{manifestref}/labels/{labelid} Authorizations: oauth2_implicit ( repo:write ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string path manifestref required The digest of the manifest string path labelid required The ID of the label string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X DELETE \ -H "Authorization: Bearer <bearer_token>" \ https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>/labels/<labelid> 7.8.3. addManifestLabel Adds a new label into the tag manifest. POST /api/v1/repository/{repository}/manifest/{manifestref}/labels Authorizations: oauth2_implicit ( repo:write ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string path manifestref required The digest of the manifest string Request body schema (application/json) Adds a label to a manifest Name Description Schema key required The key for the label string value required The value for the label string media_type required The media type for this label Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ -H "Content-Type: application/json" \ --data '{ "key": "<key>", "value": "<value>", "media_type": "<media_type>" }' \ https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>/labels 7.8.4. listManifestLabels GET /api/v1/repository/{repository}/manifest/{manifestref}/labels Authorizations: oauth2_implicit ( repo:read ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. 
namespace/name string path manifestref required The digest of the manifest string Query parameters Type Name Description Schema query filter optional If specified, only labels matching the given prefix will be returned string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ -H "Accept: application/json" \ https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>/labels 7.8.5. getRepoManifest GET /api/v1/repository/{repository}/manifest/{manifestref} Authorizations: oauth2_implicit ( repo:read ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string path manifestref required The digest of the manifest string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ -H "Accept: application/json" \ https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref> 7.9. mirror 7.9.1. syncCancel Update the sync_status for a given Repository's mirroring configuration. POST /api/v1/repository/{repository}/mirror/sync-cancel Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST "https://<quay-server.example.com>/api/v1/repository/<namespace>/<repo>/mirror/sync-cancel" \ -H "Authorization: Bearer <access_token>" 7.9.2. syncNow Update the sync_status for a given Repository's mirroring configuration. POST /api/v1/repository/{repository}/mirror/sync-now Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST "https://<quay-server.example.com>/api/v1/repository/<namespace>/<repo>/mirror/sync-now" \ -H "Authorization: Bearer <access_token>" 7.9.3. getRepoMirrorConfig Return the Mirror configuration for a given Repository. GET /api/v1/repository/{repository}/mirror Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string Responses HTTP Code Description Schema 200 Successful invocation ViewMirrorConfig 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET "https://<quay-server.example.com>/api/v1/repository/<namespace>/<repo>/mirror" \ -H "Authorization: Bearer <access_token>" 7.9.4. changeRepoMirrorConfig Allows users to modify the repository's mirroring configuration. PUT /api/v1/repository/{repository}/mirror Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g.
namespace/name string Request body schema (application/json) Update the repository mirroring configuration. Name Description Schema is_enabled optional Used to enable or disable synchronizations. boolean external_reference optional Location of the external repository. string external_registry_username optional Username used to authenticate with external registry. external_registry_password optional Password used to authenticate with external registry. sync_start_date optional Determines the time this repository is ready for synchronization. string sync_interval optional Number of seconds after next_start_date to begin synchronizing. integer robot_username optional Username of robot which will be used for image pushes. string root_rule optional A list of glob-patterns used to determine which tags should be synchronized. object external_registry_config optional object Responses HTTP Code Description Schema 201 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X PUT "https://<quay-server.example.com>/api/v1/repository/<namespace>/<repo>/mirror" \ -H "Authorization: Bearer <access_token>" \ -H "Content-Type: application/json" \ -d '{ "is_enabled": <false>, 1 "external_reference": "<external_reference>", "external_registry_username": "<external_registry_username>", "external_registry_password": "<external_registry_password>", "sync_start_date": "<sync_start_date>", "sync_interval": <sync_interval>, "robot_username": "<robot_username>", "root_rule": { "rule": "<rule>", "rule_type": "<rule_type>" } }' 1 Disables automatic synchronization. 7.9.5. createRepoMirrorConfig Create a RepoMirrorConfig for a given Repository. POST /api/v1/repository/{repository}/mirror Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string Request body schema (application/json) Create the repository mirroring configuration. Name Description Schema is_enabled optional Used to enable or disable synchronizations. boolean external_reference required Location of the external repository. string external_registry_username optional Username used to authenticate with external registry. external_registry_password optional Password used to authenticate with external registry. sync_start_date required Determines the time this repository is ready for synchronization. string sync_interval required Number of seconds after next_start_date to begin synchronizing. integer robot_username required Username of robot which will be used for image pushes. string root_rule required A list of glob-patterns used to determine which tags should be synchronized. 
object external_registry_config optional object Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST "https://<quay-server.example.com>/api/v1/repository/<namespace>/<repo>/mirror" \ -H "Authorization: Bearer <access_token>" \ -H "Content-Type: application/json" \ -d '{ "is_enabled": <is_enabled>, "external_reference": "<external_reference>", "external_registry_username": "<external_registry_username>", "external_registry_password": "<external_registry_password>", "sync_start_date": "<sync_start_date>", "sync_interval": <sync_interval>, "robot_username": "<robot_username>", "root_rule": { "rule": "<rule>", "rule_type": "<rule_type>" } }' 7.10. namespacequota 7.10.1. listUserQuota GET /api/v1/user/quota Authorizations: oauth2_implicit ( user:admin ) Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.10.2. getOrganizationQuotaLimit GET /api/v1/organization/{orgname}/quota/{quota_id}/limit/{limit_id} Authorizations: Path parameters Type Name Description Schema path quota_id required string path limit_id required string path orgname required string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.10.3. changeOrganizationQuotaLimit PUT /api/v1/organization/{orgname}/quota/{quota_id}/limit/{limit_id} Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path quota_id required string path limit_id required string path orgname required string Request body schema (application/json) Description of changing organization quota limit Name Description Schema type optional Type of quota limit: "Warning" or "Reject" string threshold_percent optional Quota threshold, in percent of quota integer Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.10.4. deleteOrganizationQuotaLimit DELETE /api/v1/organization/{orgname}/quota/{quota_id}/limit/{limit_id} Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path quota_id required string path limit_id required string path orgname required string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.10.5. createOrganizationQuotaLimit POST /api/v1/organization/{orgname}/quota/{quota_id}/limit Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path quota_id required string path orgname required string Request body schema (application/json) Description of a new organization quota limit Name Description Schema type required Type of quota limit: "Warning" or "Reject" string threshold_percent required Quota threshold, in percent of quota integer Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError
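The following is a minimal example request for this endpoint. It assumes a superuser token and an existing quota ID; the server name, organization, quota ID, and threshold value are placeholders to adjust for your deployment.
Example command USD curl -X POST "https://<quay-server.example.com>/api/v1/organization/<orgname>/quota/<quota_id>/limit" \ -H "Authorization: Bearer <access_token>" \ -H "Content-Type: application/json" \ -d '{ "type": "Reject", "threshold_percent": 90 }'
7.10.6.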
listOrganizationQuotaLimit GET /api/v1/organization/{orgname}/quota/{quota_id}/limit Authorizations: Path parameters Type Name Description Schema path quota_id required string path orgname required string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.10.7. getUserQuotaLimit GET /api/v1/user/quota/{quota_id}/limit/{limit_id} Authorizations: oauth2_implicit ( user:admin ) Path parameters Type Name Description Schema path quota_id required string path limit_id required string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.10.8. listUserQuotaLimit GET /api/v1/user/quota/{quota_id}/limit Authorizations: oauth2_implicit ( user:admin ) Path parameters Type Name Description Schema path quota_id required string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.10.9. getOrganizationQuota GET /api/v1/organization/{orgname}/quota/{quota_id} Authorizations: Path parameters Type Name Description Schema path quota_id required string path orgname required string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.10.10. changeOrganizationQuota PUT /api/v1/organization/{orgname}/quota/{quota_id} Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path quota_id required string path orgname required string Request body schema (application/json) Description of a new organization quota Name Description Schema limit_bytes optional Number of bytes the organization is allowed integer limits optional Human readable storage capacity of the organization. Accepts SI units like Mi, Gi, or Ti, as well as non-standard units like GB or MB. Must be mutually exclusive with limit_bytes . string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.10.11. deleteOrganizationQuota DELETE /api/v1/organization/{orgname}/quota/{quota_id} Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path quota_id required string path orgname required string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.10.12. createOrganizationQuota Create a new organization quota. POST /api/v1/organization/{orgname}/quota Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path orgname required string Request body schema (application/json) Description of a new organization quota Name Description Schema limit_bytes required Number of bytes the organization is allowed integer limits optional Human readable storage capacity of the organization. Accepts SI units like Mi, Gi, or Ti, as well as non-standard units like GB or MB. Must be mutually exclusive with limit_bytes . string Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError
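A minimal example request for this endpoint is shown below. It assumes a superuser token; the server name, organization, and the 10737418240 value (10 GiB expressed in bytes) are placeholders chosen only for illustration.
Example command USD curl -X POST "https://<quay-server.example.com>/api/v1/organization/<orgname>/quota" \ -H "Authorization: Bearer <access_token>" \ -H "Content-Type: application/json" \ -d '{ "limit_bytes": 10737418240 }'
7.10.13.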
listOrganizationQuota GET /api/v1/organization/{orgname}/quota Authorizations: Path parameters Type Name Description Schema path orgname required string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.10.14. getUserQuota GET /api/v1/user/quota/{quota_id} Authorizations: oauth2_implicit ( user:admin ) Path parameters Type Name Description Schema path quota_id required string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.11. organization Manage organizations, members and OAuth applications. 7.11.1. createOrganization Create a new organization. POST /api/v1/organization/ Authorizations: oauth2_implicit ( user:admin ) Request body schema (application/json) Description of a new organization. Name Description Schema name required Organization username string email optional Organization contact email string recaptcha_response optional The (may be disabled) recaptcha response code for verification string Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST -H "Authorization: Bearer <bearer_token>" -H "Content-Type: application/json" -d '{ "name": "<new_organization_name>" }' "https://<quay-server.example.com>/api/v1/organization/" 7.11.2. validateProxyCacheConfig POST /api/v1/organization/{orgname}/validateproxycache Authorizations: Path parameters Type Name Description Schema path orgname required string Request body schema (application/json) Proxy cache configuration for an organization Name Description Schema upstream_registry required Name of the upstream registry that is to be cached string Responses HTTP Code Description Schema 202 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.11.3. getOrganizationCollaborators List outside collaborators of the specified organization. GET /api/v1/organization/{orgname}/collaborators Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path orgname required The name of the organization string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.11.4. getOrganizationApplication Retrieves the application with the specified client_id under the specified organization. GET /api/v1/organization/{orgname}/applications/{client_id} Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path client_id required The OAuth client ID string path orgname required The name of the organization string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.11.5. updateOrganizationApplication Updates an application under this organization. 
PUT /api/v1/organization/{orgname}/applications/{client_id} Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path client_id required The OAuth client ID string path orgname required The name of the organization string Request body schema (application/json) Description of an updated application. Name Description Schema name required The name of the application string redirect_uri required The URI for the application's OAuth redirect string application_uri required The URI for the application's homepage string description optional The human-readable description for the application string avatar_email optional The e-mail address of the avatar to use for the application string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.11.6. deleteOrganizationApplication Deletes the application under this organization. DELETE /api/v1/organization/{orgname}/applications/{client_id} Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path client_id required The OAuth client ID string path orgname required The name of the organization string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.11.7. createOrganizationApplication Creates a new application under this organization. POST /api/v1/organization/{orgname}/applications Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path orgname required The name of the organization string Request body schema (application/json) Description of a new organization application. Name Description Schema name required The name of the application string redirect_uri optional The URI for the application's OAuth redirect string application_uri optional The URI for the application's homepage string description optional The human-readable description for the application string avatar_email optional The e-mail address of the avatar to use for the application string Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.11.8. getOrganizationApplications List the applications for the specified organization. GET /api/v1/organization/{orgname}/applications Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path orgname required The name of the organization string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.11.9. getProxyCacheConfig Retrieves the proxy cache configuration of the organization. GET /api/v1/organization/{orgname}/proxycache Authorizations: Path parameters Type Name Description Schema path orgname required The name of the organization string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.11.10. deleteProxyCacheConfig Delete proxy cache configuration for the organization. 
DELETE /api/v1/organization/{orgname}/proxycache Authorizations: Path parameters Type Name Description Schema path orgname required The name of the organization string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.11.11. createProxyCacheConfig Creates proxy cache configuration for the organization. POST /api/v1/organization/{orgname}/proxycache Authorizations: Path parameters Type Name Description Schema path orgname required The name of the organization string Request body schema (application/json) Proxy cache configuration for an organization Name Description Schema upstream_registry required Name of the upstream registry that is to be cached string Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.11.12. getOrganizationMember Retrieves the details of a member of the organization. GET /api/v1/organization/{orgname}/members/{membername} Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path membername required The username of the organization member string path orgname required The name of the organization string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.11.13. removeOrganizationMember Removes a member from an organization, revoking all its repository privileges and removing it from all teams in the organization. DELETE /api/v1/organization/{orgname}/members/{membername} Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path membername required The username of the organization member string path orgname required The name of the organization string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.11.14. getOrganizationMembers List the human members of the specified organization. GET /api/v1/organization/{orgname}/members Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path orgname required The name of the organization string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.11.15. getOrganization Get the details for the specified organization. GET /api/v1/organization/{orgname} Authorizations: Path parameters Type Name Description Schema path orgname required The name of the organization string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ "<quay-server.example.com>/api/v1/organization/<organization_name>" 7.11.16. changeOrganizationDetails Change the details for the specified organization.
PUT /api/v1/organization/{orgname} Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path orgname required The name of the organization string Request body schema (application/json) Description of updates for an existing organization Name Description Schema email optional Organization contact email string invoice_email optional Whether the organization desires to receive emails for invoices boolean invoice_email_address optional The email address at which to receive invoices tag_expiration_s optional The number of seconds for tag expiration integer Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.11.17. deleteAdminedOrganization Deletes the specified organization. DELETE /api/v1/organization/{orgname} Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path orgname required The name of the organization string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X DELETE \ -H "Authorization: Bearer <bearer_token>" \ "<quay-server.example.com>/api/v1/organization/<organization_name>" 7.11.18. getApplicationInformation Get information on the specified application. GET /api/v1/app/{client_id} Authorizations: Path parameters Type Name Description Schema path client_id required The OAuth client ID string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.12. permission Manage repository permissions. 7.12.1. getUserTransitivePermission Fetch the permission for the specified user. GET /api/v1/repository/{repository}/permissions/user/{username}/transitive Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path username required The username of the user to which the permissions apply string path repository required The full path of the repository. e.g. namespace/name string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.12.2. getUserPermissions Get the permission for the specified user. GET /api/v1/repository/{repository}/permissions/user/{username} Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path username required The username of the user to which the permission applies string path repository required The full path of the repository. e.g. namespace/name string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.12.3. changeUserPermissions Update the permissions for an existing repository. PUT /api/v1/repository/{repository}/permissions/user/{username} Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path username required The username of the user to which the permission applies string path repository required The full path of the repository. e.g. namespace/name string Request body schema (application/json) Description of a user permission.
Name Description Schema role required Role to use for the user string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X PUT \ -H "Authorization: Bearer <bearer_token>" \ -H "Content-Type: application/json" \ -d '{"role": "admin"}' \ https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository>/permissions/user/<username> 7.12.4. deleteUserPermissions Delete the permission for the user. DELETE /api/v1/repository/{repository}/permissions/user/{username} Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path username required The username of the user to which the permission applies string path repository required The full path of the repository. e.g. namespace/name string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X DELETE \ -H "Authorization: Bearer <bearer_token>" \ -H "Accept: application/json" \ https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository>/permissions/user/<username> 7.12.5. getTeamPermissions Fetch the permission for the specified team. GET /api/v1/repository/{repository}/permissions/team/{teamname} Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string path teamname required The name of the team to which the permission applies string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.12.6. changeTeamPermissions Update the existing team permission. PUT /api/v1/repository/{repository}/permissions/team/{teamname} Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string path teamname required The name of the team to which the permission applies string Request body schema (application/json) Description of a team permission. Name Description Schema role required Role to use for the team string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.12.7. deleteTeamPermissions Delete the permission for the specified team. DELETE /api/v1/repository/{repository}/permissions/team/{teamname} Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string path teamname required The name of the team to which the permission applies string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.12.8. listRepoTeamPermissions List all team permissions. GET /api/v1/repository/{repository}/permissions/team/ Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g.
namespace/name string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.12.9. listRepoUserPermissions List all user permissions. GET /api/v1/repository/{repository}/permissions/user/ Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ -H "Accept: application/json" \ https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository>/permissions/user/ 7.13. policy 7.13.1. createOrganizationAutoPrunePolicy Creates an auto-prune policy for the organization POST /api/v1/organization/{orgname}/autoprunepolicy/ Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path orgname required The name of the organization string Request body schema (application/json) The policy configuration that is to be applied to the user namespace Name Description Schema method required The method to use for pruning tags (number_of_tags, creation_date) string value required The value to use for the pruning method (number of tags e.g. 10, time delta e.g. 7d (7 days)) tagPattern optional Tags only matching this pattern will be pruned string tagPatternMatches optional Determine whether pruned tags should or should not match the tagPattern boolean Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.13.2. listOrganizationAutoPrunePolicies Lists the auto-prune policies for the organization GET /api/v1/organization/{orgname}/autoprunepolicy/ Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path orgname required The name of the organization string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.13.3. getOrganizationAutoPrunePolicy Fetches the auto-prune policy for the organization GET /api/v1/organization/{orgname}/autoprunepolicy/{policy_uuid} Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path policy_uuid required The unique ID of the policy string path orgname required The name of the organization string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.13.4. deleteOrganizationAutoPrunePolicy Deletes the auto-prune policy for the organization DELETE /api/v1/organization/{orgname}/autoprunepolicy/{policy_uuid} Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path policy_uuid required The unique ID of the policy string path orgname required The name of the organization string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError
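The following is a minimal example request for this endpoint; the server name, organization, policy UUID, and token are placeholders.
Example command USD curl -X DELETE \ -H "Authorization: Bearer <access_token>" \ "https://<quay-server.example.com>/api/v1/organization/<orgname>/autoprunepolicy/<policy_uuid>"
7.13.5.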
updateOrganizationAutoPrunePolicy Updates the auto-prune policy for the organization PUT /api/v1/organization/{orgname}/autoprunepolicy/{policy_uuid} Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path policy_uuid required The unique ID of the policy string path orgname required The name of the organization string Request body schema (application/json) The policy configuration that is to be applied to the user namespace Name Description Schema method required The method to use for pruning tags (number_of_tags, creation_date) string value required The value to use for the pruning method (number of tags e.g. 10, time delta e.g. 7d (7 days)) tagPattern optional Tags only matching this pattern will be pruned string tagPatternMatches optional Determine whether pruned tags should or should not match the tagPattern boolean Responses HTTP Code Description Schema 204 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.13.6. createRepositoryAutoPrunePolicy Creates an auto-prune policy for the repository POST /api/v1/repository/{repository}/autoprunepolicy/ Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string Request body schema (application/json) The policy configuration that is to be applied to the user namespace Name Description Schema method required The method to use for pruning tags (number_of_tags, creation_date) string value required The value to use for the pruning method (number of tags e.g. 10, time delta e.g. 7d (7 days)) tagPattern optional Tags only matching this pattern will be pruned string tagPatternMatches optional Determine whether pruned tags should or should not match the tagPattern boolean Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.13.7. listRepositoryAutoPrunePolicies Lists the auto-prune policies for the repository GET /api/v1/repository/{repository}/autoprunepolicy/ Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.13.8. getRepositoryAutoPrunePolicy Fetches the auto-prune policy for the repository GET /api/v1/repository/{repository}/autoprunepolicy/{policy_uuid} Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path policy_uuid required The unique ID of the policy string path repository required The full path of the repository. e.g. namespace/name string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.13.9. deleteRepositoryAutoPrunePolicy Deletes the auto-prune policy for the repository DELETE /api/v1/repository/{repository}/autoprunepolicy/{policy_uuid} Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path policy_uuid required The unique ID of the policy string path repository required The full path of the repository. e.g. 
namespace/name string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.13.10. updateRepositoryAutoPrunePolicy Updates the auto-prune policy for the repository PUT /api/v1/repository/{repository}/autoprunepolicy/{policy_uuid} Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path policy_uuid required The unique ID of the policy string path repository required The full path of the repository. e.g. namespace/name string Request body schema (application/json) The policy configuration that is to be applied to the user namespace Name Description Schema method required The method to use for pruning tags (number_of_tags, creation_date) string value required The value to use for the pruning method (number of tags e.g. 10, time delta e.g. 7d (7 days)) tagPattern optional Tags only matching this pattern will be pruned string tagPatternMatches optional Determine whether pruned tags should or should not match the tagPattern boolean Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.13.11. createUserAutoPrunePolicy Creates the auto-prune policy for the currently logged in user POST /api/v1/user/autoprunepolicy/ Authorizations: oauth2_implicit ( user:admin ) Request body schema (application/json) The policy configuration that is to be applied to the user namespace Name Description Schema method required The method to use for pruning tags (number_of_tags, creation_date) string value required The value to use for the pruning method (number of tags e.g. 10, time delta e.g. 7d (7 days)) tagPattern optional Tags only matching this pattern will be pruned string tagPatternMatches optional Determine whether pruned tags should or should not match the tagPattern boolean Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.13.12. listUserAutoPrunePolicies Lists the auto-prune policies for the currently logged in user GET /api/v1/user/autoprunepolicy/ Authorizations: oauth2_implicit ( user:admin ) Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.13.13. getUserAutoPrunePolicy Fetches the auto-prune policy for the currently logged in user GET /api/v1/user/autoprunepolicy/{policy_uuid} Authorizations: oauth2_implicit ( user:admin ) Path parameters Type Name Description Schema path policy_uuid required The unique ID of the policy string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.13.14. deleteUserAutoPrunePolicy Deletes the auto-prune policy for the currently logged in user DELETE /api/v1/user/autoprunepolicy/{policy_uuid} Authorizations: oauth2_implicit ( user:admin ) Path parameters Type Name Description Schema path policy_uuid required The unique ID of the policy string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.13.15. 
updateUserAutoPrunePolicy Updates the auto-prune policy for the currently logged in user PUT /api/v1/user/autoprunepolicy/{policy_uuid} Authorizations: oauth2_implicit ( user:admin ) Path parameters Type Name Description Schema path policy_uuid required The unique ID of the policy string Request body schema (application/json) The policy configuration that is to be applied to the user namespace Name Description Schema method required The method to use for pruning tags (number_of_tags, creation_date) string value required The value to use for the pruning method (number of tags e.g. 10, time delta e.g. 7d (7 days)) tagPattern optional Tags only matching this pattern will be pruned string tagPatternMatches optional Determine whether pruned tags should or should not match the tagPattern boolean Responses HTTP Code Description Schema 204 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.14. prototype Manage default permissions added to repositories. 7.14.1. updateOrganizationPrototypePermission Update the role of an existing permission prototype. PUT /api/v1/organization/{orgname}/prototypes/{prototypeid} Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path prototypeid required The ID of the prototype string path orgname required The name of the organization string Request body schema (application/json) Description of the new prototype role Name Description Schema role optional Role that should be applied to the permission string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X PUT \ -H "Authorization: Bearer <bearer_token>" \ -H "Content-Type: application/json" \ --data '{ "role": "write" }' \ https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes/<prototypeid> 7.14.2. deleteOrganizationPrototypePermission Delete an existing permission prototype. DELETE /api/v1/organization/{orgname}/prototypes/{prototypeid} Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path prototypeid required The ID of the prototype string path orgname required The name of the organization string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X DELETE \ -H "Authorization: Bearer <bearer_token>" \ -H "Accept: application/json" \ https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes/<prototype_id> 7.14.3. createOrganizationPrototypePermission Create a new permission prototype.
POST /api/v1/organization/{orgname}/prototypes Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path orgname required The name of the organization string Request body schema (application/json) Description of a new prototype Name Description Schema role required Role that should be applied to the delegate string activating_user optional Repository creating user to whom the rule should apply object delegate required Information about the user or team to which the rule grants access object Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST -H "Authorization: Bearer <bearer_token>" -H "Content-Type: application/json" --data '{ "role": "<admin_read_or_write>", "delegate": { "name": "<username>", "kind": "user" }, "activating_user": { "name": "<robot_name>" } }' https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes 7.14.4. getOrganizationPrototypePermissions List the existing prototypes for this organization. GET /api/v1/organization/{orgname}/prototypes Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path orgname required The name of the organization string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ -H "Accept: application/json" \ https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes 7.15. referrers List v2 API referrers 7.15.1. getReferrers List v2 API referrers of an image digest. GET /v2/{organization_name}/{repository_name}/referrers/{digest} Path parameters Type Name Description Schema path orgname required The name of the organization string path repository required The full path of the repository. e.g. namespace/name string path referrers required Looks up the OCI referrers of a manifest under a repository. string 7.16. repository List, create and manage repositories. 7.16.1. createRepo Create a new repository. POST /api/v1/repository Authorizations: oauth2_implicit ( repo:create ) Request body schema (application/json) Description of a new repository Name Description Schema repository required Repository name string visibility required Visibility which the repository will start with string namespace optional Namespace in which the repository should be created. If omitted, the username of the caller is used string description required Markdown encoded description for the repository string repo_kind optional The kind of repository Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ -H "Content-Type: application/json" \ -d '{ "repository": "<new_repository_name>", "visibility": "<public>", "description": "<This is a description of the new repository>." }' \ "https://quay-server.example.com/api/v1/repository" 7.16.2. listRepos Fetch the list of repositories visible to the current user under a variety of situations.
GET /api/v1/repository Authorizations: oauth2_implicit ( repo:read ) Query parameters Type Name Description Schema query next_page optional The page token for the page string query repo_kind optional The kind of repositories to return string query popularity optional Whether to include the repository's popularity metric. boolean query last_modified optional Whether to include when the repository was last modified. boolean query public required Adds any repositories visible to the user by virtue of being public boolean query starred required Filters the repositories returned to those starred by the user boolean query namespace required Filters the repositories returned to this namespace string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.16.3. changeRepoVisibility Change the visibility of a repository. POST /api/v1/repository/{repository}/changevisibility Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string Request body schema (application/json) Change the visibility for the repository. Name Description Schema visibility required Visibility which the repository will start with string Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.16.4. changeRepoState Change the state of a repository. PUT /api/v1/repository/{repository}/changestate Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string Request body schema (application/json) Change the state of the repository. Name Description Schema state required Determines whether pushes are allowed. string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.16.5. getRepo Fetch the specified repository. GET /api/v1/repository/{repository} Authorizations: oauth2_implicit ( repo:read ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string Query parameters Type Name Description Schema query includeTags optional Whether to include repository tags boolean query includeStats optional Whether to include action statistics boolean Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET -H "Authorization: Bearer <bearer_token>" "<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>" 7.16.6. updateRepo Update the description in the specified repository. PUT /api/v1/repository/{repository} Authorizations: oauth2_implicit ( repo:write ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string Request body schema (application/json) Fields which can be updated in a repository. 
Name Description Schema description required Markdown encoded description for the repository string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.16.7. deleteRepository Delete a repository. DELETE /api/v1/repository/{repository} Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X DELETE -H "Authorization: Bearer <bearer_token>" "<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>" 7.17. repositorynotification List, create and manage repository events/notifications. 7.17.1. testRepoNotification Queues a test notification for this repository. POST /api/v1/repository/{repository}/notification/{uuid}/test Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string path uuid required The UUID of the notification string Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ https://<quay-server.example.com>/api/v1/repository/<repository>/notification/<uuid>/test 7.17.2. getRepoNotification Get information for the specified notification. GET /api/v1/repository/{repository}/notification/{uuid} Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string path uuid required The UUID of the notification string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ https://<quay-server.example.com>/api/v1/repository/<repository>/notification/<uuid> 7.17.3. deleteRepoNotification Deletes the specified notification. DELETE /api/v1/repository/{repository}/notification/{uuid} Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string path uuid required The UUID of the notification string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X DELETE \ -H "Authorization: Bearer <bearer_token>" \ https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/notification/<uuid> 7.17.4. resetRepositoryNotificationFailures Resets repository notification to 0 failures. POST /api/v1/repository/{repository}/notification/{uuid} Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g.
namespace/name string path uuid required The UUID of the notification string Responses HTTP Code Description Schema 204 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.17.5. createRepoNotification POST /api/v1/repository/{repository}/notification/ Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string Request body schema (application/json) Information for creating a notification on a repository Name Description Schema event required The event on which the notification will respond string method required The method of notification (such as email or web callback) string config required JSON config information for the specific method of notification object eventConfig required JSON config information for the specific event of notification object title optional The human-readable title of the notification string Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ -H "Content-Type: application/json" \ --data '{ "event": "<event>", "method": "<method>", "config": { "<config_key>": "<config_value>" }, "eventConfig": { "<eventConfig_key>": "<eventConfig_value>" } }' \ https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/notification/ 7.17.6. listRepoNotifications List the notifications for the specified repository. GET /api/v1/repository/{repository}/notification/ Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.18. robot Manage user and organization robot accounts. 7.18.1. getUserRobots List the available robots for the user. GET /api/v1/user/robots Authorizations: oauth2_implicit ( user:admin ) Query parameters Type Name Description Schema query limit optional If specified, the number of robots to return. integer query token optional If false, the robot's token is not returned. boolean query permissions optional Whether to include repositories and teams in which the robots have permission. boolean Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.18.2. getOrgRobotPermissions Returns the list of repository permissions for the org's robot. GET /api/v1/organization/{orgname}/robots/{robot_shortname}/permissions Authorizations: oauth2_implicit ( user:admin ) Path parameters Type Name Description Schema path robot_shortname required The short name for the robot, without any user or organization prefix string path orgname required The name of the organization string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.18.3. regenerateOrgRobotToken Regenerates the token for an organization robot. 
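Hypothetical example command for the getOrgRobotPermissions endpoint (Section 7.18.2) above. The server address and the bracketed values are placeholders to substitute, not values defined by this guide: USD curl -X GET -H "Authorization: Bearer <bearer_token>" "https://<quay-server.example.com>/api/v1/organization/<organization_name>/robots/<robot_shortname>/permissions"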
POST /api/v1/organization/{orgname}/robots/{robot_shortname}/regenerate Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path robot_shortname required The short name for the robot, without any user or organization prefix string path orgname required The name of the organization string Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ "<quay-server.example.com>/api/v1/organization/<orgname>/robots/<robot_shortname>/regenerate" 7.18.4. getUserRobotPermissions Returns the list of repository permissions for the user's robot. GET /api/v1/user/robots/{robot_shortname}/permissions Authorizations: oauth2_implicit ( user:admin ) Path parameters Type Name Description Schema path robot_shortname required The short name for the robot, without any user or organization prefix string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.18.5. regenerateUserRobotToken Regenerates the token for a user's robot. POST /api/v1/user/robots/{robot_shortname}/regenerate Authorizations: oauth2_implicit ( user:admin ) Path parameters Type Name Description Schema path robot_shortname required The short name for the robot, without any user or organization prefix string Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ "<quay-server.example.com>/api/v1/user/robots/<robot_shortname>/regenerate" 7.18.6. getOrgRobot Returns the organization's robot with the specified name. GET /api/v1/organization/{orgname}/robots/{robot_shortname} Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path robot_shortname required The short name for the robot, without any user or organization prefix string path orgname required The name of the organization string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.18.7. createOrgRobot Create a new robot in the organization. PUT /api/v1/organization/{orgname}/robots/{robot_shortname} Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path robot_shortname required The short name for the robot, without any user or organization prefix string path orgname required The name of the organization string Request body schema (application/json) Optional data for creating a robot Name Description Schema description optional Optional text description for the robot string unstructured_metadata optional Optional unstructured metadata for the robot object Responses HTTP Code Description Schema 201 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X PUT -H "Authorization: Bearer <bearer_token>" "https://<quay-server.example.com>/api/v1/organization/<organization_name>/robots/<robot_name>" 7.18.8. deleteOrgRobot Delete an existing organization robot. 
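Hypothetical example command for the getOrgRobot endpoint (Section 7.18.6) above; the bracketed values are placeholders: USD curl -X GET -H "Authorization: Bearer <bearer_token>" "https://<quay-server.example.com>/api/v1/organization/<organization_name>/robots/<robot_shortname>"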
DELETE /api/v1/organization/{orgname}/robots/{robot_shortname} Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path robot_shortname required The short name for the robot, without any user or organization prefix string path orgname required The name of the organization string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command curl -X DELETE \ -H "Authorization: Bearer <bearer_token>" \ "<quay-server.example.com>/api/v1/organization/<organization_name>/robots/<robot_shortname>" 7.18.9. getOrgRobots List the organization's robots. GET /api/v1/organization/{orgname}/robots Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path orgname required The name of the organization string Query parameters Type Name Description Schema query limit optional If specified, the number of robots to return. integer query token optional If false, the robot's token is not returned. boolean query permissions optional Whether to include repositories and teams in which the robots have permission. boolean Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET -H "Authorization: Bearer <bearer_token>" "https://<quay-server.example.com>/api/v1/organization/<organization_name>/robots" 7.18.10. getUserRobot Returns the user's robot with the specified name. GET /api/v1/user/robots/{robot_shortname} Authorizations: oauth2_implicit ( user:admin ) Path parameters Type Name Description Schema path robot_shortname required The short name for the robot, without any user or organization prefix string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ "<quay-server.example.com>/api/v1/user/robots/<robot_shortname>" 7.18.11. createUserRobot Create a new user robot with the specified name. PUT /api/v1/user/robots/{robot_shortname} Authorizations: oauth2_implicit ( user:admin ) Path parameters Type Name Description Schema path robot_shortname required The short name for the robot, without any user or organization prefix string Request body schema (application/json) Optional data for creating a robot Name Description Schema description optional Optional text description for the robot string unstructured_metadata optional Optional unstructured metadata for the robot object Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X PUT -H "Authorization: Bearer <bearer_token>" "https://<quay-server.example.com>/api/v1/user/robots/<robot_name>" 7.18.12. deleteUserRobot Delete an existing robot. 
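Hypothetical example command for the getUserRobotPermissions endpoint (Section 7.18.4) above; the bracketed values are placeholders: USD curl -X GET -H "Authorization: Bearer <bearer_token>" "https://<quay-server.example.com>/api/v1/user/robots/<robot_shortname>/permissions"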
DELETE /api/v1/user/robots/{robot_shortname} Authorizations: oauth2_implicit ( user:admin ) Path parameters Type Name Description Schema path robot_shortname required The short name for the robot, without any user or organization prefix string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X DELETE \ -H "Authorization: Bearer <bearer_token>" \ "<quay-server.example.com>/api/v1/user/robots/<robot_shortname>" 7.18.13. Auth Federated Robot Token Return an expiring robot token using the robot identity federation mechanism. GET oauth2/federation/robot/token Authorizations: oauth2_implicit ( robot:auth ) Responses HTTP Code Description Schema 200 Successful authentication and token generation { "token": "string" } 401 Unauthorized: missing or invalid authentication { "error": "string" } Request Body Type Name Description Schema body auth_result required The result of the authentication process, containing information about the robot identity. { "missing": "boolean", "error_message": "string", "context": { "robot": "RobotObject" } } 7.18.14. createOrgRobotFederation Create a federation configuration for the specified organization robot. POST /api/v1/organization/{orgname}/robots/{robot_shortname}/federation Retrieve the federation configuration for the specified organization robot. Authorizations: oauth2_implicit ( user:admin ) Path parameters Type Name Description Schema path orgname + robot_shortname required The name of the organization and the short name for the robot, without any user or organization prefix string Responses HTTP Code Description Schema 201 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 7.19. search Conduct searches against all registry context. 7.19.1. conductRepoSearch Get a list of apps and repositories that match the specified query. GET /api/v1/find/repositories Authorizations: Query parameters Type Name Description Schema query includeUsage optional Whether to include usage metadata boolean query page optional The page. integer query query optional The search query. string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.19.2. conductSearch Get a list of entities and resources that match the specified query. GET /api/v1/find/all Authorizations: oauth2_implicit ( repo:read ) Query parameters Type Name Description Schema query query optional The search query. string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.19.3. getMatchingEntities Get a list of entities that match the specified prefix. GET /api/v1/entities/{prefix} Authorizations: Path parameters Type Name Description Schema path prefix required string Query parameters Type Name Description Schema query includeOrgs optional Whether to include orgs names. boolean query includeTeams optional Whether to include team names. boolean query namespace optional Namespace to use when querying for org entities. string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.20. 
secscan List and manage repository vulnerabilities and other security information. 7.20.1. getRepoManifestSecurity GET /api/v1/repository/{repository}/manifest/{manifestref}/security Authorizations: oauth2_implicit ( repo:read ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string path manifestref required The digest of the manifest string Query parameters Type Name Description Schema query vulnerabilities optional Include vulnerability information boolean Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ -H "Accept: application/json" \ "https://quay-server.example.com/api/v1/repository/<namespace>/<repository>/manifest/<manifest_digest>/security?vulnerabilities=<true_or_false>" 7.21. superuser Superuser API. 7.21.1. createInstallUser Creates a new user. POST /api/v1/superuser/users/ Authorizations: oauth2_implicit ( super:user ) Request body schema (application/json) Data for creating a user Name Description Schema username required The username of the user being created string email optional The email address of the user being created string Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST -H "Authorization: Bearer <bearer_token>" -H "Content-Type: application/json" -d '{ "username": "newuser", "email": "[email protected]" }' "https://<quay-server.example.com>/api/v1/superuser/users/" 7.21.2. deleteInstallUser Deletes a user. DELETE /api/v1/superuser/users/{username} Authorizations: oauth2_implicit ( super:user ) Request body schema (application/json) Data for deleting a user Name Description Schema username required The username of the user being deleted string Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X DELETE -H "Authorization: Bearer <bearer_token>" "https://<quay-server.example.com>/api/v1/superuser/users/{username}" 7.21.3. listAllUsers Returns a list of all users in the system. GET /api/v1/superuser/users/ Authorizations: oauth2_implicit ( super:user ) Query parameters Type Name Description Schema query next_page optional The page token for the page string query limit optional Limit to the number of results to return per page. Max 100. integer query disabled optional If false, only enabled users will be returned. boolean Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET -H "Authorization: Bearer <bearer_token>" "https://<quay-server.example.com>/api/v1/superuser/users/" 7.21.4. listAllLogs List the usage logs for the current system.
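Hypothetical example command for the listAllLogs endpoint; the full parameter listing follows below, and the bracketed values are placeholders to substitute: USD curl -X GET -H "Authorization: Bearer <bearer_token>" "https://<quay-server.example.com>/api/v1/superuser/logs?starttime=<starttime>&endtime=<endtime>"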
GET /api/v1/superuser/logs Authorizations: oauth2_implicit ( super:user ) Query parameters Type Name Description Schema query next_page optional The page token for the page string query page optional The page number for the logs integer query endtime optional Latest time to which to get logs (%m/%d/%Y %Z) string query starttime optional Earliest time from which to get logs (%m/%d/%Y %Z) string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.21.5. listAllOrganizations List the organizations for the current system. GET /api/v1/superuser/organizations Authorizations: oauth2_implicit ( super:user ) Query parameters Type Name Description Schema path name required The name of the organization being managed string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET -H "Authorization: Bearer <bearer_token>" "https://<quay-server.example.com>/api/v1/superuser/organizations/" 7.21.6. createServiceKey POST /api/v1/superuser/keys Authorizations: oauth2_implicit ( super:user ) Request body schema (application/json) Description of creation of a service key Name Description Schema service required The service authenticating with this key string name optional The friendly name of a service key string metadata optional The key/value pairs of this key's metadata object notes optional If specified, the extra notes for the key string expiration required The expiration date as a unix timestamp Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.21.7. listServiceKeys GET /api/v1/superuser/keys Authorizations: oauth2_implicit ( super:user ) Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.21.8. changeUserQuotaSuperUser PUT /api/v1/superuser/organization/{namespace}/quota/{quota_id} Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path namespace required string path quota_id required string Request body schema (application/json) Description of a new organization quota Name Description Schema limit_bytes optional Number of bytes the organization is allowed integer Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.21.9. deleteUserQuotaSuperUser DELETE /api/v1/superuser/organization/{namespace}/quota/{quota_id} Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path namespace required string path quota_id required string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.21.10. 
createUserQuotaSuperUser POST /api/v1/superuser/organization/{namespace}/quota Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path namespace required string Request body schema (application/json) Description of a new organization quota Name Description Schema limit_bytes required Number of bytes the organization is allowed integer Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.21.11. listUserQuotaSuperUser GET /api/v1/superuser/organization/{namespace}/quota Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path namespace required string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.21.12. changeOrganizationQuotaSuperUser PUT /api/v1/superuser/users/{namespace}/quota/{quota_id} Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path namespace required string path quota_id required string Request body schema (application/json) Description of a new organization quota Name Description Schema limit_bytes optional Number of bytes the organization is allowed integer Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.21.13. deleteOrganizationQuotaSuperUser DELETE /api/v1/superuser/users/{namespace}/quota/{quota_id} Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path namespace required string path quota_id required string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.21.14. createOrganizationQuotaSuperUser POST /api/v1/superuser/users/{namespace}/quota Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path namespace required string Request body schema (application/json) Description of a new organization quota Name Description Schema limit_bytes optional Number of bytes the organization is allowed integer Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.21.15. listOrganizationQuotaSuperUser GET /api/v1/superuser/users/{namespace}/quota Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path namespace required string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.21.16. changeOrganization Updates information about the specified organization.
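Hypothetical example command for the changeOrganization endpoint; the request body fields are listed below, and the bracketed values are placeholders to substitute: USD curl -X PUT -H "Authorization: Bearer <bearer_token>" -H "Content-Type: application/json" -d '{ "tag_expiration_s": <tag_expiration_s> }' "https://<quay-server.example.com>/api/v1/superuser/organizations/<organization_name>"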
PUT /api/v1/superuser/organizations/{name} Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path name required The name of the organization being managed string Request body schema (application/json) Description of updates for an existing organization Name Description Schema email optional Organization contact email string invoice_email optional Whether the organization desires to receive emails for invoices boolean invoice_email_address optional The email address at which to receive invoices tag_expiration_s optional The number of seconds for tag expiration integer Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.21.17. deleteOrganization Deletes the specified organization. DELETE /api/v1/superuser/organizations/{name} Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path name required The name of the organization being managed string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.21.18. approveServiceKey POST /api/v1/superuser/approvedkeys/{kid} Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path kid required The unique identifier for a service key string Request body schema (application/json) Information for approving service keys Name Description Schema notes optional Optional approval notes string Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.21.19. deleteServiceKey DELETE /api/v1/superuser/keys/{kid} Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path kid required The unique identifier for a service key string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.21.20. updateServiceKey PUT /api/v1/superuser/keys/{kid} Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path kid required The unique identifier for a service key string Request body schema (application/json) Description of updates for a service key Name Description Schema name optional The friendly name of a service key string metadata optional The key/value pairs of this key's metadata object expiration optional The expiration date as a unix timestamp Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.21.21. getServiceKey GET /api/v1/superuser/keys/{kid} Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path kid required The unique identifier for a service key string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.21.22. getRepoBuildStatusSuperUser Return the status for the builds specified by the build uuids.
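Hypothetical example command for the getRepoBuildStatusSuperUser endpoint; the full parameter listing follows below, and the bracketed values are placeholders to substitute: USD curl -X GET -H "Authorization: Bearer <bearer_token>" "https://<quay-server.example.com>/api/v1/superuser/<build_uuid>/status"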
GET /api/v1/superuser/{build_uuid}/status Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path build_uuid required The UUID of the build string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.21.23. getRepoBuildSuperUser Returns information about a build. GET /api/v1/superuser/{build_uuid}/build Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path build_uuid required The UUID of the build string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.21.24. getRepoBuildLogsSuperUser Return the build logs for the build specified by the build uuid. GET /api/v1/superuser/{build_uuid}/logs Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path build_uuid required The UUID of the build string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.21.25. getRegistrySize GET /api/v1/superuser/registrysize/ Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path namespace required string Description of an image registry size Name Description Schema size_bytes * optional Number of bytes the organization is allowed integer last_ran integer queued boolean running boolean Responses HTTP Code Description Schema 200 CREATED 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.21.26. postRegistrySize POST /api/v1/superuser/registrysize/ Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path namespace required string Request body schema (application/json) Description of an image registry size Name Description Schema last_ran integer queued boolean running boolean Responses HTTP Code Description Schema 201 CREATED 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.22. tag Manage the tags of a repository. 7.22.1. restoreTag Restores a repository tag back to an image in the repository. POST /api/v1/repository/{repository}/tag/{tag}/restore Authorizations: oauth2_implicit ( repo:write ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string path tag required The name of the tag string Request body schema (application/json) Restores a tag to a specific image Name Description Schema manifest_digest required If specified, the manifest digest that should be used string Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ -H "Content-Type: application/json" \ --data '{ "manifest_digest": <manifest_digest> }' \ quay-server.example.com/api/v1/repository/quayadmin/busybox/tag/test/restore 7.22.2. changeTag Change which image a tag points to or create a new tag. PUT /api/v1/repository/{repository}/tag/{tag} Authorizations: oauth2_implicit ( repo:write ) Path parameters Type Name Description Schema path repository required The full path of the repository.
e.g. namespace/name string path tag required The name of the tag string Request body schema (application/json) Makes changes to a specific tag Name Description Schema manifest_digest optional (If specified) The manifest digest to which the tag should point expiration optional (If specified) The expiration for the image Responses HTTP Code Description Schema 201 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X PUT \ -H "Authorization: Bearer <bearer_token>" \ -H "Content-Type: application/json" \ --data '{ "manifest_digest": "<manifest_digest>" }' \ https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/tag/<tag> 7.22.3. deleteFullTag Delete the specified repository tag. DELETE /api/v1/repository/{repository}/tag/{tag} Authorizations: oauth2_implicit ( repo:write ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string path tag required The name of the tag string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.22.4. listRepoTags GET /api/v1/repository/{repository}/tag/ Authorizations: oauth2_implicit ( repo:read ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string Query parameters Type Name Description Schema query onlyActiveTags optional Filter to only active tags. boolean query page optional Page index for the results. Default 1. integer query limit optional Limit to the number of results to return per page. Max 100. integer query filter_tag_name optional Syntax: <op>:<name> Filters the tag names based on the operation.<op> can be 'like' or 'eq'. string query specificTag optional Filters the tags to the specific tag. string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ -H "Accept: application/json" \ https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/tag/ 7.23. team Create, list and manage an organization's teams. 7.23.1. getOrganizationTeamPermissions Returns the list of repository permissions for the org's team. GET /api/v1/organization/{orgname}/team/{teamname}/permissions Authorizations: Path parameters Type Name Description Schema path teamname required The name of the team string path orgname required The name of the organization string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.23.2. updateOrganizationTeamMember Adds or invites a member to an existing team. 
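Hypothetical example command for the deleteFullTag endpoint (Section 7.22.3) above; the bracketed values are placeholders to substitute: USD curl -X DELETE -H "Authorization: Bearer <bearer_token>" "https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/tag/<tag>"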
PUT /api/v1/organization/{orgname}/team/{teamname}/members/{membername} Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path teamname required The name of the team string path membername required The username of the team member string path orgname required The name of the organization string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X PUT \ -H "Authorization: Bearer <your_access_token>" \ "<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/members/<member_name>" 7.23.3. deleteOrganizationTeamMember Delete a member of a team. DELETE /api/v1/organization/{orgname}/team/{teamname}/members/{membername} Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path teamname required The name of the team string path membername required The username of the team member string path orgname required The name of the organization string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X DELETE \ -H "Authorization: Bearer <your_access_token>" \ "<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/members/<member_name>" 7.23.4. getOrganizationTeamMembers Retrieve the list of members for the specified team. GET /api/v1/organization/{orgname}/team/{teamname}/members Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path teamname required The name of the team string path orgname required The name of the organization string Query parameters Type Name Description Schema query includePending optional Whether to include pending members boolean Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <your_access_token>" \ "<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/members" 7.23.5. inviteTeamMemberEmail Invites an email address to an existing team. PUT /api/v1/organization/{orgname}/team/{teamname}/invite/{email} Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path email required string path teamname required string path orgname required string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X PUT \ -H "Authorization: Bearer <your_access_token>" \ "<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/invite/<email>" 7.23.6. deleteTeamMemberEmailInvite Delete an invite of an email address to join a team. 
DELETE /api/v1/organization/{orgname}/team/{teamname}/invite/{email} Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path email required string path teamname required string path orgname required string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command + USD curl -X DELETE \ -H "Authorization: Bearer <your_access_token>" \ "<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/invite/<email>" 7.23.7. updateOrganizationTeam Update the org-wide permission for the specified team. Note This API is also used to create a team. PUT /api/v1/organization/{orgname}/team/{teamname} Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path teamname required The name of the team string path orgname required The name of the organization string Request body schema (application/json) Description of a team Name Description Schema role required Org wide permissions that should apply to the team string description optional Markdown description for the team string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -k -X PUT -H 'Accept: application/json' -H 'Content-Type: application/json' -H "Authorization: Bearer <bearer_token>" --data '{"role": "creator"}' https://<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name> 7.23.8. deleteOrganizationTeam Delete the specified team. DELETE /api/v1/organization/{orgname}/team/{teamname} Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path teamname required The name of the team string path orgname required The name of the organization string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X DELETE \ -H "Authorization: Bearer <your_access_token>" \ "<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>" 7.24. trigger Create, list and manage build triggers. 7.24.1. activateBuildTrigger Activate the specified build trigger. POST /api/v1/repository/{repository}/trigger/{trigger_uuid}/activate Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path trigger_uuid required The UUID of the build trigger string path repository required The full path of the repository. e.g. namespace/name string Request body schema (application/json) Name Description Schema config required Arbitrary json. object pull_robot optional The name of the robot that will be used to pull images. string Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.24.2. listTriggerRecentBuilds List the builds started by the specified trigger. GET /api/v1/repository/{repository}/trigger/{trigger_uuid}/builds Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path trigger_uuid required The UUID of the build trigger string path repository required The full path of the repository. e.g. 
namespace/name string Query parameters Type Name Description Schema query limit optional The maximum number of builds to return integer Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.24.3. manuallyStartBuildTrigger Manually start a build from the specified trigger. POST /api/v1/repository/{repository}/trigger/{trigger_uuid}/start Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path trigger_uuid required The UUID of the build trigger string path repository required The full path of the repository. e.g. namespace/name string Request body schema (application/json) Optional run parameters for activating the build trigger Name Description Schema branch_name optional (SCM only) If specified, the name of the branch to build. string commit_sha optional (Custom Only) If specified, the ref/SHA1 used to checkout a git repository. string refs optional (SCM Only) If specified, the ref to build. Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.24.4. getBuildTrigger Get information for the specified build trigger. GET /api/v1/repository/{repository}/trigger/{trigger_uuid} Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path trigger_uuid required The UUID of the build trigger string path repository required The full path of the repository. e.g. namespace/name string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.24.5. updateBuildTrigger Updates the specified build trigger. PUT /api/v1/repository/{repository}/trigger/{trigger_uuid} Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path trigger_uuid required The UUID of the build trigger string path repository required The full path of the repository. e.g. namespace/name string Request body schema (application/json) Options for updating a build trigger Name Description Schema enabled required Whether the build trigger is enabled boolean Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.24.6. deleteBuildTrigger Delete the specified build trigger. DELETE /api/v1/repository/{repository}/trigger/{trigger_uuid} Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path trigger_uuid required The UUID of the build trigger string path repository required The full path of the repository. e.g. namespace/name string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.24.7. listBuildTriggers List the triggers for the specified repository. GET /api/v1/repository/{repository}/trigger/ Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.25. user Manage the current user. 7.25.1. 
createStar Star a repository. POST /api/v1/user/starred Authorizations: oauth2_implicit ( repo:read ) Request body schema (application/json) Name Description Schema namespace required Namespace in which the repository belongs string repository required Repository name string Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.25.2. listStarredRepos List all starred repositories. GET /api/v1/user/starred Authorizations: oauth2_implicit ( user:admin ) Query parameters Type Name Description Schema query next_page optional The page token for the page string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.25.3. getLoggedInUser Get user information for the authenticated user. GET /api/v1/user/ Authorizations: oauth2_implicit ( user:read ) Responses HTTP Code Description Schema 200 Successful invocation UserView 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.25.4. deleteStar Removes a star from a repository. DELETE /api/v1/user/starred/{repository} Authorizations: oauth2_implicit ( user:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.25.5. getUserInformation Get user information for the specified user. GET /api/v1/users/{username} Authorizations: Path parameters Type Name Description Schema path username required string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.26. Definitions 7.26.1. ApiError Name Description Schema status optional Status code of the response. integer type optional Reference to the type of the error. string detail optional Details about the specific instance of the error. string title optional Unique error code to identify the type of error. string error_message optional Deprecated; alias for detail string error_type optional Deprecated; alias for detail string 7.26.2. UserView Name Description Schema verified optional Whether the user's email address has been verified boolean anonymous optional true if this user data represents a guest user boolean email optional The user's email address string avatar optional Avatar data representing the user's icon object organizations optional Information about the organizations in which the user is a member array of object logins optional The list of external login providers against which the user has authenticated array of object can_create_repo optional Whether the user has permission to create repositories boolean preferred_namespace optional If true, the user's namespace is the preferred namespace to display boolean 7.26.3. ViewMirrorConfig Name Description Schema is_enabled optional Used to enable or disable synchronizations. boolean external_reference optional Location of the external repository. string external_registry_username optional Username used to authenticate with external registry. external_registry_password optional Password used to authenticate with external registry. 
sync_start_date optional Determines the time this repository is ready for synchronization. string sync_interval optional Number of seconds after next_start_date to begin synchronizing. integer robot_username optional Username of robot which will be used for image pushes. string root_rule optional A list of glob-patterns used to determine which tags should be synchronized. object external_registry_config optional object 7.26.4. ApiErrorDescription Name Description Schema type optional A reference to the error type resource string title optional The title of the error. Can be used to uniquely identify the kind of error. string description optional A more detailed description of the error that may include help for fixing the issue. string
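As an illustration of the ApiError definition above, an error response from any of the endpoints in this chapter might carry a body shaped like the following; the field values shown here are invented for illustration only: { "status": 404, "type": "https://<quay-server.example.com>/api/v1/error/not_found", "title": "not_found", "detail": "Not Found", "error_message": "Not Found" }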
[ "curl -X POST -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"title\": \"MyAppToken\" }' \"http://quay-server.example.com/api/v1/user/apptoken\"", "curl -X GET -H \"Authorization: Bearer <access_token>\" \"http://quay-server.example.com/api/v1/user/apptoken\"", "curl -X GET -H \"Authorization: Bearer <access_token>\" \"http://quay-server.example.com/api/v1/user/apptoken/<token_uuid>\"", "curl -X DELETE -H \"Authorization: Bearer <access_token>\" \"http://quay-server.example.com/api/v1/user/apptoken/<token_uuid>\"", "curl -X GET \"https://<quay-server.example.com>/api/v1/discovery?query=true\" -H \"Authorization: Bearer <access_token>\"", "curl -X GET \"https://<quay-server.example.com>/api/v1/error/<error_type>\" -H \"Authorization: Bearer <access_token>\"", "curl -X POST \"https://<quay-server.example.com>/api/v1/messages\" -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"message\": { \"content\": \"Hi\", \"media_type\": \"text/plain\", \"severity\": \"info\" } }'", "curl -X GET \"https://<quay-server.example.com>/api/v1/messages\" -H \"Authorization: Bearer <access_token>\"", "curl -X DELETE \"https://<quay-server.example.com>/api/v1/message/<uuid>\" -H \"Authorization: Bearer <access_token>\"", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"<quay-server.example.com>/api/v1/user/aggregatelogs?performer=<username>&starttime=<MM/DD/YYYY>&endtime=<MM/DD/YYYY>\"", "curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -H \"Accept: application/json\" -d '{ \"starttime\": \"<MM/DD/YYYY>\", \"endtime\": \"<MM/DD/YYYY>\", \"callback_email\": \"[email protected]\" }' \"http://<quay-server.example.com>/api/v1/user/exportlogs\"", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"<quay-server.example.com>/api/v1/user/logs\"", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"<quay-server.example.com>/api/v1/organization/{orgname}/aggregatelogs\"", "curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -H \"Accept: application/json\" -d '{ \"starttime\": \"<MM/DD/YYYY>\", \"endtime\": \"<MM/DD/YYYY>\", \"callback_email\": \"[email protected]\" }' \"http://<quay-server.example.com>/api/v1/organization/{orgname}/exportlogs\"", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"http://<quay-server.example.com>/api/v1/organization/{orgname}/logs\"", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"<quay-server.example.com>/api/v1/repository/<repository_name>/<namespace>/aggregatelogs?starttime=2024-01-01&endtime=2024-06-18\"\"", "curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -H \"Accept: application/json\" -d '{ \"starttime\": \"2024-01-01\", \"endtime\": \"2024-06-18\", \"callback_url\": \"http://your-callback-url.example.com\" }' \"http://<quay-server.example.com>/api/v1/repository/{repository}/exportlogs\"", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"http://<quay-server.example.com>/api/v1/repository/{repository}/logs\"", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>/labels/<label_id>", "curl -X DELETE -H 
\"Authorization: Bearer <bearer_token>\" https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>/labels/<labelid>", "curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"key\": \"<key>\", \"value\": \"<value>\", \"media_type\": \"<media_type>\" }' https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>/labels", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>/labels", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>", "curl -X POST \"https://<quay-server.example.com>/api/v1/repository/<namespace>/<repo>/mirror/sync-cancel\" \\", "curl -X POST \"https://<quay-server.example.com>/api/v1/repository/<namespace>/<repo>/mirror/sync-now\" -H \"Authorization: Bearer <access_token>\"", "curl -X GET \"https://<quay-server.example.com>/api/v1/repository/<namespace>/<repo>/mirror\" -H \"Authorization: Bearer <access_token>\"", "curl -X PUT \"https://<quay-server.example.com>/api/v1/repository/<namespace>/<repo>/mirror\" -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"is_enabled\": <false>, 1 \"external_reference\": \"<external_reference>\", \"external_registry_username\": \"<external_registry_username>\", \"external_registry_password\": \"<external_registry_password>\", \"sync_start_date\": \"<sync_start_date>\", \"sync_interval\": <sync_interval>, \"robot_username\": \"<robot_username>\", \"root_rule\": { \"rule\": \"<rule>\", \"rule_type\": \"<rule_type>\" } }'", "curl -X POST \"https://<quay-server.example.com>/api/v1/repository/<namespace>/<repo>/mirror\" -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"is_enabled\": <is_enabled>, \"external_reference\": \"<external_reference>\", \"external_registry_username\": \"<external_registry_username>\", \"external_registry_password\": \"<external_registry_password>\", \"sync_start_date\": \"<sync_start_date>\", \"sync_interval\": <sync_interval>, \"robot_username\": \"<robot_username>\", \"root_rule\": { \"rule\": \"<rule>\", \"rule_type\": \"<rule_type>\" } }'", "curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{ \"name\": \"<new_organization_name>\" }' \"https://<quay-server.example.com>/api/v1/organization/\"", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>\"", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>\"", "curl -X PUT -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{\"role\": \"admin\"}' https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository>/permissions/user/<username>", "curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository>/permissions/user/<username>", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository>/permissions/user/<username>/", "curl -X PUT -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: 
application/json\" --data '{ \"role\": \"write\" }' https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes/<prototypeid>", "curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes/<prototype_id>", "curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"role\": \"<admin_read_or_write>\", \"delegate\": { \"name\": \"<username>\", \"kind\": \"user\" }, \"activating_user\": { \"name\": \"<robot_name>\" } }' https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes", "curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{ \"repository\": \"<new_repository_name>\", \"visibility\": \"<public>\", \"description\": \"<This is a description of the new repository>.\" }' \"https://quay-server.example.com/api/v1/repository\"", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>\"", "curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>\"", "curl -X POST -H \"Authorization: Bearer <bearer_token>\" https://<quay-server.example.com>/api/v1/repository/<repository>/notification/<uuid>/test", "curl -X POST -H \"Authorization: Bearer <bearer_token>\" https://<quay-server.example.com>/api/v1/repository/<repository>/notification/<uuid>", "curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/notification/<uuid>", "curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"event\": \"<event>\", \"method\": \"<method>\", \"config\": { \"<config_key>\": \"<config_value>\" }, \"eventConfig\": { \"<eventConfig_key>\": \"<eventConfig_value>\" } }' https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/notification/", "curl -X POST -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/organization/<orgname>/robots/<robot_shortname>/regenerate\"", "curl -X POST -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/user/robots/<robot_shortname>/regenerate\"", "curl -X PUT -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/organization/<organization_name>/robots/<robot_name>\"", "curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/robots/<robot_shortname>\"", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/organization/<organization_name>/robots\"", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/user/robots/<robot_shortname>\"", "curl -X PUT -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/user/robots/<robot_name>\"", "curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/user/robots/<robot_shortname>\"", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" 
\"https://quay-server.example.com/api/v1/repository/<namespace>/<repository>/manifest/<manifest_digest>/security?vulnerabilities=<true_or_false>\"", "curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{ \"username\": \"newuser\", \"email\": \"[email protected]\" }' \"https://<quay-server.example.com>/api/v1/superuser/users/\"", "curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/superuser/users/{username}\"", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/superuser/users/\"", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/superuser/organizations/\"", "curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"manifest_digest\": <manifest_digest> }' quay-server.example.com/api/v1/repository/quayadmin/busybox/tag/test/restore", "curl -X PUT -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"manifest_digest\": \"<manifest_digest>\" }' https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/tag/<tag>", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/tag/", "curl -X PUT -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/members/<member_name>\"", "If the user is merely invited to join the team, then the invite is removed instead.", "curl -X DELETE -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/members/<member_name>\"", "curl -X GET -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/members\"", "curl -X PUT -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/invite/<email>\"", "curl -X DELETE -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/invite/<email>\"", "curl -k -X PUT -H 'Accept: application/json' -H 'Content-Type: application/json' -H \"Authorization: Bearer <bearer_token>\" --data '{\"role\": \"creator\"}' https://<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>", "curl -X DELETE -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>\"" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/red_hat_quay_api_guide/red_hat_quay_application_programming_interface_api
Chapter 4. External storage services
Chapter 4. External storage services Red Hat OpenShift Data Foundation can use IBM FlashSystems or make services from an external Red Hat Ceph Storage cluster available for consumption through OpenShift Container Platform clusters running on the following platforms: VMware vSphere Bare metal Red Hat OpenStack Platform (Technology Preview) IBM Power IBM Z The OpenShift Data Foundation operators create and manage services to satisfy Persistent Volume (PV) and Object Bucket Claims (OBCs) against the external services. The external cluster can serve block, file, and object storage classes for applications that run on OpenShift Container Platform. The operators do not deploy or manage the external clusters.
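As a minimal sketch of how an application might consume the external cluster's storage, the following commands create a block-mode persistent volume claim and an object bucket claim. The storage class names shown, ocs-external-storagecluster-ceph-rbd and openshift-storage.noobaa.io, are typical defaults for external mode but are assumptions here; substitute whatever oc get storageclass reports on your cluster, and note that app-data and app-bucket are hypothetical names.

oc get storageclass
cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data                                            # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: ocs-external-storagecluster-ceph-rbd    # assumed external-mode RBD class
---
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: app-bucket                                          # hypothetical OBC name
spec:
  generateBucketName: app-bucket
  storageClassName: openshift-storage.noobaa.io             # assumed object storage class
EOF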
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/planning_your_deployment/external-storage-services_rhodf
Chapter 17. Creating a performance profile
Chapter 17. Creating a performance profile Learn about the Performance Profile Creator (PPC) and how you can use it to create a performance profile. 17.1. About the Performance Profile Creator The Performance Profile Creator (PPC) is a command-line tool, delivered with the Performance Addon Operator, used to create the performance profile. The tool consumes must-gather data from the cluster and several user-supplied profile arguments. The PPC generates a performance profile that is appropriate for your hardware and topology. The tool is run by one of the following methods: Invoking podman Calling a wrapper script 17.1.1. Gathering data about your cluster using the must-gather command The Performance Profile Creator (PPC) tool requires must-gather data. As a cluster administrator, run the must-gather command to capture information about your cluster. Prerequisites Access to the cluster as a user with the cluster-admin role. Access to the Performance Addon Operator image. The OpenShift CLI ( oc ) installed. Procedure Optional: Verify that a matching machine config pool exists with a label: USD oc describe mcp/worker-rt Example output Name: worker-rt Namespace: Labels: machineconfiguration.openshift.io/role=worker-rt If a matching label does not exist add a label for a machine config pool (MCP) that matches with the MCP name: USD oc label mcp <mcp_name> <mcp_name>="" Navigate to the directory where you want to store the must-gather data. Run must-gather on your cluster: USD oc adm must-gather --image=<PAO_image> --dest-dir=<dir> Note The must-gather command must be run with the performance-addon-operator-must-gather image. The output can optionally be compressed. Compressed output is required if you are running the Performance Profile Creator wrapper script. Example USD oc adm must-gather --image=registry.redhat.io/openshift4/performance-addon-operator-must-gather-rhel8:v4.10 --dest-dir=must-gather Create a compressed file from the must-gather directory: USD tar cvaf must-gather.tar.gz must-gather/ 17.1.2. Running the Performance Profile Creator using podman As a cluster administrator, you can run podman and the Performance Profile Creator to create a performance profile. Prerequisites Access to the cluster as a user with the cluster-admin role. A cluster installed on bare metal hardware. A node with podman and OpenShift CLI ( oc ) installed. Procedure Check the machine config pool: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-acd1358917e9f98cbdb599aea622d78b True False False 3 3 3 0 22h worker-cnf rendered-worker-cnf-1d871ac76e1951d32b2fe92369879826 False True False 2 1 1 0 22h Use Podman to authenticate to registry.redhat.io : USD podman login registry.redhat.io Username: <username> Password: <password> Optional: Display help for the PPC tool: USD podman run --entrypoint performance-profile-creator registry.redhat.io/openshift4/performance-addon-rhel8-operator:v4.10 -h Example output A tool that automates creation of Performance Profiles Usage: performance-profile-creator [flags] Flags: --disable-ht Disable Hyperthreading -h, --help help for performance-profile-creator --info string Show cluster information; requires --must-gather-dir-path, ignore the other arguments. 
[Valid values: log, json] (default "log") --mcp-name string MCP name corresponding to the target machines (required) --must-gather-dir-path string Must gather directory path (default "must-gather") --power-consumption-mode string The power consumption mode. [Valid values: default, low-latency, ultra-low-latency] (default "default") --profile-name string Name of the performance profile to be created (default "performance") --reserved-cpu-count int Number of reserved CPUs (required) --rt-kernel Enable Real Time Kernel (required) --split-reserved-cpus-across-numa Split the Reserved CPUs across NUMA nodes --topology-manager-policy string Kubelet Topology Manager Policy of the performance profile to be created. [Valid values: single-numa-node, best-effort, restricted] (default "restricted") --user-level-networking Run with User level Networking(DPDK) enabled Run the Performance Profile Creator tool in discovery mode: Note Discovery mode inspects your cluster using the output from must-gather . The output produced includes information on: The NUMA cell partitioning with the allocated CPU ids Whether hyperthreading is enabled Using this information you can set appropriate values for some of the arguments supplied to the Performance Profile Creator tool. USD podman run --entrypoint performance-profile-creator -v /must-gather:/must-gather:z registry.redhat.io/openshift4/performance-addon-rhel8-operator:v4.10 --info log --must-gather-dir-path /must-gather Note This command uses the performance profile creator as a new entry point to podman . It maps the must-gather data for the host into the container image and invokes the required user-supplied profile arguments to produce the my-performance-profile.yaml file. The -v option can be the path to either: The must-gather output directory An existing directory containing the must-gather decompressed tarball The info option requires a value which specifies the output format. Possible values are log and JSON. The JSON format is reserved for debugging. Run podman : USD podman run --entrypoint performance-profile-creator -v /must-gather:/must-gather:z registry.redhat.io/openshift4/performance-addon-rhel8-operator:v4.10 --mcp-name=worker-cnf --reserved-cpu-count=20 --rt-kernel=true --split-reserved-cpus-across-numa=false --topology-manager-policy=single-numa-node --must-gather-dir-path /must-gather --power-consumption-mode=ultra-low-latency > my-performance-profile.yaml Note The Performance Profile Creator arguments are shown in the Performance Profile Creator arguments table. The following arguments are required: reserved-cpu-count mcp-name rt-kernel The mcp-name argument in this example is set to worker-cnf based on the output of the command oc get mcp . For single-node OpenShift use --mcp-name=master . Review the created YAML file: USD cat my-performance-profile.yaml Example output apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: additionalKernelArgs: - nmi_watchdog=0 - audit=0 - mce=off - processor.max_cstate=1 - intel_idle.max_cstate=0 - idle=poll cpu: isolated: 1,3,5,7,9,11,13,15,17,19-39,41,43,45,47,49,51,53,55,57,59-79 reserved: 0,2,4,6,8,10,12,14,16,18,40,42,44,46,48,50,52,54,56,58 nodeSelector: node-role.kubernetes.io/worker-cnf: "" numa: topologyPolicy: single-numa-node realTimeKernel: enabled: true Apply the generated profile: Note Install the Performance Addon Operator before applying the profile. USD oc apply -f my-performance-profile.yaml 17.1.2.1. 
How to run podman to create a performance profile The following example illustrates how to run podman to create a performance profile with 20 reserved CPUs that are to be split across the NUMA nodes. Node hardware configuration: 80 CPUs Hyperthreading enabled Two NUMA nodes Even numbered CPUs run on NUMA node 0 and odd numbered CPUs run on NUMA node 1 Run podman to create the performance profile: USD podman run --entrypoint performance-profile-creator -v /must-gather:/must-gather:z registry.redhat.io/openshift4/performance-addon-rhel8-operator:v4.10 --mcp-name=worker-cnf --reserved-cpu-count=20 --rt-kernel=true --split-reserved-cpus-across-numa=true --must-gather-dir-path /must-gather > my-performance-profile.yaml The created profile is described in the following YAML: apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: 10-39,50-79 reserved: 0-9,40-49 nodeSelector: node-role.kubernetes.io/worker-cnf: "" numa: topologyPolicy: restricted realTimeKernel: enabled: true Note In this case, 10 CPUs are reserved on NUMA node 0 and 10 are reserved on NUMA node 1. 17.1.3. Running the Performance Profile Creator wrapper script The performance profile wrapper script simplifies the running of the Performance Profile Creator (PPC) tool. It hides the complexities associated with running podman and specifying the mapping directories and it enables the creation of the performance profile. Prerequisites Access to the Performance Addon Operator image. Access to the must-gather tarball. Procedure Create a file on your local machine named, for example, run-perf-profile-creator.sh : USD vi run-perf-profile-creator.sh Paste the following code into the file: #!/bin/bash readonly CONTAINER_RUNTIME=USD{CONTAINER_RUNTIME:-podman} readonly CURRENT_SCRIPT=USD(basename "USD0") readonly CMD="USD{CONTAINER_RUNTIME} run --entrypoint performance-profile-creator" readonly IMG_EXISTS_CMD="USD{CONTAINER_RUNTIME} image exists" readonly IMG_PULL_CMD="USD{CONTAINER_RUNTIME} image pull" readonly MUST_GATHER_VOL="/must-gather" PAO_IMG="registry.redhat.io/openshift4/performance-addon-rhel8-operator:v4.10" MG_TARBALL="" DATA_DIR="" usage() { print "Wrapper usage:" print " USD{CURRENT_SCRIPT} [-h] [-p image][-t path] -- [performance-profile-creator flags]" print "" print "Options:" print " -h help for USD{CURRENT_SCRIPT}" print " -p Performance Addon Operator image" print " -t path to a must-gather tarball" USD{IMG_EXISTS_CMD} "USD{PAO_IMG}" && USD{CMD} "USD{PAO_IMG}" -h } function cleanup { [ -d "USD{DATA_DIR}" ] && rm -rf "USD{DATA_DIR}" } trap cleanup EXIT exit_error() { print "error: USD*" usage exit 1 } print() { echo "USD*" >&2 } check_requirements() { USD{IMG_EXISTS_CMD} "USD{PAO_IMG}" || USD{IMG_PULL_CMD} "USD{PAO_IMG}" || \ exit_error "Performance Addon Operator image not found" [ -n "USD{MG_TARBALL}" ] || exit_error "Must-gather tarball file path is mandatory" [ -f "USD{MG_TARBALL}" ] || exit_error "Must-gather tarball file not found" DATA_DIR=USD(mktemp -d -t "USD{CURRENT_SCRIPT}XXXX") || exit_error "Cannot create the data directory" tar -zxf "USD{MG_TARBALL}" --directory "USD{DATA_DIR}" || exit_error "Cannot decompress the must-gather tarball" chmod a+rx "USD{DATA_DIR}" return 0 } main() { while getopts ':hp:t:' OPT; do case "USD{OPT}" in h) usage exit 0 ;; p) PAO_IMG="USD{OPTARG}" ;; t) MG_TARBALL="USD{OPTARG}" ;; ?) 
exit_error "invalid argument: USD{OPTARG}" ;; esac done shift USD((OPTIND - 1)) check_requirements || exit 1 USD{CMD} -v "USD{DATA_DIR}:USD{MUST_GATHER_VOL}:z" "USD{PAO_IMG}" "USD@" --must-gather-dir-path "USD{MUST_GATHER_VOL}" echo "" 1>&2 } main "USD@" Add execute permissions for everyone on this script: USD chmod a+x run-perf-profile-creator.sh Optional: Display the run-perf-profile-creator.sh command usage: USD ./run-perf-profile-creator.sh -h Expected output Wrapper usage: run-perf-profile-creator.sh [-h] [-p image][-t path] -- [performance-profile-creator flags] Options: -h help for run-perf-profile-creator.sh -p Performance Addon Operator image 1 -t path to a must-gather tarball 2 A tool that automates creation of Performance Profiles Usage: performance-profile-creator [flags] Flags: --disable-ht Disable Hyperthreading -h, --help help for performance-profile-creator --info string Show cluster information; requires --must-gather-dir-path, ignore the other arguments. [Valid values: log, json] (default "log") --mcp-name string MCP name corresponding to the target machines (required) --must-gather-dir-path string Must gather directory path (default "must-gather") --power-consumption-mode string The power consumption mode. [Valid values: default, low-latency, ultra-low-latency] (default "default") --profile-name string Name of the performance profile to be created (default "performance") --reserved-cpu-count int Number of reserved CPUs (required) --rt-kernel Enable Real Time Kernel (required) --split-reserved-cpus-across-numa Split the Reserved CPUs across NUMA nodes --topology-manager-policy string Kubelet Topology Manager Policy of the performance profile to be created. [Valid values: single-numa-node, best-effort, restricted] (default "restricted") --user-level-networking Run with User level Networking(DPDK) enabled Note There two types of arguments: Wrapper arguments namely -h , -p and -t PPC arguments 1 Optional: Specify the Performance Addon Operator image. If not set, the default upstream image is used: registry.redhat.io/openshift4/performance-addon-rhel8-operator:v4.10 . 2 -t is a required wrapper script argument and specifies the path to a must-gather tarball. Run the performance profile creator tool in discovery mode: Note Discovery mode inspects your cluster using the output from must-gather . The output produced includes information on: The NUMA cell partitioning with the allocated CPU IDs Whether hyperthreading is enabled Using this information you can set appropriate values for some of the arguments supplied to the Performance Profile Creator tool. USD ./run-perf-profile-creator.sh -t /must-gather/must-gather.tar.gz -- --info=log Note The info option requires a value which specifies the output format. Possible values are log and JSON. The JSON format is reserved for debugging. Check the machine config pool: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-acd1358917e9f98cbdb599aea622d78b True False False 3 3 3 0 22h worker-cnf rendered-worker-cnf-1d871ac76e1951d32b2fe92369879826 False True False 2 1 1 0 22h Create a performance profile: USD ./run-perf-profile-creator.sh -t /must-gather/must-gather.tar.gz -- --mcp-name=worker-cnf --reserved-cpu-count=2 --rt-kernel=true > my-performance-profile.yaml Note The Performance Profile Creator arguments are shown in the Performance Profile Creator arguments table. 
The following arguments are required: reserved-cpu-count mcp-name rt-kernel The mcp-name argument in this example is set to worker-cnf based on the output of the command oc get mcp . For single-node OpenShift use --mcp-name=master . Review the created YAML file: USD cat my-performance-profile.yaml Example output apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: 1-39,41-79 reserved: 0,40 nodeSelector: node-role.kubernetes.io/worker-cnf: "" numa: topologyPolicy: restricted realTimeKernel: enabled: false Apply the generated profile: Note Install the Performance Addon Operator before applying the profile. USD oc apply -f my-performance-profile.yaml 17.1.4. Performance Profile Creator arguments Table 17.1. Performance Profile Creator arguments Argument Description disable-ht Disable hyperthreading. Possible values: true or false . Default: false . Warning If this argument is set to true you should not disable hyperthreading in the BIOS. Disabling hyperthreading is accomplished with a kernel command line argument. info This captures cluster information and is used in discovery mode only. Discovery mode also requires the must-gather-dir-path argument. If any other arguments are set they are ignored. Possible values: log JSON Note These options define the output format with the JSON format being reserved for debugging. Default: log . mcp-name MCP name for example worker-cnf corresponding to the target machines. This parameter is required. must-gather-dir-path Must gather directory path. This parameter is required. When the user runs the tool with the wrapper script must-gather is supplied by the script itself and the user must not specify it. power-consumption-mode The power consumption mode. Possible values: default low-latency ultra-low-latency Default: default . profile-name Name of the performance profile to create. Default: performance . reserved-cpu-count Number of reserved CPUs. This parameter is required. Note This must be a natural number. A value of 0 is not allowed. rt-kernel Enable real-time kernel. This parameter is required. Possible values: true or false . split-reserved-cpus-across-numa Split the reserved CPUs across NUMA nodes. Possible values: true or false . Default: false . topology-manager-policy Kubelet Topology Manager policy of the performance profile to be created. Possible values: single-numa-node best-effort restricted Default: restricted . user-level-networking Run with user level networking (DPDK) enabled. Possible values: true or false . Default: false . 17.2. Additional resources For more information about the must-gather tool, see Gathering data about your cluster .
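As a hedged follow-up to either workflow, the short check below confirms that the generated profile was accepted and that the target machine config pool has rolled out the change. The names performance and worker-cnf come from the examples above; adjust them if you used different values, and <node_name> is a placeholder.

oc get performanceprofile performance -o yaml                    # confirm the applied spec matches the generated file
oc get mcp worker-cnf                                            # UPDATED returns to True once the nodes have rebooted
oc get nodes -l node-role.kubernetes.io/worker-cnf= -o wide
oc describe node <node_name> | grep -i 'kernel version'          # an rt suffix appears when rt-kernel=true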
[ "oc describe mcp/worker-rt", "Name: worker-rt Namespace: Labels: machineconfiguration.openshift.io/role=worker-rt", "oc label mcp <mcp_name> <mcp_name>=\"\"", "oc adm must-gather --image=<PAO_image> --dest-dir=<dir>", "oc adm must-gather --image=registry.redhat.io/openshift4/performance-addon-operator-must-gather-rhel8:v4.10 --dest-dir=must-gather", "tar cvaf must-gather.tar.gz must-gather/", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-acd1358917e9f98cbdb599aea622d78b True False False 3 3 3 0 22h worker-cnf rendered-worker-cnf-1d871ac76e1951d32b2fe92369879826 False True False 2 1 1 0 22h", "podman login registry.redhat.io", "Username: <username> Password: <password>", "podman run --entrypoint performance-profile-creator registry.redhat.io/openshift4/performance-addon-rhel8-operator:v4.10 -h", "A tool that automates creation of Performance Profiles Usage: performance-profile-creator [flags] Flags: --disable-ht Disable Hyperthreading -h, --help help for performance-profile-creator --info string Show cluster information; requires --must-gather-dir-path, ignore the other arguments. [Valid values: log, json] (default \"log\") --mcp-name string MCP name corresponding to the target machines (required) --must-gather-dir-path string Must gather directory path (default \"must-gather\") --power-consumption-mode string The power consumption mode. [Valid values: default, low-latency, ultra-low-latency] (default \"default\") --profile-name string Name of the performance profile to be created (default \"performance\") --reserved-cpu-count int Number of reserved CPUs (required) --rt-kernel Enable Real Time Kernel (required) --split-reserved-cpus-across-numa Split the Reserved CPUs across NUMA nodes --topology-manager-policy string Kubelet Topology Manager Policy of the performance profile to be created. 
[Valid values: single-numa-node, best-effort, restricted] (default \"restricted\") --user-level-networking Run with User level Networking(DPDK) enabled", "podman run --entrypoint performance-profile-creator -v /must-gather:/must-gather:z registry.redhat.io/openshift4/performance-addon-rhel8-operator:v4.10 --info log --must-gather-dir-path /must-gather", "podman run --entrypoint performance-profile-creator -v /must-gather:/must-gather:z registry.redhat.io/openshift4/performance-addon-rhel8-operator:v4.10 --mcp-name=worker-cnf --reserved-cpu-count=20 --rt-kernel=true --split-reserved-cpus-across-numa=false --topology-manager-policy=single-numa-node --must-gather-dir-path /must-gather --power-consumption-mode=ultra-low-latency > my-performance-profile.yaml", "cat my-performance-profile.yaml", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: additionalKernelArgs: - nmi_watchdog=0 - audit=0 - mce=off - processor.max_cstate=1 - intel_idle.max_cstate=0 - idle=poll cpu: isolated: 1,3,5,7,9,11,13,15,17,19-39,41,43,45,47,49,51,53,55,57,59-79 reserved: 0,2,4,6,8,10,12,14,16,18,40,42,44,46,48,50,52,54,56,58 nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numa: topologyPolicy: single-numa-node realTimeKernel: enabled: true", "oc apply -f my-performance-profile.yaml", "podman run --entrypoint performance-profile-creator -v /must-gather:/must-gather:z registry.redhat.io/openshift4/performance-addon-rhel8-operator:v4.10 --mcp-name=worker-cnf --reserved-cpu-count=20 --rt-kernel=true --split-reserved-cpus-across-numa=true --must-gather-dir-path /must-gather > my-performance-profile.yaml", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: 10-39,50-79 reserved: 0-9,40-49 nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numa: topologyPolicy: restricted realTimeKernel: enabled: true", "vi run-perf-profile-creator.sh", "#!/bin/bash readonly CONTAINER_RUNTIME=USD{CONTAINER_RUNTIME:-podman} readonly CURRENT_SCRIPT=USD(basename \"USD0\") readonly CMD=\"USD{CONTAINER_RUNTIME} run --entrypoint performance-profile-creator\" readonly IMG_EXISTS_CMD=\"USD{CONTAINER_RUNTIME} image exists\" readonly IMG_PULL_CMD=\"USD{CONTAINER_RUNTIME} image pull\" readonly MUST_GATHER_VOL=\"/must-gather\" PAO_IMG=\"registry.redhat.io/openshift4/performance-addon-rhel8-operator:v4.10\" MG_TARBALL=\"\" DATA_DIR=\"\" usage() { print \"Wrapper usage:\" print \" USD{CURRENT_SCRIPT} [-h] [-p image][-t path] -- [performance-profile-creator flags]\" print \"\" print \"Options:\" print \" -h help for USD{CURRENT_SCRIPT}\" print \" -p Performance Addon Operator image\" print \" -t path to a must-gather tarball\" USD{IMG_EXISTS_CMD} \"USD{PAO_IMG}\" && USD{CMD} \"USD{PAO_IMG}\" -h } function cleanup { [ -d \"USD{DATA_DIR}\" ] && rm -rf \"USD{DATA_DIR}\" } trap cleanup EXIT exit_error() { print \"error: USD*\" usage exit 1 } print() { echo \"USD*\" >&2 } check_requirements() { USD{IMG_EXISTS_CMD} \"USD{PAO_IMG}\" || USD{IMG_PULL_CMD} \"USD{PAO_IMG}\" || exit_error \"Performance Addon Operator image not found\" [ -n \"USD{MG_TARBALL}\" ] || exit_error \"Must-gather tarball file path is mandatory\" [ -f \"USD{MG_TARBALL}\" ] || exit_error \"Must-gather tarball file not found\" DATA_DIR=USD(mktemp -d -t \"USD{CURRENT_SCRIPT}XXXX\") || exit_error \"Cannot create the data directory\" tar -zxf \"USD{MG_TARBALL}\" --directory \"USD{DATA_DIR}\" || exit_error \"Cannot decompress the must-gather tarball\" chmod a+rx 
\"USD{DATA_DIR}\" return 0 } main() { while getopts ':hp:t:' OPT; do case \"USD{OPT}\" in h) usage exit 0 ;; p) PAO_IMG=\"USD{OPTARG}\" ;; t) MG_TARBALL=\"USD{OPTARG}\" ;; ?) exit_error \"invalid argument: USD{OPTARG}\" ;; esac done shift USD((OPTIND - 1)) check_requirements || exit 1 USD{CMD} -v \"USD{DATA_DIR}:USD{MUST_GATHER_VOL}:z\" \"USD{PAO_IMG}\" \"USD@\" --must-gather-dir-path \"USD{MUST_GATHER_VOL}\" echo \"\" 1>&2 } main \"USD@\"", "chmod a+x run-perf-profile-creator.sh", "./run-perf-profile-creator.sh -h", "Wrapper usage: run-perf-profile-creator.sh [-h] [-p image][-t path] -- [performance-profile-creator flags] Options: -h help for run-perf-profile-creator.sh -p Performance Addon Operator image 1 -t path to a must-gather tarball 2 A tool that automates creation of Performance Profiles Usage: performance-profile-creator [flags] Flags: --disable-ht Disable Hyperthreading -h, --help help for performance-profile-creator --info string Show cluster information; requires --must-gather-dir-path, ignore the other arguments. [Valid values: log, json] (default \"log\") --mcp-name string MCP name corresponding to the target machines (required) --must-gather-dir-path string Must gather directory path (default \"must-gather\") --power-consumption-mode string The power consumption mode. [Valid values: default, low-latency, ultra-low-latency] (default \"default\") --profile-name string Name of the performance profile to be created (default \"performance\") --reserved-cpu-count int Number of reserved CPUs (required) --rt-kernel Enable Real Time Kernel (required) --split-reserved-cpus-across-numa Split the Reserved CPUs across NUMA nodes --topology-manager-policy string Kubelet Topology Manager Policy of the performance profile to be created. [Valid values: single-numa-node, best-effort, restricted] (default \"restricted\") --user-level-networking Run with User level Networking(DPDK) enabled", "./run-perf-profile-creator.sh -t /must-gather/must-gather.tar.gz -- --info=log", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-acd1358917e9f98cbdb599aea622d78b True False False 3 3 3 0 22h worker-cnf rendered-worker-cnf-1d871ac76e1951d32b2fe92369879826 False True False 2 1 1 0 22h", "./run-perf-profile-creator.sh -t /must-gather/must-gather.tar.gz -- --mcp-name=worker-cnf --reserved-cpu-count=2 --rt-kernel=true > my-performance-profile.yaml", "cat my-performance-profile.yaml", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: 1-39,41-79 reserved: 0,40 nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numa: topologyPolicy: restricted realTimeKernel: enabled: false", "oc apply -f my-performance-profile.yaml" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/scalability_and_performance/cnf-create-performance-profiles
Chapter 3. Hardware requirements for NFV
Chapter 3. Hardware requirements for NFV This section describes the hardware requirements for NFV. Red Hat certifies hardware for use with Red Hat OpenStack Platform. For more information, see Certified hardware . 3.1. Tested NICs for NFV For a list of tested NICs for NFV, see the Red Hat Knowledgebase solution Network Adapter Fast Datapath Feature Support Matrix . Use the default driver for the supported NIC, unless you are configuring OVS-DPDK on NVIDIA (Mellanox) network interfaces. For NVIDIA network interfaces, you must set the corresponding kernel driver in the j2 network configuration template. Example In this example, the mlx5_core driver is set for the Mellanox ConnectX-5 network interface: 3.2. Troubleshooting hardware offload In a Red Hat OpenStack Platform(RHOSP) 17.1 deployment, OVS Hardware Offload might not offload flows for VMs with switchdev -capable ports and Mellanox ConnectX5 NICs. To troubleshoot and configure offload flows in this scenario, disable the ESWITCH_IPV4_TTL_MODIFY_ENABLE Mellanox firmware parameter. For more troubleshooting information about OVS Hardware Offload in RHOSP 17.1, see the Red Hat Knowledgebase solution OVS Hardware Offload with Mellanox NIC in OpenStack Platform 16.2 . Procedure Log in to the Compute nodes in your RHOSP deployment that have Mellanox NICs that you want to configure. Use the mstflint utility to query the ESWITCH_IPV4_TTL_MODIFY_ENABLE Mellanox firmware parameter . If the ESWITCH_IPV4_TTL_MODIFY_ENABLE parameter is enabled and set to 1 , then set the value to 0 to disable it. Reboot the node. 3.3. Discovering your NUMA node topology When you plan your deployment, you must understand the NUMA topology of your Compute node to partition the CPU and memory resources for optimum performance. To determine the NUMA information, perform one of the following tasks: Enable hardware introspection to retrieve this information from bare-metal nodes. Log on to each bare-metal node to manually collect the information. Note You must install and configure the undercloud before you can retrieve NUMA information through hardware introspection. For more information about undercloud configuration, see Installing and managing Red Hat OpenStack Platform with director Guide . 3.4. Retrieving hardware introspection details The Bare Metal service hardware-inspection-extras feature is enabled by default, and you can use it to retrieve hardware details for overcloud configuration. For more information about the inspection_extras parameter in the undercloud.conf file, see Director configuration parameters . For example, the numa_topology collector is part of the hardware-inspection extras and includes the following information for each NUMA node: RAM (in kilobytes) Physical CPU cores and their sibling threads NICs associated with the NUMA node Procedure To retrieve the information listed above, substitute <UUID> with the UUID of the bare-metal node to complete the following command: The following example shows the retrieved NUMA information for a bare-metal node: 3.5. NFV BIOS settings The following table describes the required BIOS settings for NFV: Note You must enable SR-IOV global and NIC settings in the BIOS, or your Red Hat OpenStack Platform (RHOSP) deployment with SR-IOV Compute nodes will fail. Table 3.1. BIOS Settings Parameter Setting C3 Power State Disabled. C6 Power State Disabled. MLC Streamer Enabled. MLC Spatial Prefetcher Enabled. DCU Data Prefetcher Enabled. DCA Enabled. CPU Power and Performance Performance. 
Memory RAS and Performance Config NUMA Optimized Enabled. Turbo Boost Disabled in NFV deployments that require deterministic performance. Enabled in all other scenarios. VT-d Enabled for Intel cards if VFIO functionality is needed. NUMA memory interleave Disabled. On processors that use the intel_idle driver, Red Hat Enterprise Linux can ignore BIOS settings and re-enable the processor C-state. You can disable intel_idle and instead use the acpi_idle driver by specifying the key-value pair intel_idle.max_cstate=0 on the kernel boot command line. Confirm that the processor is using the acpi_idle driver by checking the contents of current_driver : Note You will experience some latency after changing drivers, because it takes time for the Tuned daemon to start. However, after Tuned loads, the processor does not use the deeper C-state.
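The idle-driver check described above can be scripted; the sketch below is an illustrative verification only, assuming direct shell access to the Compute node. In a director-driven deployment the intel_idle.max_cstate=0 argument itself would normally be delivered through the kernel arguments of the relevant role rather than set by hand.

cat /sys/devices/system/cpu/cpuidle/current_driver                      # expect acpi_idle once intel_idle is disabled
grep -o 'intel_idle.max_cstate=[0-9]*' /proc/cmdline || echo 'intel_idle.max_cstate not set on the kernel command line'
cat /sys/module/intel_idle/parameters/max_cstate 2>/dev/null            # 0 when intel_idle is effectively disabled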
[ "members - type: ovs_dpdk_port name: dpdk0 driver: mlx5_core members: - type: interface name: enp3s0f0", "yum install -y mstflint mstconfig -d <PF PCI BDF> q ESWITCH_IPV4_TTL_MODIFY_ENABLE", "mstconfig -d <PF PCI BDF> s ESWITCH_IPV4_TTL_MODIFY_ENABLE=0`", "openstack baremetal introspection data save <UUID> | jq .numa_topology", "{ \"cpus\": [ { \"cpu\": 1, \"thread_siblings\": [ 1, 17 ], \"numa_node\": 0 }, { \"cpu\": 2, \"thread_siblings\": [ 10, 26 ], \"numa_node\": 1 }, { \"cpu\": 0, \"thread_siblings\": [ 0, 16 ], \"numa_node\": 0 }, { \"cpu\": 5, \"thread_siblings\": [ 13, 29 ], \"numa_node\": 1 }, { \"cpu\": 7, \"thread_siblings\": [ 15, 31 ], \"numa_node\": 1 }, { \"cpu\": 7, \"thread_siblings\": [ 7, 23 ], \"numa_node\": 0 }, { \"cpu\": 1, \"thread_siblings\": [ 9, 25 ], \"numa_node\": 1 }, { \"cpu\": 6, \"thread_siblings\": [ 6, 22 ], \"numa_node\": 0 }, { \"cpu\": 3, \"thread_siblings\": [ 11, 27 ], \"numa_node\": 1 }, { \"cpu\": 5, \"thread_siblings\": [ 5, 21 ], \"numa_node\": 0 }, { \"cpu\": 4, \"thread_siblings\": [ 12, 28 ], \"numa_node\": 1 }, { \"cpu\": 4, \"thread_siblings\": [ 4, 20 ], \"numa_node\": 0 }, { \"cpu\": 0, \"thread_siblings\": [ 8, 24 ], \"numa_node\": 1 }, { \"cpu\": 6, \"thread_siblings\": [ 14, 30 ], \"numa_node\": 1 }, { \"cpu\": 3, \"thread_siblings\": [ 3, 19 ], \"numa_node\": 0 }, { \"cpu\": 2, \"thread_siblings\": [ 2, 18 ], \"numa_node\": 0 } ], \"ram\": [ { \"size_kb\": 66980172, \"numa_node\": 0 }, { \"size_kb\": 67108864, \"numa_node\": 1 } ], \"nics\": [ { \"name\": \"ens3f1\", \"numa_node\": 1 }, { \"name\": \"ens3f0\", \"numa_node\": 1 }, { \"name\": \"ens2f0\", \"numa_node\": 0 }, { \"name\": \"ens2f1\", \"numa_node\": 0 }, { \"name\": \"ens1f1\", \"numa_node\": 0 }, { \"name\": \"ens1f0\", \"numa_node\": 0 }, { \"name\": \"eno4\", \"numa_node\": 0 }, { \"name\": \"eno1\", \"numa_node\": 0 }, { \"name\": \"eno3\", \"numa_node\": 0 }, { \"name\": \"eno2\", \"numa_node\": 0 } ] }", "cat /sys/devices/system/cpu/cpuidle/current_driver acpi_idle" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_network_functions_virtualization/hardware-req-nfv_rhosp-nfv
Chapter 14. Uninstalling a cluster on Azure
Chapter 14. Uninstalling a cluster on Azure You can remove a cluster that you deployed to Microsoft Azure. 14.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. Procedure From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program. 14.2. Deleting Microsoft Azure resources with the Cloud Credential Operator utility After uninstalling an OpenShift Container Platform cluster that uses short-term credentials managed outside the cluster, you can use the CCO utility ( ccoctl ) to remove the Microsoft Azure (Azure) resources that ccoctl created during installation. Prerequisites Extract and prepare the ccoctl binary. Uninstall an OpenShift Container Platform cluster on Azure that uses short-term credentials. Procedure Delete the Azure resources that ccoctl created by running the following command: USD ccoctl azure delete \ --name=<name> \ 1 --region=<azure_region> \ 2 --subscription-id=<azure_subscription_id> \ 3 --delete-oidc-resource-group 1 <name> matches the name that was originally used to create and tag the cloud resources. 2 <azure_region> is the Azure region in which to delete cloud resources. 3 <azure_subscription_id> is the Azure subscription ID for which to delete cloud resources. Verification To verify that the resources are deleted, query Azure. For more information, refer to Azure documentation.
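One way to double-check the cleanup, assuming the Azure CLI is installed and logged in, is to confirm that the cluster's resource groups no longer exist. The <infra_id> placeholder stands for the infraID value recorded in metadata.json; the -rg suffix is the usual naming convention for the installer-created resource group, not a value defined in this procedure.

az group list --query "[?contains(name, '<infra_id>')].name" -o tsv     # should print nothing
az group exists --name <infra_id>-rg                                     # should print false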
[ "./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2", "ccoctl azure delete --name=<name> \\ 1 --region=<azure_region> \\ 2 --subscription-id=<azure_subscription_id> \\ 3 --delete-oidc-resource-group" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_azure/uninstalling-cluster-azure
3.3.3. Converting a local Xen virtual machine
3.3.3. Converting a local Xen virtual machine Ensure that the guest virtual machine's XML is available locally, and that the storage referred to in the XML is available locally at the same paths. To convert the virtual machine from an XML file, run: Where pool is the local storage pool to hold the image, bridge_name is the name of a local network bridge to connect the converted virtual machine's network to, and guest_name.xml is the path to the virtual machine's exported XML. You may also use the --network parameter to connect to a locally managed network if your virtual machine only has a single network interface. If your virtual machine has multiple network interfaces, edit /etc/virt-v2v.conf to specify the network mapping for all interfaces. If your virtual machine uses a Xen paravirtualized kernel (it would be called something like kernel-xen or kernel-xenU ), virt-v2v will attempt to install a new kernel during the conversion process. You can avoid this requirement by installing a regular kernel, which will not reference a hypervisor in its name, alongside the Xen kernel prior to conversion. You should not make this newly installed kernel your default kernel, because Xen will not boot it. virt-v2v will make it the default during conversion. Note When converting from Xen, virt-v2v requires that the image of the source virtual machine exists in a storage pool. If the image is not currently in a storage pool, you must create one. Contact Red Hat Support for assistance creating an appropriate storage pool. Note Presently, there is a known issue with importing Citrix Xen virtual machines to run on KVM or Red Hat Enterprise Virtualization. For more information, see https://access.redhat.com/solutions/54076 .
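Because virt-v2v requires the target image to live in a libvirt storage pool, a directory-backed pool is often sufficient. The commands below are only an illustration, assuming the converted images should land under /var/lib/libvirt/images and using a hypothetical pool name v2v_pool and bridge name br0; they are not a substitute for the storage pool guidance referenced in the note above.

virsh pool-define-as v2v_pool dir - - - - /var/lib/libvirt/images   # define a directory-backed pool
virsh pool-build v2v_pool
virsh pool-start v2v_pool
virsh pool-autostart v2v_pool
virt-v2v -i libvirtxml -op v2v_pool --bridge br0 guest_name.xml     # same invocation as above, targeting the new pool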
[ "virt-v2v -i libvirtxml -op pool --bridge bridge_name guest_name.xml" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/v2v_guide/sub-sect-convert-a-local-xen-virtual-machine
Chapter 10. Identity: Managing Hosts
Chapter 10. Identity: Managing Hosts Both DNS and Kerberos are configured as part of the initial client configuration. This is required because these are the two services that bring the machine within the IdM domain and allow it to identify the IdM server it will connect with. After the initial configuration, IdM has tools to manage both of these services in response to changes in the domain services, changes to the IT environment, or changes on the machines themselves which affect Kerberos, certificate, and DNS services, like changing the client hostname. This chapter describes how to manage identity services that relate directly to the client machine: DNS entries and settings Machine authentication Hostname changes (which affect domain services) 10.1. About Hosts, Services, and Machine Identity and Authentication The basic function of an enrollment process is to create a host entry for the client machine in the IdM directory. This host entry is used to establish relationships between other hosts and even services within the domain. These relationships are part of delegating authorization and control to hosts within the domain. A host entry contains all of the information about the client within IdM: Service entries associated with the host The host and service principal Access control rules Machine information, such as its physical location and operating system Some services that run on a host can also belong to the IdM domain. Any service that can store a Kerberos principal or an SSL certificate (or both) can be configured as an IdM service. Adding a service to the IdM domain allows the service to request an SSL certificate or keytab from the domain. (Only the public key for the certificate is stored in the service record. The private key is local to the service.) An IdM domain establishes a commonality between machines, with common identity information, common policies, and shared services. Any machine which belongs to a domain functions as a client of the domain, which means it uses the services that the domain provides. An IdM domain (as described in Section 1.2, "Bringing Linux Services Together" ) provides three main services specifically for machines: DNS Kerberos Certificate management Machines are treated as another identity that is managed by IdM. Clients use DNS to identify IdM servers, services, and domain members - which, like user identities are stored in the 389 Directory Server instance for the IdM server. Like users, machines can be authenticated to the domain using Kerberos or certificates to verify the machine's identity. From the machine perspective, there are several tasks that can be performed that access these domain services: Joining the DNS domain ( machine enrollment ) Managing DNS entries and zones Managing machine authentication Authentication in IdM includes machines as well as users. Machine authentication is required for the IdM server to trust the machine and to accept IdM connections from the client software installed on that machine. After authenticating the client, the IdM server can respond to its requests. IdM supports three different approaches to machine authentication: SSH keys. 
The SSH public key for the host is created and uploaded to the host entry. From there, the System Security Services Daemon (SSSD) uses IdM as an identity provider and can work in conjunction with OpenSSH and other services to reference the public keys located centrally in Identity Management. This is described in Section 10.4, "Managing Public SSH Keys for Hosts" and the Red Hat Enterprise Linux Deployment Guide . Key tables (or keytabs , a symmetric key resembling to some extent a user password) and machine certificates. Kerberos tickets are generated as part of the Kerberos services and policies defined by the server. Initially granting a Kerberos ticket, renewing the Kerberos credentials, and even destroying the Kerberos session are all handled by the IdM services. Managing Kerberos is covered in Chapter 20, Policy: Managing the Kerberos Domain . Machine certificates. In this case, the machine uses an SSL certificate that is issued by the IdM server's certificate authority and then stored in IdM's Directory Server. The certificate is then sent to the machine to present when it authenticates to the server. On the client, certificates are managed by a service called certmonger , which is described in Appendix B, Working with certmonger .
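A brief, hedged illustration of the host lifecycle described above, using the ipa command-line tools from an enrolled administrative client; client.example.com, the IP address, and the key file path are placeholder values rather than names taken from this guide.

kinit admin
ipa host-add client.example.com --ip-address=192.0.2.10                          # create the host entry
ipa host-mod client.example.com --sshpubkey="$(cat /root/client_id_rsa.pub)"     # upload the host's public SSH key
ipa host-show client.example.com                                                 # review principal, keytab, and certificate status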
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/hosts
function::task_ns_gid
function::task_ns_gid Name function::task_ns_gid - The group identifier of the task as seen in a namespace Synopsis Arguments task task_struct pointer Description This function returns the group id of the given task as seen in the given user namespace.
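For illustration only, the one-liner below prints the namespace-visible group ID of the first task that calls getuid; it assumes SystemTap and the matching kernel debuginfo are installed, and uses task_current() simply to supply the required task_struct pointer.

stap -e 'probe syscall.getuid {
  printf("%s (pid %d) gid in ns: %d\n", execname(), pid(), task_ns_gid(task_current()))
  exit()
}'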
[ "task_ns_gid:long(task:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-task-ns-gid
function::d_path
function::d_path Name function::d_path - get the full nameidata path Synopsis Arguments nd Pointer to nameidata. Description Returns the full dirent name (full path to the root), like the kernel d_path function.
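A sketch of how d_path might be used, probing kernel.function("link_path_walk") because that function receives a struct nameidata pointer (named nd) in many kernel versions; the probe point is an assumption and may be inlined or absent on your kernel, in which case substitute any probe with a nameidata pointer in scope.

stap -e 'probe kernel.function("link_path_walk") {
  printf("%s walking from %s\n", execname(), d_path($nd))
  exit()
}'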
[ "function d_path:string(nd:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-d-path
Chapter 2. Projects
Chapter 2. Projects 2.1. Working with projects A project allows a community of users to organize and manage their content in isolation from other communities. Note Projects starting with openshift- and kube- are default projects. These projects host cluster components that run as pods and other infrastructure components. As such, OpenShift Dedicated does not allow you to create projects starting with openshift- or kube- using the oc new-project command. For OpenShift Dedicated clusters that use the Customer Cloud Subscription (CCS) model, users with cluster-admin privileges can create these projects using the oc adm new-project command. Note In OpenShift Dedicated clusters that use the Customer Cloud Subscription (CCS) model, you cannot assign an SCC to pods created in one of the default namespaces: default , kube-system , kube-public , openshift-node , openshift-infra , and openshift . You cannot use these namespaces for running pods or services. You cannot create any SCCs for OpenShift Dedicated clusters that use a Red Hat cloud account, because SCC resource creation requires cluster-admin privileges. 2.1.1. Creating a project You can use the OpenShift Dedicated web console or the OpenShift CLI ( oc ) to create a project in your cluster. 2.1.1.1. Creating a project by using the web console You can use the OpenShift Dedicated web console to create a project in your cluster. Note Projects starting with openshift- and kube- are considered critical by OpenShift Dedicated. As such, OpenShift Dedicated does not allow you to create projects starting with openshift- using the web console. Prerequisites Ensure that you have the appropriate roles and permissions to create projects, applications, and other workloads in OpenShift Dedicated. Procedure If you are using the Administrator perspective: Navigate to Home Projects . Click Create Project : In the Create Project dialog box, enter a unique name, such as myproject , in the Name field. Optional: Add the Display name and Description details for the project. Click Create . The dashboard for your project is displayed. Optional: Select the Details tab to view the project details. Optional: If you have adequate permissions for a project, you can use the Project Access tab to provide or revoke admin, edit, and view privileges for the project. If you are using the Developer perspective: Click the Project menu and select Create Project : Figure 2.1. Create project In the Create Project dialog box, enter a unique name, such as myproject , in the Name field. Optional: Add the Display name and Description details for the project. Click Create . Optional: Use the left navigation panel to navigate to the Project view and see the dashboard for your project. Optional: In the project dashboard, select the Details tab to view the project details. Optional: If you have adequate permissions for a project, you can use the Project Access tab of the project dashboard to provide or revoke admin, edit, and view privileges for the project. Additional resources Customizing the available cluster roles using the web console 2.1.1.2. Creating a project by using the CLI If allowed by your cluster administrator, you can create a new project. Note Projects starting with openshift- and kube- are considered critical by OpenShift Dedicated. As such, OpenShift Dedicated does not allow you to create Projects starting with openshift- or kube- using the oc new-project command. 
For OpenShift Dedicated clusters that use the Customer Cloud Subscription (CCS) model, users with cluster-admin privileges can create these projects using the oc adm new-project command. Procedure Run: USD oc new-project <project_name> \ --description="<description>" --display-name="<display_name>" For example: USD oc new-project hello-openshift \ --description="This is an example project" \ --display-name="Hello OpenShift" Note The number of projects you are allowed to create might be limited by the system administrator. After your limit is reached, you might have to delete an existing project in order to create a new one. 2.1.2. Viewing a project You can use the OpenShift Dedicated web console or the OpenShift CLI ( oc ) to view a project in your cluster. 2.1.2.1. Viewing a project by using the web console You can view the projects that you have access to by using the OpenShift Dedicated web console. Procedure If you are using the Administrator perspective: Navigate to Home Projects in the navigation menu. Select a project to view. The Overview tab includes a dashboard for your project. Select the Details tab to view the project details. Select the YAML tab to view and update the YAML configuration for the project resource. Select the Workloads tab to see workloads in the project. Select the RoleBindings tab to view and create role bindings for your project. If you are using the Developer perspective: Navigate to the Project page in the navigation menu. Select All Projects from the Project drop-down menu at the top of the screen to list all of the projects in your cluster. Select a project to view. The Overview tab includes a dashboard for your project. Select the Details tab to view the project details. If you have adequate permissions for a project, select the Project access tab view and update the privileges for the project. 2.1.2.2. Viewing a project using the CLI When viewing projects, you are restricted to seeing only the projects you have access to view based on the authorization policy. Procedure To view a list of projects, run: USD oc get projects You can change from the current project to a different project for CLI operations. The specified project is then used in all subsequent operations that manipulate project-scoped content: USD oc project <project_name> 2.1.3. Providing access permissions to your project using the Developer perspective You can use the Project view in the Developer perspective to grant or revoke access permissions to your project. Prerequisites You have created a project. Procedure To add users to your project and provide Admin , Edit , or View access to them: In the Developer perspective, navigate to the Project page. Select your project from the Project menu. Select the Project Access tab. Click Add access to add a new row of permissions to the default ones. Figure 2.2. Project permissions Enter the user name, click the Select a role drop-down list, and select an appropriate role. Click Save to add the new permissions. You can also use: The Select a role drop-down list, to modify the access permissions of an existing user. The Remove Access icon, to completely remove the access permissions of an existing user to the project. Note Advanced role-based access control is managed in the Roles and Roles Binding views in the Administrator perspective. 2.1.4. 
Customizing the available cluster roles using the web console In the Developer perspective of the web console, the Project Project access page enables a project administrator to grant roles to users in a project. By default, the available cluster roles that can be granted to users in a project are admin, edit, and view. As a cluster administrator, you can define which cluster roles are available in the Project access page for all projects cluster-wide. You can specify the available roles by customizing the spec.customization.projectAccess.availableClusterRoles object in the Console configuration resource. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure In the Administrator perspective, navigate to Administration Cluster settings . Click the Configuration tab. From the Configuration resource list, select Console operator.openshift.io . Navigate to the YAML tab to view and edit the YAML code. In the YAML code under spec , customize the list of available cluster roles for project access. The following example specifies the default admin , edit , and view roles: apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster # ... spec: customization: projectAccess: availableClusterRoles: - admin - edit - view Click Save to save the changes to the Console configuration resource. Verification In the Developer perspective, navigate to the Project page. Select a project from the Project menu. Select the Project access tab. Click the menu in the Role column and verify that the available roles match the configuration that you applied to the Console resource configuration. 2.1.5. Adding to a project You can add items to your project by using the +Add page in the Developer perspective. Prerequisites You have created a project. Procedure In the Developer perspective, navigate to the +Add page. Select your project from the Project menu. Click on an item on the +Add page and then follow the workflow. Note You can also use the search feature in the Add* page to find additional items to add to your project. Click * under Add at the top of the page and type the name of a component in the search field. 2.1.6. Checking the project status You can use the OpenShift Dedicated web console or the OpenShift CLI ( oc ) to view the status of your project. 2.1.6.1. Checking project status by using the web console You can review the status of your project by using the web console. Prerequisites You have created a project. Procedure If you are using the Administrator perspective: Navigate to Home Projects . Select a project from the list. Review the project status in the Overview page. If you are using the Developer perspective: Navigate to the Project page. Select a project from the Project menu. Review the project status in the Overview page. 2.1.6.2. Checking project status by using the CLI You can review the status of your project by using the OpenShift CLI ( oc ). Prerequisites You have installed the OpenShift CLI ( oc ). You have created a project. Procedure Switch to your project: USD oc project <project_name> 1 1 Replace <project_name> with the name of your project. Obtain a high-level overview of the project: USD oc status 2.1.7. Deleting a project You can use the OpenShift Dedicated web console or the OpenShift CLI ( oc ) to delete a project. When you delete a project, the server updates the project status to Terminating from Active . Then, the server clears all content from a project that is in the Terminating state before finally removing the project. 
While a project is in Terminating status, you cannot add new content to the project. Projects can be deleted from the CLI or the web console. 2.1.7.1. Deleting a project by using the web console You can delete a project by using the web console. Prerequisites You have created a project. You have the required permissions to delete the project. Procedure If you are using the Administrator perspective: Navigate to Home Projects . Select a project from the list. Click the Actions drop-down menu for the project and select Delete Project . Note The Delete Project option is not available if you do not have the required permissions to delete the project. In the Delete Project? pane, confirm the deletion by entering the name of your project. Click Delete . If you are using the Developer perspective: Navigate to the Project page. Select the project that you want to delete from the Project menu. Click the Actions drop-down menu for the project and select Delete Project . Note If you do not have the required permissions to delete the project, the Delete Project option is not available. In the Delete Project? pane, confirm the deletion by entering the name of your project. Click Delete . 2.1.7.2. Deleting a project by using the CLI You can delete a project by using the OpenShift CLI ( oc ). Prerequisites You have installed the OpenShift CLI ( oc ). You have created a project. You have the required permissions to delete the project. Procedure Delete your project: USD oc delete project <project_name> 1 1 Replace <project_name> with the name of the project that you want to delete. 2.2. Configuring project creation In OpenShift Dedicated, projects are used to group and isolate related objects. When a request is made to create a new project using the web console or oc new-project command, an endpoint in OpenShift Dedicated is used to provision the project according to a template, which can be customized. As a cluster administrator, you can allow and configure how developers and service accounts can create, or self-provision , their own projects. 2.2.1. About project creation The OpenShift Dedicated API server automatically provisions new projects based on the project template that is identified by the projectRequestTemplate parameter in the cluster's project configuration resource. If the parameter is not defined, the API server creates a default template that creates a project with the requested name, and assigns the requesting user to the admin role for that project. When a project request is submitted, the API substitutes the following parameters into the template: Table 2.1. Default project template parameters Parameter Description PROJECT_NAME The name of the project. Required. PROJECT_DISPLAYNAME The display name of the project. May be empty. PROJECT_DESCRIPTION The description of the project. May be empty. PROJECT_ADMIN_USER The user name of the administrating user. PROJECT_REQUESTING_USER The user name of the requesting user. Access to the API is granted to developers with the self-provisioner role and the self-provisioners cluster role binding. This role is available to all authenticated developers by default. 2.2.2. Modifying the template for new projects As a cluster administrator, you can modify the default project template so that new projects are created using your custom requirements. To create your own custom project template: Prerequisites You have access to an OpenShift Dedicated cluster using an account with dedicated-admin permissions. 
Procedure Log in as a user with cluster-admin privileges. Generate the default project template: USD oc adm create-bootstrap-project-template -o yaml > template.yaml Use a text editor to modify the generated template.yaml file by adding objects or modifying existing objects. The project template must be created in the openshift-config namespace. Load your modified template: USD oc create -f template.yaml -n openshift-config Edit the project configuration resource using the web console or CLI. Using the web console: Navigate to the Administration Cluster Settings page. Click Configuration to view all configuration resources. Find the entry for Project and click Edit YAML . Using the CLI: Edit the project.config.openshift.io/cluster resource: USD oc edit project.config.openshift.io/cluster Update the spec section to include the projectRequestTemplate and name parameters, and set the name of your uploaded project template. The default name is project-request . Project configuration resource with custom project template apiVersion: config.openshift.io/v1 kind: Project metadata: # ... spec: projectRequestTemplate: name: <template_name> # ... After you save your changes, create a new project to verify that your changes were successfully applied. 2.2.3. Disabling project self-provisioning You can prevent an authenticated user group from self-provisioning new projects. Procedure Log in as a user with cluster-admin privileges. View the self-provisioners cluster role binding usage by running the following command: USD oc describe clusterrolebinding.rbac self-provisioners Example output Name: self-provisioners Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate=true Role: Kind: ClusterRole Name: self-provisioner Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated:oauth Review the subjects in the self-provisioners section. Remove the self-provisioner cluster role from the group system:authenticated:oauth . If the self-provisioners cluster role binding binds only the self-provisioner role to the system:authenticated:oauth group, run the following command: USD oc patch clusterrolebinding.rbac self-provisioners -p '{"subjects": null}' If the self-provisioners cluster role binding binds the self-provisioner role to more users, groups, or service accounts than the system:authenticated:oauth group, run the following command: USD oc adm policy \ remove-cluster-role-from-group self-provisioner \ system:authenticated:oauth Edit the self-provisioners cluster role binding to prevent automatic updates to the role. Automatic updates reset the cluster roles to the default state. To update the role binding using the CLI: Run the following command: USD oc edit clusterrolebinding.rbac self-provisioners In the displayed role binding, set the rbac.authorization.kubernetes.io/autoupdate parameter value to false , as shown in the following example: apiVersion: authorization.openshift.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "false" # ... To update the role binding by using a single command: USD oc patch clusterrolebinding.rbac self-provisioners -p '{ "metadata": { "annotations": { "rbac.authorization.kubernetes.io/autoupdate": "false" } } }' Log in as an authenticated user and verify that it can no longer self-provision a project: USD oc new-project test Example output Error from server (Forbidden): You may not request a new project via this API. 
Consider customizing this project request message to provide more helpful instructions specific to your organization. 2.2.4. Customizing the project request message When a developer or a service account that is unable to self-provision projects makes a project creation request using the web console or CLI, the following error message is returned by default: You may not request a new project via this API. Cluster administrators can customize this message. Consider updating it to provide further instructions on how to request a new project specific to your organization. For example: To request a project, contact your system administrator at [email protected] . To request a new project, fill out the project request form located at https://internal.example.com/openshift-project-request . To customize the project request message: Procedure Edit the project configuration resource using the web console or CLI. Using the web console: Navigate to the Administration Cluster Settings page. Click Configuration to view all configuration resources. Find the entry for Project and click Edit YAML . Using the CLI: Log in as a user with cluster-admin privileges. Edit the project.config.openshift.io/cluster resource: USD oc edit project.config.openshift.io/cluster Update the spec section to include the projectRequestMessage parameter and set the value to your custom message: Project configuration resource with custom project request message apiVersion: config.openshift.io/v1 kind: Project metadata: # ... spec: projectRequestMessage: <message_string> # ... For example: apiVersion: config.openshift.io/v1 kind: Project metadata: # ... spec: projectRequestMessage: To request a project, contact your system administrator at [email protected]. # ... After you save your changes, attempt to create a new project as a developer or service account that is unable to self-provision projects to verify that your changes were successfully applied.
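The template and message settings described above can be combined in a single Project configuration resource. The following is only a sketch: it assumes the custom template was loaded into the openshift-config namespace under the default name project-request and reuses the example contact address from above, so substitute your own values.
apiVersion: config.openshift.io/v1
kind: Project
metadata:
  name: cluster
spec:
  projectRequestTemplate:
    name: project-request # assumed template name; use the name of your uploaded template
  projectRequestMessage: To request a project, contact your system administrator at [email protected].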
[ "oc new-project <project_name> --description=\"<description>\" --display-name=\"<display_name>\"", "oc new-project hello-openshift --description=\"This is an example project\" --display-name=\"Hello OpenShift\"", "oc get projects", "oc project <project_name>", "apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: customization: projectAccess: availableClusterRoles: - admin - edit - view", "oc project <project_name> 1", "oc status", "oc delete project <project_name> 1", "oc adm create-bootstrap-project-template -o yaml > template.yaml", "oc create -f template.yaml -n openshift-config", "oc edit project.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: <template_name>", "oc describe clusterrolebinding.rbac self-provisioners", "Name: self-provisioners Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate=true Role: Kind: ClusterRole Name: self-provisioner Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated:oauth", "oc patch clusterrolebinding.rbac self-provisioners -p '{\"subjects\": null}'", "oc adm policy remove-cluster-role-from-group self-provisioner system:authenticated:oauth", "oc edit clusterrolebinding.rbac self-provisioners", "apiVersion: authorization.openshift.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: \"false\"", "oc patch clusterrolebinding.rbac self-provisioners -p '{ \"metadata\": { \"annotations\": { \"rbac.authorization.kubernetes.io/autoupdate\": \"false\" } } }'", "oc new-project test", "Error from server (Forbidden): You may not request a new project via this API.", "You may not request a new project via this API.", "oc edit project.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestMessage: <message_string>", "apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestMessage: To request a project, contact your system administrator at [email protected]." ]
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/building_applications/projects
Preface
Preface OpenJDK (Open Java Development Kit) is a free and open source implementation of the Java Platform, Standard Edition (Java SE). The Red Hat build of OpenJDK is available in three versions: 8u, 11u, and 17u. Packages for the Red Hat build of OpenJDK are available on Red Hat Enterprise Linux and Microsoft Windows platforms and shipped as a JDK and a JRE in the Red Hat Ecosystem Catalog.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/release_notes_for_red_hat_build_of_openjdk_8.0.392/pr01
Chapter 5. Monitoring and managing upgrade of the storage cluster
Chapter 5. Monitoring and managing upgrade of the storage cluster After running the ceph orch upgrade start command to upgrade the Red Hat Ceph Storage cluster, you can check the status, pause, resume, or stop the upgrade process. The health of the cluster changes to HEALTH_WARNING during an upgrade. If the host of the cluster is offline, the upgrade is paused. Note You have to upgrade one daemon type after the other. If a daemon cannot be upgraded, the upgrade is paused. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all the nodes. At least two Ceph Manager nodes in the storage cluster: one active and one standby. Upgrade for the storage cluster initiated. Procedure Determine whether an upgrade is in process and the version to which the cluster is upgrading: Example Note You do not get a message once the upgrade is successful. Run ceph versions and ceph orch ps commands to verify the new image ID and the version of the storage cluster. Optional: Pause the upgrade process: Example Optional: Resume a paused upgrade process: Example Optional: Stop the upgrade process: Example
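Because no message is printed when the upgrade succeeds, a short verification sequence such as the following sketch is a convenient way to confirm the result. It assumes you run the commands from a node with the Ceph administrative keyring, for example inside a cephadm shell.
# confirm that no upgrade is still in progress
ceph orch upgrade status
# check the Ceph versions reported by the cluster and the image each daemon is running
ceph versions
ceph orch ps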
[ "ceph orch upgrade status", "ceph orch upgrade pause", "ceph orch upgrade resume", "ceph orch upgrade stop" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/upgrade_guide/monitoring-and-managing-upgrade-of-the-storage-cluster_upgrade
Chapter 4. Adding user preferences
Chapter 4. Adding user preferences You can change the default preferences for your profile to meet your requirements. You can set your default project, topology view (graph or list), editing medium (form or YAML), language preferences, and resource type. The changes made to the user preferences are automatically saved. 4.1. Setting user preferences You can set the default user preferences for your cluster. Procedure Log in to the OpenShift Container Platform web console using your login credentials. Use the masthead to access the user preferences under the user profile. In the General section: In the Theme field, you can set the theme that you want to work in. The console defaults to the selected theme each time you log in. In the Perspective field, you can set the default perspective you want to be logged in to. You can select the Administrator or the Developer perspective as required. If a perspective is not selected, you are logged into the perspective you last visited. In the Project field, select a project you want to work in. The console defaults to the project every time you log in. In the Topology field, you can set the topology view to default to the graph or list view. If not selected, the console defaults to the last view you used. In the Create/Edit resource method field, you can set a preference for creating or editing a resource. If both the form and YAML options are available, the console defaults to your selection. In the Language section, select Default browser language to use the default browser language settings. Otherwise, select the language that you want to use for the console. In the Notifications section, you can toggle display notifications created by users for specific projects on the Overview page or notification drawer. In the Applications section: You can view the default Resource type . For example, if the OpenShift Serverless Operator is installed, the default resource type is Serverless Deployment . Otherwise, the default resource type is Deployment . You can select another resource type to be the default resource type from the Resource Type field.
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/web_console/adding-user-preferences
Chapter 2. Why Use Virtualization?
Chapter 2. Why Use Virtualization? Virtualization can be useful both for server deployments and individual desktop stations. Desktop virtualization offers cost-efficient centralized management and better disaster recovery. In addition, by using connection tools such as ssh , it is possible to connect to a desktop remotely. When used for servers, virtualization can benefit not only larger networks, but also deployments with more than a single server. Virtualization provides live migration, high availability, fault tolerance, and streamlined backups. 2.1. Virtualization Costs Virtualization can be expensive to introduce, but it often saves money in the long term. Consider the following benefits: Less power Using virtualization negates much of the need for multiple physical platforms. This equates to less power being drawn for machine operation and cooling, resulting in reduced energy costs. The initial cost of purchasing multiple physical platforms, combined with the machines' power consumption and required cooling, is drastically cut by using virtualization. Less maintenance Provided that adequate planning is performed before migrating physical systems to virtualized ones, less time is spent maintaining them. This means less money needs to be spent on parts and labor. Extended life for installed software Older versions of software may not be able to run directly on more recent physical machines. By running older software virtually on a larger, faster system, the life of the software may be extended while taking advantage of better performance from a newer system. Predictable costs A Red Hat Enterprise Linux subscription provides support for virtualization at a fixed rate, making it easy to predict costs. Less space Consolidating servers onto fewer machines means less physical space is required for computer systems.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_getting_started_guide/chap-Virtualization_Getting_Started-Advantages
4.3. Profiling
4.3. Profiling The following sections showcase scripts that profile kernel activity by monitoring function calls. 4.3.1. Counting Function Calls Made This section describes how to identify how many times the system called a specific kernel function in a 30-second sample. Depending on your use of wildcards, you can also use this script to target multiple kernel functions. functioncallcount.stp functioncallcount.stp takes the targeted kernel function as an argument. The argument supports wildcards, which enables you to target multiple kernel functions up to a certain extent. The output of functioncallcount.stp contains the name of the function called and how many times it was called during the sample time (in alphabetical order). Example 4.11, "functioncallcount.stp Sample Output" contains an excerpt from the output of stap functioncallcount.stp "*@mm/*.c" : Example 4.11. functioncallcount.stp Sample Output
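As a usage illustration, the same script can be pointed at a single kernel function instead of a whole source directory. The function name vfs_read and the file mm/filemap.c below are only examples; any probe point accepted by kernel.function() works.
# count calls to one specific kernel function
stap functioncallcount.stp "vfs_read"
# or narrow the wildcard from the example above to a single source file
stap functioncallcount.stp "*@mm/filemap.c"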
[ "#! /usr/bin/env stap The following line command will probe all the functions in kernel's memory management code: # stap functioncallcount.stp \"*@mm/*.c\" probe kernel.function(@1).call { # probe functions listed on commandline called[probefunc()] <<< 1 # add a count efficiently } global called probe end { foreach (fn in called-) # Sort by call count (in decreasing order) # (fn+ in called) # Sort by function name printf(\"%s %d\\n\", fn, @count(called[fn])) exit() }", "[...] __vma_link 97 __vma_link_file 66 __vma_link_list 97 __vma_link_rb 97 __xchg 103 add_page_to_active_list 102 add_page_to_inactive_list 19 add_to_page_cache 19 add_to_page_cache_lru 7 all_vm_events 6 alloc_pages_node 4630 alloc_slabmgmt 67 anon_vma_alloc 62 anon_vma_free 62 anon_vma_lock 66 anon_vma_prepare 98 anon_vma_unlink 97 anon_vma_unlock 66 arch_get_unmapped_area_topdown 94 arch_get_unmapped_exec_area 3 arch_unmap_area_topdown 97 atomic_add 2 atomic_add_negative 97 atomic_dec_and_test 5153 atomic_inc 470 atomic_inc_and_test 1 [...]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_beginners_guide/mainsect-profiling
7.2. Role Mapping
7.2. Role Mapping Each JBoss Data Virtualization data role can be mapped to any number of container roles or any authenticated user. Control role membership through whatever system the JBoss Data Virtualization security domain login modules are associated with. It is possible for a user to have any number of container roles, which in turn imply a subset of JBoss Data Virtualization data roles. Each applicable JBoss Data Virtualization data role contributes cumulatively to the permissions of the user. No one role supersedes or negates the permissions of the other data roles. Note If you have an alternative security domain that your VDB should use, set the VDB security-domain property to the relevant domain.
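For illustration, a dynamic VDB that overrides the security domain might declare the property as in the following minimal sketch. This is an assumption-laden example: the VDB name ExampleVDB, the domain name my-security-domain, and the omitted model definitions are placeholders, and the domain must already be defined in the container's security configuration.
<vdb name="ExampleVDB" version="1">
    <!-- placeholder domain name; the domain must exist in the container's security subsystem -->
    <property name="security-domain" value="my-security-domain"/>
    <!-- models, translators, and data roles omitted -->
</vdb>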
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/role_mapping1
Chapter 2. Getting started using the compliance service
Chapter 2. Getting started using the compliance service This section describes how to configure your RHEL systems to report compliance data to the Insights for RHEL application. This installs necessary additional components such as the SCAP Security Guide (SSG), which is used to perform the compliance scan. Prerequisites The Insights client is deployed on the system. You must have root privileges on the system. Procedure Check the version of RHEL on the system: Review the Insights Compliance - Supported configurations article and make note of the supported SSG version for the RHEL minor version on the system. Note Some minor versions of RHEL support more than one version of SSG. The Insights compliance service will always show results for the latest supported version. Check if the supported version of the SSG package is installed on the system: Example - for RHEL 8.4 run: If it is not already installed, install the supported version of SSG on the system. Example - for RHEL 8.4 run: Assign systems to policies using the Insights compliance service UI, or using insights-client commands in the CLI: Use the compliance service UI to navigate to Security > Compliance > SCAP policies and use one of the following methods to add systems: Creating new SCAP policies Editing included systems You can also add systems by using the following insights-client commands on the CLI: insights-client --compliance-policies to list available policies and their associated ID insights-client --compliance-assign <ID> For more information about using insights-client commands to add systems, see Managing SCAP security policies in the Insights for RHEL compliance service in Assessing and Monitoring Security Policy Compliance of RHEL Systems with FedRAMP . Options for the Insights client in Client Configuration Guide for Red Hat Insights with FedRAMP . After adding each system to the needed security policy, return to the system and run the compliance scan using: Note The scan can take 1-5 minutes to complete. Navigate to Security > Compliance > Reports to view results. Optional: Schedule the compliance jobs to run with cron . Additional Resources To learn which versions of the SCAP Security Guide are supported for Red Hat Enterprise Linux minor versions, see Insights Compliance - Supported configurations . 2.1. Setting up recurring scans for Insights services To get the most accurate recommendations from Red Hat Insights services such as compliance and malware detection, you might need to manually scan and upload data collection reports to the services on a regular schedule. Use the following insights-client commands to run the commands manually: Currently, Insights does not have an automated scheduler to perform the scans for you, but you can configure a cron job to schedule automatic scans. Important Before you create a cron job, make sure that the commands work properly when you run them manually. Prerequisites The services you want to use (Compliance and Malware Detection) are configured and running on your system. Procedure At the system prompt, issue the crontab -e command to edit the crontab file. This command opens your default text editor. Add a crontab entry for the service you want to run. For example: In this example, the first command uploads a Compliance report to Insights every day at 20:10 local time. The second command uploads a malware detection report to Insights every day at 21:10 local time. Save the file and exit the text editor.
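For example, the CLI path described above, from listing policies to running the first scan, looks roughly like the following sketch; keep <ID> as a placeholder for the policy ID returned by the first command.
# list the available policies and note the ID of the policy you need
insights-client --compliance-policies
# assign this system to the chosen policy
insights-client --compliance-assign <ID>
# run the compliance scan and upload the report to Insights
insights-client --compliance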
[ "[user@insights]USD cat /etc/redhat-release", "dnf info scap-security-guide-0.1.57-3.el8_4", "dnf install scap-security-guide-0.1.57-3.el8_4", "insights-client --compliance", "insights-client --compliance insights-client --collector malware-detection", "crontab -e", "10 20 * * * /bin/insights-client --compliance 10 21 * * * /bin/insights-client --collector malware-detection" ]
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/assessing_and_monitoring_security_policy_compliance_of_rhel_systems_with_fedramp/compliance-getting-started_intro-compliance
Chapter 4. Working with tags
Chapter 4. Working with tags An image tag refers to a label or identifier assigned to a specific version or variant of a container image. Container images are typically composed of multiple layers that represent different parts of the image. Image tags are used to differentiate between different versions of an image or to provide additional information about the image. Image tags have the following benefits: Versioning and Releases : Image tags allow you to denote different versions or releases of an application or software. For example, you might have an image tagged as v1.0 to represent the initial release and v1.1 for an updated version. This helps in maintaining a clear record of image versions. Rollbacks and Testing : If you encounter issues with a new image version, you can easily revert to a previous version by specifying its tag. This is particularly helpful during debugging and testing phases. Development Environments : Image tags are beneficial when working with different environments. You might use a dev tag for a development version, qa for quality assurance testing, and prod for production, each with their respective features and configurations. Continuous Integration/Continuous Deployment (CI/CD) : CI/CD pipelines often utilize image tags to automate the deployment process. New code changes can trigger the creation of a new image with a specific tag, enabling seamless updates. Feature Branches : When multiple developers are working on different features or bug fixes, they can create distinct image tags for their changes. This helps in isolating and testing individual features. Customization : You can use image tags to customize images with different configurations, dependencies, or optimizations, while keeping track of each variant. Security and Patching : When security vulnerabilities are discovered, you can create patched versions of images with updated tags, ensuring that your systems are using the latest secure versions. Dockerfile Changes : If you modify the Dockerfile or build process, you can use image tags to differentiate between images built from the original and updated Dockerfiles. Overall, image tags provide a structured way to manage and organize container images, enabling efficient development, deployment, and maintenance workflows. 4.1. Viewing and modifying tags To view image tags on Red Hat Quay, navigate to a repository and click on the Tags tab. For example: View and modify tags from your repository 4.1.1. Adding a new image tag to an image You can add a new tag to an image in Red Hat Quay. Procedure Click the Settings , or gear , icon next to the tag and click Add New Tag . Enter a name for the tag, then click Create Tag . The new tag is now listed on the Repository Tags page. 4.1.2. Moving an image tag You can move a tag to a different image if desired. Procedure Click the Settings , or gear , icon next to the tag, click Add New Tag , and enter an existing tag name. Red Hat Quay confirms that you want the tag moved instead of added. 4.1.3. Deleting an image tag Deleting an image tag effectively removes that specific version of the image from the registry. To delete an image tag, use the following procedure. Procedure Navigate to the Tags page of a repository. Click Delete Tag . This deletes the tag and any images unique to it. Note Deleting an image tag can be reverted based on the amount of time assigned to the time machine feature. For more information, see "Reverting tag changes". 4.1.3.1.
Viewing tag history Red Hat Quay offers a comprehensive history of images and their respective image tags. Procedure Navigate to the Tag History page of a repository to view the image tag history. 4.1.3.2. Reverting tag changes Red Hat Quay offers a comprehensive time machine feature that allows older image tags to remain in the repository for set periods of time so that users can revert changes made to tags. This feature allows users to revert tag changes, like tag deletions. Procedure Navigate to the Tag History page of a repository. Find the point in the timeline at which image tags were changed or removed. Then, click the option under Revert to restore a tag to its image, or click the option under Permanently Delete to permanently delete the image tag. 4.1.4. Fetching an image by tag or digest Red Hat Quay offers multiple ways of pulling images using Docker and Podman clients. Procedure Navigate to the Tags page of a repository. Under Manifest , click the Fetch Tag icon. When the popup box appears, users are presented with the following options: Podman Pull (by tag) Docker Pull (by tag) Podman Pull (by digest) Docker Pull (by digest) Selecting any one of the four options returns a command for the respective client that allows users to pull the image. Click Copy Command to copy the command, which can be used on the command-line interface (CLI). For example: USD podman pull quay-server.example.com/quayadmin/busybox:test2 4.2. Tag Expiration Images can be set to expire from a Red Hat Quay repository at a chosen date and time using the tag expiration feature. This feature includes the following characteristics: When an image tag expires, it is deleted from the repository. If it is the last tag for a specific image, the image is also set to be deleted. Expiration is set on a per-tag basis. It is not set for a repository as a whole. After a tag is expired or deleted, it is not immediately removed from the registry. This is contingent upon the allotted time designated in the time machine feature, which defines when the tag is permanently deleted, or garbage collected. By default, this value is set at 14 days ; however, the administrator can adjust this time to one of multiple options. Up until the point that garbage collection occurs, tag changes can be reverted. The Red Hat Quay superuser has no special privilege related to deleting expired images from user repositories. There is no central mechanism for the superuser to gather information and act on user repositories. It is up to the owners of each repository to manage expiration and the deletion of their images. Tag expiration can be set up in one of two ways: By setting the quay.expires-after= LABEL in the Dockerfile when the image is created. This sets a time to expire from when the image is built. By selecting an expiration date on the Red Hat Quay UI. For example: 4.2.1. Setting tag expiration from a Dockerfile Adding a label, for example, quay.expires-after=20h by using the docker label command causes a tag to automatically expire after the time indicated. The following values for hours, days, or weeks are accepted: 1h 2d 3w Expiration begins from the time that the image is pushed to the registry. 4.2.2. Setting tag expiration from the repository Tag expiration can be set on the Red Hat Quay UI. Procedure Navigate to a repository and click Tags in the navigation pane. Click the Settings , or gear icon, for an image tag and select Change Expiration . Select the date and time when prompted, and select Change Expiration .
The tag is set to be deleted from the repository when the expiration time is reached. 4.3. Viewing Clair security scans Clair security scanner is not enabled for Red Hat Quay by default. To enable Clair, see Clair on Red Hat Quay . Procedure Navigate to a repository and click Tags in the navigation pane. This page shows the results of the security scan. To reveal more information about multi-architecture images, click See Child Manifests to see the list of manifests in extended view. Click a relevant link under See Child Manifests , for example, 1 Unknown to be redirected to the Security Scanner page. The Security Scanner page provides information for the tag, such as which CVEs the image is susceptible to, and what remediation options you might have available. Note Image scanning only lists vulnerabilities found by Clair security scanner. What users do about the vulnerabilities that are uncovered is up to the user. Red Hat Quay superusers do not act on found vulnerabilities.
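A Dockerfile that sets the expiration label described in section 4.2.1 might look like the following sketch; the base image and the 2w value are examples only, and any of the accepted hour, day, or week values can be used instead.
FROM registry.access.redhat.com/ubi8/ubi-minimal
# assumption: this tag should expire roughly two weeks after it is pushed
LABEL quay.expires-after=2w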
[ "podman pull quay-server.example.com/quayadmin/busybox:test2" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/use_red_hat_quay/working-with-tags
Chapter 12. Volume Snapshots
Chapter 12. Volume Snapshots A volume snapshot is the state of the storage volume in a cluster at a particular point in time. These snapshots help to use storage more efficiently by not having to make a full copy each time and can be used as building blocks for developing an application. You can create multiple snapshots of the same persistent volume claim (PVC). For CephFS, you can create up to 100 snapshots per PVC. For RADOS Block Device (RBD), you can create up to 512 snapshots per PVC. Note You cannot schedule periodic creation of snapshots. 12.1. Creating volume snapshots You can create a volume snapshot either from the Persistent Volume Claim (PVC) page or the Volume Snapshots page. Prerequisites For a consistent snapshot, the PVC should be in Bound state and not be in use. Ensure to stop all IO before taking the snapshot. Note OpenShift Data Foundation only provides crash consistency for a volume snapshot of a PVC if a pod is using it. For application consistency, be sure to first tear down a running pod to ensure consistent snapshots or use any quiesce mechanism provided by the application to ensure it. Procedure From the Persistent Volume Claims page Click Storage Persistent Volume Claims from the OpenShift Web Console. To create a volume snapshot, do one of the following: Beside the desired PVC, click Action menu (...) Create Snapshot . Click on the PVC for which you want to create the snapshot and click Actions Create Snapshot . Enter a Name for the volume snapshot. Choose the Snapshot Class from the drop-down list. Click Create . You will be redirected to the Details page of the volume snapshot that is created. From the Volume Snapshots page Click Storage Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots page, click Create Volume Snapshot . Choose the required Project from the drop-down list. Choose the Persistent Volume Claim from the drop-down list. Enter a Name for the snapshot. Choose the Snapshot Class from the drop-down list. Click Create . You will be redirected to the Details page of the volume snapshot that is created. Verification steps Go to the Details page of the PVC and click the Volume Snapshots tab to see the list of volume snapshots. Verify that the new volume snapshot is listed. Click Storage Volume Snapshots from the OpenShift Web Console. Verify that the new volume snapshot is listed. Wait for the volume snapshot to be in Ready state. 12.2. Restoring volume snapshots When you restore a volume snapshot, a new Persistent Volume Claim (PVC) gets created. The restored PVC is independent of the volume snapshot and the parent PVC. You can restore a volume snapshot from either the Persistent Volume Claim page or the Volume Snapshots page. Procedure From the Persistent Volume Claims page You can restore volume snapshot from the Persistent Volume Claims page only if the parent PVC is present. Click Storage Persistent Volume Claims from the OpenShift Web Console. Click on the PVC name with the volume snapshot to restore a volume snapshot as a new PVC. In the Volume Snapshots tab, click the Action menu (...) to the volume snapshot you want to restore. Click Restore as new PVC . Enter a name for the new PVC. Select the Storage Class name. Note For Rados Block Device (RBD), you must select a storage class with the same pool as that of the parent PVC. Restoring the snapshot of an encrypted PVC using a storage class where encryption is not enabled and vice versa is not supported. Select the Access Mode of your choice. 
Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with the ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Optional: For RBD, select Volume mode . Click Restore . You are redirected to the new PVC details page. From the Volume Snapshots page Click Storage Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots tab, click the Action menu (...) next to the volume snapshot you want to restore. Click Restore as new PVC . Enter a name for the new PVC. Select the Storage Class name. Note For Rados Block Device (RBD), you must select a storage class with the same pool as that of the parent PVC. Restoring the snapshot of an encrypted PVC using a storage class where encryption is not enabled and vice versa is not supported. Select the Access Mode of your choice. Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with the ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Optional: For RBD, select Volume mode . Click Restore . You are redirected to the new PVC details page. Verification steps Click Storage Persistent Volume Claims from the OpenShift Web Console and confirm that the new PVC is listed in the Persistent Volume Claims page. Wait for the new PVC to reach Bound state. 12.3. Deleting volume snapshots Prerequisites For deleting a volume snapshot, the volume snapshot class which is used in that particular volume snapshot should be present. Procedure From Persistent Volume Claims page Click Storage Persistent Volume Claims from the OpenShift Web Console. Click on the PVC name which has the volume snapshot that needs to be deleted. In the Volume Snapshots tab, beside the desired volume snapshot, click Action menu (...) Delete Volume Snapshot . From Volume Snapshots page Click Storage Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots page, beside the desired volume snapshot, click Action menu (...) Delete Volume Snapshot . Verification steps Ensure that the deleted volume snapshot is not present in the Volume Snapshots tab of the PVC details page. Click Storage Volume Snapshots and ensure that the deleted volume snapshot is not listed.
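The console steps in this chapter correspond to the standard Kubernetes snapshot API, so an equivalent manifest can be a helpful reference when working outside the web console. The following is only a sketch: the snapshot name, namespace, PVC name, and VolumeSnapshotClass name are placeholders that must match objects in your cluster.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mysql-pvc-snapshot # placeholder snapshot name
  namespace: my-project # placeholder namespace
spec:
  volumeSnapshotClassName: <snapshot_class_name> # the class you would select in the console
  source:
    persistentVolumeClaimName: mysql-pvc # placeholder: the Bound PVC to snapshot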
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/volume-snapshots_osp
Chapter 2. Managing DNS zones in IdM
Chapter 2. Managing DNS zones in IdM As Identity Management (IdM) administrator, you can manage how IdM DNS zones work. The chapter describes the following topics and procedures: What DNS zone types are supported in IdM How to add primary IdM DNS zones using the IdM Web UI How to add primary IdM DNS zones using the IdM CLI How to remove primary IdM DNS zones using the IdM Web UI How to remove primary IdM DNS zones using the IdM CLI What DNS attributes you can configure in IdM How you can configure these attributes in the IdM Web UI How you can configure these attributes in the IdM CLI How zone transfers work in IdM How you can allow zone transfers in the IdM Web UI How you can allow zone transfers in the IdM CLI Prerequisites DNS service is installed on the IdM server. For more information about how to install an IdM server with integrated DNS, see one of the following links: Installing an IdM server: With integrated DNS, with an integrated CA as the root CA Installing an IdM server: With integrated DNS, with an external CA as the root CA Installing an IdM server: With integrated DNS, without a CA 2.1. Supported DNS zone types Identity Management (IdM) supports two types of DNS zones: primary and forward zones. These two types of zones are described here, including an example scenario for DNS forwarding. Note This guide uses the BIND terminology for zone types, which is different from the terminology used for Microsoft Windows DNS. Primary zones in BIND serve the same purpose as forward lookup zones and reverse lookup zones in Microsoft Windows DNS. Forward zones in BIND serve the same purpose as conditional forwarders in Microsoft Windows DNS. Primary DNS zones Primary DNS zones contain authoritative DNS data and can accept dynamic DNS updates. This behavior is equivalent to the type master setting in standard BIND configuration. You can manage primary zones using the ipa dnszone-* commands. In compliance with standard DNS rules, every primary zone must contain start of authority (SOA) and nameserver (NS) records. IdM generates these records automatically when the DNS zone is created, but you must copy the NS records manually to the parent zone to create proper delegation. In accordance with standard BIND behavior, queries for names for which the server is not authoritative are forwarded to other DNS servers. These DNS servers, so-called forwarders, may or may not be authoritative for the query. Example 2.1. Example scenario for DNS forwarding The IdM server contains the test.example. primary zone. This zone contains an NS delegation record for the sub.test.example. name. In addition, the test.example. zone is configured with the 192.0.2.254 forwarder IP address for the sub.test.example. subzone. A client querying the name nonexistent.test.example. receives the NXDomain answer, and no forwarding occurs because the IdM server is authoritative for this name. On the other hand, querying for the host1.sub.test.example. name is forwarded to the configured forwarder 192.0.2.254 because the IdM server is not authoritative for this name. Forward DNS zones From the perspective of IdM, forward DNS zones do not contain any authoritative data. In fact, a forward "zone" usually only contains two pieces of information: A domain name The IP address of a DNS server associated with the domain All queries for names belonging to the domain defined are forwarded to the specified IP address. This behavior is equivalent to the type forward setting in standard BIND configuration.
You can manage forward zones using the ipa dnsforwardzone-* commands. Forward DNS zones are especially useful in the context of IdM-Active Directory (AD) trusts. If the IdM DNS server is authoritative for the idm.example.com zone and the AD DNS server is authoritative for the ad.example.com zone, then ad.example.com is a DNS forward zone for the idm.example.com primary zone. That means that when a query comes from an IdM client for the IP address of somehost.ad.example.com , the query is forwarded to an AD domain controller specified in the ad.example.com IdM DNS forward zone. 2.2. Adding a primary DNS zone in IdM Web UI Follow this procedure to add a primary DNS zone using the Identity Management (IdM) Web UI. Prerequisites You are logged in as IdM administrator. Procedure In the IdM Web UI, click Network Services DNS DNS Zones . Figure 2.1. Managing IdM DNS primary zones Click Add at the top of the list of all zones. Provide the zone name. Figure 2.2. Entering an new IdM primary zone Click Add . 2.3. Adding a primary DNS zone in IdM CLI Follow this procedure to add a primary DNS zone using the Identity Management (IdM) command-line interface (CLI). Prerequisites You are logged in as IdM administrator. Procedure The ipa dnszone-add command adds a new zone to the DNS domain. Adding a new zone requires you to specify the name of the new subdomain. You can pass the subdomain name directly with the command: If you do not pass the name to ipa dnszone-add , the script prompts for it automatically. Additional resources See ipa dnszone-add --help . 2.4. Removing a primary DNS zone in IdM Web UI Follow this procedure to remove a primary DNS zone from Identity Management (IdM) using the IdM Web UI. Prerequisites You are logged in as IdM administrator. Procedure In the IdM Web UI, click Network Services DNS DNS Zones . Select the check box by the zone name and click Delete . Figure 2.3. Removing a primary DNS Zone In the Remove DNS zones dialog window, confirm that you want to delete the selected zone. 2.5. Removing a primary DNS zone in IdM CLI Follow this procedure to remove a primary DNS zone from Identity Management (IdM) using the IdM command-line interface (CLI). Prerequisites You are logged in as IdM administrator. Procedure To remove a primary DNS zone, enter the ipa dnszone-del command, followed by the name of the zone you want to remove. For example: 2.6. DNS configuration priorities You can configure many DNS configuration options on the following levels. Each level has a different priority. Zone-specific configuration The level of configuration specific for a particular zone defined in IdM has the highest priority. You can manage zone-specific configuration by using the ipa dnszone-* and ipa dnsforwardzone-* commands. Per-server configuration You are asked to define per-server forwarders during the installation of an IdM server. You can manage per-server forwarders by using the ipa dnsserver-* commands. If you do not want to set a per-server forwarder when installing a replica, you can use the --no-forwarder option. Global DNS configuration If no zone-specific configuration is defined, IdM uses global DNS configuration stored in LDAP. You can manage global DNS configuration using the ipa dnsconfig-* commands. Settings defined in global DNS configuration are applied to all IdM DNS servers. Configuration in /etc/named.conf Configuration defined in the /etc/named.conf file on each IdM DNS server has the lowest priority. It is specific for each server and must be edited manually. 
The /etc/named.conf file is usually only used to specify DNS forwarding to a local DNS cache. Other options are managed using the commands for zone-specific and global DNS configuration mentioned above. You can configure DNS options on multiple levels at the same time. In such cases, configuration with the highest priority takes precedence over configuration defined at lower levels. Additional resources The Priority order of configuration section in Per Server Config in LDAP 2.7. Configuration attributes of primary IdM DNS zones Identity Management (IdM) creates a new zone with certain default configuration, such as the refresh periods, transfer settings, or cache settings. In IdM DNS zone attributes , you can find the attributes of the default zone configuration that you can modify using one of the following options: The dnszone-mod command in the command-line interface (CLI). For more information, see Editing the configuration of a primary DNS zone in IdM CLI . The IdM Web UI. For more information, see Editing the configuration of a primary DNS zone in IdM Web UI . An Ansible playbook that uses the ipadnszone module. For more information, see Managing DNS zones in IdM . Along with setting the actual information for the zone, the settings define how the DNS server handles the start of authority (SOA) record entries and how it updates its records from the DNS name server. Table 2.1. IdM DNS zone attributes Attribute Command-Line Option Description Authoritative name server --name-server Sets the domain name of the primary DNS name server, also known as SOA MNAME. By default, each IdM server advertises itself in the SOA MNAME field. Consequently, the value stored in LDAP using --name-server is ignored. Administrator e-mail address --admin-email Sets the email address to use for the zone administrator. This defaults to the root account on the host. SOA serial --serial Sets a serial number in the SOA record. Note that IdM sets the version number automatically and users are not expected to modify it. SOA refresh --refresh Sets the interval, in seconds, for a secondary DNS server to wait before requesting updates from the primary DNS server. SOA retry --retry Sets the time, in seconds, to wait before retrying a failed refresh operation. SOA expire --expire Sets the time, in seconds, that a secondary DNS server will try to perform a refresh update before ending the operation attempt. SOA minimum --minimum Sets the time to live (TTL) value in seconds for negative caching according to RFC 2308 . SOA time to live --ttl Sets TTL in seconds for records at zone apex. In zone example.com , for example, all records (A, NS, or SOA) under name example.com are configured, but no other domain names, like test.example.com , are affected. Default time to live --default-ttl Sets the default time to live (TTL) value in seconds for negative caching for all values in a zone that never had an individual TTL value set before. Requires a restart of the named-pkcs11 service on all IdM DNS servers after changes to take effect. BIND update policy --update-policy Sets the permissions allowed to clients in the DNS zone. Dynamic update --dynamic-update =TRUE|FALSE Enables dynamic updates to DNS records for clients. Note that if this is set to false, IdM client machines will not be able to add or update their IP address. Allow transfer --allow-transfer = string Gives a list of IP addresses or network names which are allowed to transfer the given zone, separated by semicolons (;). Zone transfers are disabled by default. 
The default --allow-transfer value is none . Allow query --allow-query Gives a list of IP addresses or network names which are allowed to issue DNS queries, separated by semicolons (;). Allow PTR sync --allow-sync-ptr =1|0 Sets whether A or AAAA records (forward records) for the zone will be automatically synchronized with the PTR (reverse) records. Zone forwarders --forwarder = IP_address Specifies a forwarder specifically configured for the DNS zone. This is separate from any global forwarders used in the IdM domain. To specify multiple forwarders, use the option multiple times. Forward policy --forward-policy =none|only|first Specifies the forward policy. For information about the supported policies, see DNS forward policies in IdM . 2.8. Editing the configuration of a primary DNS zone in IdM Web UI Follow this procedure to edit the configuration attributes of a primary Identity Management (IdM) DNS using the IdM Web UI. Prerequisites You are logged in as IdM administrator. Procedure In the IdM Web UI, click Network Services DNS DNS Zones . Figure 2.4. DNS primary zones management In the DNS Zones section, click on the zone name in the list of all zones to open the DNS zone page. Figure 2.5. Editing a primary zone Click Settings . Figure 2.6. The Settings tab in the primary zone edit page Change the zone configuration as required. For information about the available settings, see IdM DNS zone attributes . Click Save to confirm the new configuration. Note If you are changing the default time to live (TTL) of a zone, restart the named-pkcs11 service on all IdM DNS servers to make the changes take effect. All other settings are automatically activated immediately. 2.9. Editing the configuration of a primary DNS zone in IdM CLI Follow this procedure to edit the configuration of a primary DNS zone using the Identity Management (IdM) command-line interface (CLI). Prerequisites You are logged in as IdM administrator. Procedure To modify an existing primary DNS zone, use the ipa dnszone-mod command. For example, to set the time to wait before retrying a failed refresh operation to 1800 seconds: For more information about the available settings and their corresponding CLI options, see IdM DNS zone attributes . If a specific setting does not have a value in the DNS zone entry you are modifying, the ipa dnszone-mod command adds the value. If the setting does not have a value, the command overwrites the current value with the specified value. Note If you are changing the default time to live (TTL) of a zone, restart the named-pkcs11 service on all IdM DNS servers to make the changes take effect. All other settings are automatically activated immediately. Additional resources See ipa dnszone-mod --help . 2.10. Zone transfers in IdM In an Identity Management (IdM) deployment that has integrated DNS, you can use zone transfers to copy all resource records from one name server to another. Name servers maintain authoritative data for their zones. If you make changes to the zone on a DNS server that is authoritative for zone A DNS zone, you must distribute the changes among the other name servers in the IdM DNS domain that are outside zone A . Important The IdM-integrated DNS can be written to by different servers simultaneously. The Start of Authority (SOA) serial numbers in IdM zones are not synchronized among the individual IdM DNS servers. For this reason, configure your DNS servers outside the to-be-transferred zone to only use one specific DNS server inside the to-be-transferred zone. 
This prevents zone transfer failures caused by non-synchronized SOA serial numbers. IdM supports zone transfers according to the RFC 5936 (AXFR) and RFC 1995 (IXFR) standards. Additional resources See Enabling zone transfers in IdM Web UI . See Enabling zone transfers in IdM CLI . 2.11. Enabling zone transfers in IdM Web UI Follow this procedure to enable zone transfers in Identity Management (IdM) using the IdM Web UI. Prerequisites You are logged in as IdM administrator. Procedure In the IdM Web UI, click Network Services DNS DNS Zones . Click Settings . Under Allow transfer , specify the name servers to which you want to transfer the zone records. Figure 2.7. Enabling zone transfers Click Save at the top of the DNS zone page to confirm the new configuration. 2.12. Enabling zone transfers in IdM CLI Follow this procedure to enable zone transfers in Identity Management (IdM) using the IdM command-line interface (CLI). Prerequisites You are logged in as IdM administrator. You have root access to the secondary DNS servers. Procedure To enable zone transfers in the BIND service, enter the ipa dnszone-mod command, and specify the list of name servers that are outside the to-be-transferred zone to which the zone records will be transferred using the --allow-transfer option. For example: Verification SSH to one of the DNS servers to which zone transfer has been enabled: Transfer the IdM DNS zone using a tool such as the dig utility: If the command returns no error, you have successfully enabled zone transfer for zone_name . 2.13. Additional resources See Using Ansible playbooks to manage IdM DNS zones .
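As a further illustration of the attributes in Table 2.1, several of them can be changed in a single ipa dnszone-mod call. The zone name and all values in the following sketch are examples to adapt to your environment.
# example: adjust the SOA refresh interval, the zone apex TTL, and dynamic updates in one call
ipa dnszone-mod idm.example.com --refresh=3600 --ttl=1800 --dynamic-update=TRUE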
[ "ipa dnszone-add newzone.idm.example.com", "ipa dnszone-del idm.example.com", "ipa dnszone-mod --retry 1800", "ipa dnszone-mod --allow-transfer=192.0.2.1;198.51.100.1;203.0.113.1 idm.example.com", "ssh 192.0.2.1", "dig @ipa-server zone_name AXFR" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/working_with_dns_in_identity_management/managing-dns-zones-in-idm_working-with-dns-in-identity-management
Chapter 2. Projects
Chapter 2. Projects 2.1. Working with projects A project allows a community of users to organize and manage their content in isolation from other communities. Note Projects starting with openshift- and kube- are default projects . These projects host cluster components that run as pods and other infrastructure components. As such, OpenShift Container Platform does not allow you to create projects starting with openshift- or kube- using the oc new-project command. Cluster administrators can create these projects using the oc adm new-project command. Important Do not run workloads in or share access to default projects. Default projects are reserved for running core cluster components. The following default projects are considered highly privileged: default , kube-public , kube-system , openshift , openshift-infra , openshift-node , and other system-created projects that have the openshift.io/run-level label set to 0 or 1 . Functionality that relies on admission plugins, such as pod security admission, security context constraints, cluster resource quotas, and image reference resolution, does not work in highly privileged projects. 2.1.1. Creating a project You can use the OpenShift Container Platform web console or the OpenShift CLI ( oc ) to create a project in your cluster. 2.1.1.1. Creating a project by using the web console You can use the OpenShift Container Platform web console to create a project in your cluster. Note Projects starting with openshift- and kube- are considered critical by OpenShift Container Platform. As such, OpenShift Container Platform does not allow you to create projects starting with openshift- using the web console. Prerequisites Ensure that you have the appropriate roles and permissions to create projects, applications, and other workloads in OpenShift Container Platform. Procedure If you are using the Administrator perspective: Navigate to Home Projects . Click Create Project : In the Create Project dialog box, enter a unique name, such as myproject , in the Name field. Optional: Add the Display name and Description details for the project. Click Create . The dashboard for your project is displayed. Optional: Select the Details tab to view the project details. Optional: If you have adequate permissions for a project, you can use the Project Access tab to provide or revoke admin, edit, and view privileges for the project. If you are using the Developer perspective: Click the Project menu and select Create Project : Figure 2.1. Create project In the Create Project dialog box, enter a unique name, such as myproject , in the Name field. Optional: Add the Display name and Description details for the project. Click Create . Optional: Use the left navigation panel to navigate to the Project view and see the dashboard for your project. Optional: In the project dashboard, select the Details tab to view the project details. Optional: If you have adequate permissions for a project, you can use the Project Access tab of the project dashboard to provide or revoke admin, edit, and view privileges for the project. Additional resources Customizing the available cluster roles using the web console 2.1.1.2. Creating a project by using the CLI If allowed by your cluster administrator, you can create a new project. Note Projects starting with openshift- and kube- are considered critical by OpenShift Container Platform. As such, OpenShift Container Platform does not allow you to create Projects starting with openshift- or kube- using the oc new-project command. 
Cluster administrators can create these projects using the oc adm new-project command. Procedure Run: USD oc new-project <project_name> \ --description="<description>" --display-name="<display_name>" For example: USD oc new-project hello-openshift \ --description="This is an example project" \ --display-name="Hello OpenShift" Note The number of projects you are allowed to create might be limited by the system administrator. After your limit is reached, you might have to delete an existing project in order to create a new one. 2.1.2. Viewing a project You can use the OpenShift Container Platform web console or the OpenShift CLI ( oc ) to view a project in your cluster. 2.1.2.1. Viewing a project by using the web console You can view the projects that you have access to by using the OpenShift Container Platform web console. Procedure If you are using the Administrator perspective: Navigate to Home Projects in the navigation menu. Select a project to view. The Overview tab includes a dashboard for your project. Select the Details tab to view the project details. Select the YAML tab to view and update the YAML configuration for the project resource. Select the Workloads tab to see workloads in the project. Select the RoleBindings tab to view and create role bindings for your project. If you are using the Developer perspective: Navigate to the Project page in the navigation menu. Select All Projects from the Project drop-down menu at the top of the screen to list all of the projects in your cluster. Select a project to view. The Overview tab includes a dashboard for your project. Select the Details tab to view the project details. If you have adequate permissions for a project, select the Project access tab view and update the privileges for the project. 2.1.2.2. Viewing a project using the CLI When viewing projects, you are restricted to seeing only the projects you have access to view based on the authorization policy. Procedure To view a list of projects, run: USD oc get projects You can change from the current project to a different project for CLI operations. The specified project is then used in all subsequent operations that manipulate project-scoped content: USD oc project <project_name> 2.1.3. Providing access permissions to your project using the Developer perspective You can use the Project view in the Developer perspective to grant or revoke access permissions to your project. Prerequisites You have created a project. Procedure To add users to your project and provide Admin , Edit , or View access to them: In the Developer perspective, navigate to the Project page. Select your project from the Project menu. Select the Project Access tab. Click Add access to add a new row of permissions to the default ones. Figure 2.2. Project permissions Enter the user name, click the Select a role drop-down list, and select an appropriate role. Click Save to add the new permissions. You can also use: The Select a role drop-down list, to modify the access permissions of an existing user. The Remove Access icon, to completely remove the access permissions of an existing user to the project. Note Advanced role-based access control is managed in the Roles and Roles Binding views in the Administrator perspective. 2.1.4. Customizing the available cluster roles using the web console In the Developer perspective of the web console, the Project Project access page enables a project administrator to grant roles to users in a project. 
By default, the available cluster roles that can be granted to users in a project are admin, edit, and view. As a cluster administrator, you can define which cluster roles are available in the Project access page for all projects cluster-wide. You can specify the available roles by customizing the spec.customization.projectAccess.availableClusterRoles object in the Console configuration resource. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure In the Administrator perspective, navigate to Administration Cluster settings . Click the Configuration tab. From the Configuration resource list, select Console operator.openshift.io . Navigate to the YAML tab to view and edit the YAML code. In the YAML code under spec , customize the list of available cluster roles for project access. The following example specifies the default admin , edit , and view roles: apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster # ... spec: customization: projectAccess: availableClusterRoles: - admin - edit - view Click Save to save the changes to the Console configuration resource. Verification In the Developer perspective, navigate to the Project page. Select a project from the Project menu. Select the Project access tab. Click the menu in the Role column and verify that the available roles match the configuration that you applied to the Console resource configuration. 2.1.5. Adding to a project You can add items to your project by using the +Add page in the Developer perspective. Prerequisites You have created a project. Procedure In the Developer perspective, navigate to the +Add page. Select your project from the Project menu. Click on an item on the +Add page and then follow the workflow. Note You can also use the search feature in the Add* page to find additional items to add to your project. Click * under Add at the top of the page and type the name of a component in the search field. 2.1.6. Checking the project status You can use the OpenShift Container Platform web console or the OpenShift CLI ( oc ) to view the status of your project. 2.1.6.1. Checking project status by using the web console You can review the status of your project by using the web console. Prerequisites You have created a project. Procedure If you are using the Administrator perspective: Navigate to Home Projects . Select a project from the list. Review the project status in the Overview page. If you are using the Developer perspective: Navigate to the Project page. Select a project from the Project menu. Review the project status in the Overview page. 2.1.6.2. Checking project status by using the CLI You can review the status of your project by using the OpenShift CLI ( oc ). Prerequisites You have installed the OpenShift CLI ( oc ). You have created a project. Procedure Switch to your project: USD oc project <project_name> 1 1 Replace <project_name> with the name of your project. Obtain a high-level overview of the project: USD oc status 2.1.7. Deleting a project You can use the OpenShift Container Platform web console or the OpenShift CLI ( oc ) to delete a project. When you delete a project, the server updates the project status to Terminating from Active . Then, the server clears all content from a project that is in the Terminating state before finally removing the project. While a project is in Terminating status, you cannot add new content to the project. Projects can be deleted from the CLI or the web console. 2.1.7.1. 
Deleting a project by using the web console You can delete a project by using the web console. Prerequisites You have created a project. You have the required permissions to delete the project. Procedure If you are using the Administrator perspective: Navigate to Home Projects . Select a project from the list. Click the Actions drop-down menu for the project and select Delete Project . Note The Delete Project option is not available if you do not have the required permissions to delete the project. In the Delete Project? pane, confirm the deletion by entering the name of your project. Click Delete . If you are using the Developer perspective: Navigate to the Project page. Select the project that you want to delete from the Project menu. Click the Actions drop-down menu for the project and select Delete Project . Note If you do not have the required permissions to delete the project, the Delete Project option is not available. In the Delete Project? pane, confirm the deletion by entering the name of your project. Click Delete . 2.1.7.2. Deleting a project by using the CLI You can delete a project by using the OpenShift CLI ( oc ). Prerequisites You have installed the OpenShift CLI ( oc ). You have created a project. You have the required permissions to delete the project. Procedure Delete your project: USD oc delete project <project_name> 1 1 Replace <project_name> with the name of the project that you want to delete. 2.2. Creating a project as another user Impersonation allows you to create a project as a different user. 2.2.1. API impersonation You can configure a request to the OpenShift Container Platform API to act as though it originated from another user. For more information, see User impersonation in the Kubernetes documentation. 2.2.2. Impersonating a user when you create a project You can impersonate a different user when you create a project request. Because system:authenticated:oauth is the only bootstrap group that can create project requests, you must impersonate that group. Procedure To create a project request on behalf of a different user: USD oc new-project <project> --as=<user> \ --as-group=system:authenticated --as-group=system:authenticated:oauth 2.3. Configuring project creation In OpenShift Container Platform, projects are used to group and isolate related objects. When a request is made to create a new project using the web console or oc new-project command, an endpoint in OpenShift Container Platform is used to provision the project according to a template, which can be customized. As a cluster administrator, you can allow and configure how developers and service accounts can create, or self-provision , their own projects. 2.3.1. About project creation The OpenShift Container Platform API server automatically provisions new projects based on the project template that is identified by the projectRequestTemplate parameter in the cluster's project configuration resource. If the parameter is not defined, the API server creates a default template that creates a project with the requested name, and assigns the requesting user to the admin role for that project. When a project request is submitted, the API substitutes the following parameters into the template: Table 2.1. Default project template parameters Parameter Description PROJECT_NAME The name of the project. Required. PROJECT_DISPLAYNAME The display name of the project. May be empty. PROJECT_DESCRIPTION The description of the project. May be empty. PROJECT_ADMIN_USER The user name of the administrating user. 
PROJECT_REQUESTING_USER The user name of the requesting user. Access to the API is granted to developers with the self-provisioner role and the self-provisioners cluster role binding. This role is available to all authenticated developers by default. 2.3.2. Modifying the template for new projects As a cluster administrator, you can modify the default project template so that new projects are created using your custom requirements. To create your own custom project template: Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. Procedure Log in as a user with cluster-admin privileges. Generate the default project template: USD oc adm create-bootstrap-project-template -o yaml > template.yaml Use a text editor to modify the generated template.yaml file by adding objects or modifying existing objects. The project template must be created in the openshift-config namespace. Load your modified template: USD oc create -f template.yaml -n openshift-config Edit the project configuration resource using the web console or CLI. Using the web console: Navigate to the Administration Cluster Settings page. Click Configuration to view all configuration resources. Find the entry for Project and click Edit YAML . Using the CLI: Edit the project.config.openshift.io/cluster resource: USD oc edit project.config.openshift.io/cluster Update the spec section to include the projectRequestTemplate and name parameters, and set the name of your uploaded project template. The default name is project-request . Project configuration resource with custom project template apiVersion: config.openshift.io/v1 kind: Project metadata: # ... spec: projectRequestTemplate: name: <template_name> # ... After you save your changes, create a new project to verify that your changes were successfully applied. 2.3.3. Disabling project self-provisioning You can prevent an authenticated user group from self-provisioning new projects. Procedure Log in as a user with cluster-admin privileges. View the self-provisioners cluster role binding usage by running the following command: USD oc describe clusterrolebinding.rbac self-provisioners Example output Name: self-provisioners Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate=true Role: Kind: ClusterRole Name: self-provisioner Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated:oauth Review the subjects in the self-provisioners section. Remove the self-provisioner cluster role from the group system:authenticated:oauth . If the self-provisioners cluster role binding binds only the self-provisioner role to the system:authenticated:oauth group, run the following command: USD oc patch clusterrolebinding.rbac self-provisioners -p '{"subjects": null}' If the self-provisioners cluster role binding binds the self-provisioner role to more users, groups, or service accounts than the system:authenticated:oauth group, run the following command: USD oc adm policy \ remove-cluster-role-from-group self-provisioner \ system:authenticated:oauth Edit the self-provisioners cluster role binding to prevent automatic updates to the role. Automatic updates reset the cluster roles to the default state. 
To update the role binding using the CLI: Run the following command: USD oc edit clusterrolebinding.rbac self-provisioners In the displayed role binding, set the rbac.authorization.kubernetes.io/autoupdate parameter value to false , as shown in the following example: apiVersion: authorization.openshift.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "false" # ... To update the role binding by using a single command: USD oc patch clusterrolebinding.rbac self-provisioners -p '{ "metadata": { "annotations": { "rbac.authorization.kubernetes.io/autoupdate": "false" } } }' Log in as an authenticated user and verify that it can no longer self-provision a project: USD oc new-project test Example output Error from server (Forbidden): You may not request a new project via this API. Consider customizing this project request message to provide more helpful instructions specific to your organization. 2.3.4. Customizing the project request message When a developer or a service account that is unable to self-provision projects makes a project creation request using the web console or CLI, the following error message is returned by default: You may not request a new project via this API. Cluster administrators can customize this message. Consider updating it to provide further instructions on how to request a new project specific to your organization. For example: To request a project, contact your system administrator at [email protected] . To request a new project, fill out the project request form located at https://internal.example.com/openshift-project-request . To customize the project request message: Procedure Edit the project configuration resource using the web console or CLI. Using the web console: Navigate to the Administration Cluster Settings page. Click Configuration to view all configuration resources. Find the entry for Project and click Edit YAML . Using the CLI: Log in as a user with cluster-admin privileges. Edit the project.config.openshift.io/cluster resource: USD oc edit project.config.openshift.io/cluster Update the spec section to include the projectRequestMessage parameter and set the value to your custom message: Project configuration resource with custom project request message apiVersion: config.openshift.io/v1 kind: Project metadata: # ... spec: projectRequestMessage: <message_string> # ... For example: apiVersion: config.openshift.io/v1 kind: Project metadata: # ... spec: projectRequestMessage: To request a project, contact your system administrator at [email protected]. # ... After you save your changes, attempt to create a new project as a developer or service account that is unable to self-provision projects to verify that your changes were successfully applied.
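As a hedged follow-up to the verification step above, the sketch below shows one way to confirm the custom request message from the CLI and to restore self-provisioning later if needed; the user name developer is a placeholder, not a value from this guide.

# Check the message as a user who can no longer self-provision projects
# ("developer" is a placeholder user name)
oc new-project test --as=developer \
  --as-group=system:authenticated --as-group=system:authenticated:oauth

# One way to restore self-provisioning later is to re-add the cluster role
# to the bootstrap group
oc adm policy add-cluster-role-to-group self-provisioner system:authenticated:oauth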
[ "oc new-project <project_name> --description=\"<description>\" --display-name=\"<display_name>\"", "oc new-project hello-openshift --description=\"This is an example project\" --display-name=\"Hello OpenShift\"", "oc get projects", "oc project <project_name>", "apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: customization: projectAccess: availableClusterRoles: - admin - edit - view", "oc project <project_name> 1", "oc status", "oc delete project <project_name> 1", "oc new-project <project> --as=<user> --as-group=system:authenticated --as-group=system:authenticated:oauth", "oc adm create-bootstrap-project-template -o yaml > template.yaml", "oc create -f template.yaml -n openshift-config", "oc edit project.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: <template_name>", "oc describe clusterrolebinding.rbac self-provisioners", "Name: self-provisioners Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate=true Role: Kind: ClusterRole Name: self-provisioner Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated:oauth", "oc patch clusterrolebinding.rbac self-provisioners -p '{\"subjects\": null}'", "oc adm policy remove-cluster-role-from-group self-provisioner system:authenticated:oauth", "oc edit clusterrolebinding.rbac self-provisioners", "apiVersion: authorization.openshift.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: \"false\"", "oc patch clusterrolebinding.rbac self-provisioners -p '{ \"metadata\": { \"annotations\": { \"rbac.authorization.kubernetes.io/autoupdate\": \"false\" } } }'", "oc new-project test", "Error from server (Forbidden): You may not request a new project via this API.", "You may not request a new project via this API.", "oc edit project.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestMessage: <message_string>", "apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestMessage: To request a project, contact your system administrator at [email protected]." ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/building_applications/projects
Chapter 39. Migrating from an LDAP Directory to IdM
Chapter 39. Migrating from an LDAP Directory to IdM As an administrator, you previously deployed an LDAP server for authentication and identity lookups and now you want to migrate the back end to Identity Management. You want to use the IdM migration tool to transfer user accounts, including passwords, and groups, without losing data. Additionally, you want to avoid expensive configuration updates on the clients. The migration process described here assumes a simple deployment scenario with one name space in LDAP and one in IdM. For more complex environments, such as multiple name spaces or custom schema, contact the Red Hat support services. 39.1. An Overview of an LDAP to IdM Migration The actual migration part of moving from an LDAP server to Identity Management - the process of moving the data from one server to the other - is fairly straightforward. The process is simple: move data, move passwords, and move clients. The most expensive part of the migration is deciding how clients are going to be configured to use Identity Management. For each client in the infrastructure, you need to decide what services (such as Kerberos and SSSD) are being used and what services can be used in the final IdM deployment. A secondary, but significant, consideration is planning how to migrate passwords. Identity Management requires Kerberos hashes for every user account in addition to passwords. Some of the considerations and migration paths for passwords are covered in Section 39.1.2, "Planning Password Migration" . 39.1.1. Planning the Client Configuration Identity Management can support a number of different client configurations, with varying degrees of functionality, flexibility, and security. Decide which configuration is best for each individual client based on its operating system, functional area (such as development machines, production servers, or user laptops), and your IT maintenance priorities. Important The different client configurations are not mutually exclusive . Most environments will have a mix of different ways that clients use to connect to the IdM domain. Administrators must decide which scenario is best for each individual client. 39.1.1.1. Initial Client Configuration (Pre-Migration) Before deciding where you want to go with the client configuration in Identity Management, first establish where you are before the migration. The initial state for almost all LDAP deployments that will be migrated is that there is an LDAP service providing identity and authentication services. Figure 39.1. Basic LDAP Directory and Client Configuration Linux and Unix clients use PAM_LDAP and NSS_LDAP libraries to connect directly to the LDAP services. These libraries allow clients to retrieve user information from the LDAP directory as if the data were stored in /etc/passwd or /etc/shadow . (In real life, the infrastructure may be more complex if a client uses LDAP for identity lookups and Kerberos for authentication or other configurations.) There are structural differences between an LDAP directory and an IdM server, particularly in schema support and the structure of the directory tree. (For more background on those differences, see Section 1.1.2, "Contrasting Identity Management with a Standard LDAP Directory" .) While those differences may impact data (especially with the directory tree, which affects entry names), they have little impact on the client configuration, and therefore little impact on migrating clients to Identity Management. 39.1.1.2.
Recommended Configuration for Red Hat Enterprise Linux Clients Red Hat Enterprise Linux has a service called the System Security Services Daemon (SSSD). SSSD uses special PAM and NSS libraries ( pam_sss and nss_sss , respectively) which allow SSSD to be integrated very closely with Identity Management and leverage the full authentication and identity features in Identity Management. SSSD has a number of useful features, like caching identity information so that users can log in even if the connection is lost to the central server; these are described in the System-Level Authentication Guide . Unlike generic LDAP directory services (using pam_ldap and nss_ldap ), SSSD establishes relationships between identity and authentication information by defining domains . A domain in SSSD defines four back end functions: authentication, identity lookups, access, and password changes. The SSSD domain is then configured to use a provider to supply the information for any one (or all) of those four functions. An identity provider is always required in the domain configuration. The other three providers are optional; if an authentication, access, or password provider is not defined, then the identity provider is used for that function. SSSD can use Identity Management for all of its back end functions. This is the ideal configuration because it provides the full range of Identity Management functionality, unlike generic LDAP identity providers or Kerberos authentication. For example, during daily operation, SSSD enforces host-based access control rules and security features in Identity Management. Note During the migration process from an LDAP directory to Identity Management, SSSD can seamlessly migrate user passwords without additional user interaction. Figure 39.2. Clients and SSSD with an IdM Back End The ipa-client-install script automatically configured SSSD to use IdM for all four of its back end services, so Red Hat Enterprise Linux clients are set up with the recommended configuration by default. Note This client configuration is only supported for Red Hat Enterprise Linux 6.1 and later and Red Hat Enterprise Linux 5.7 later, which support the latest versions of SSSD and ipa-client . Older versions of Red Hat Enterprise Linux can be configured as described in Section 39.1.1.3, "Alternative Supported Configuration" . 39.1.1.3. Alternative Supported Configuration Unix and Linux systems such as Mac, Solaris, HP-UX, AIX, and Scientific Linux support all of the services that IdM manages but do not use SSSD. Likewise, older Red Hat Enterprise Linux versions (6.1 and 5.6) support SSSD but have an older version, which does not support IdM as an identity provider. When it is not possible to use a modern version of SSSD on a system, then clients can be configured to connect to the IdM server as if it were an LDAP directory service for identity lookups (using nss_ldap ) and to IdM as if it were a regular Kerberos KDC (using pam_krb5 ). Figure 39.3. Clients and IdM with LDAP and Kerberos If a Red Hat Enterprise Linux client is using an older version of SSSD, SSSD can still be configured to use the IdM server as its identity provider and its Kerberos authentication domain; this is described in the SSSD configuration section of the System-Level Authentication Guide . Any IdM domain client can be configured to use nss_ldap and pam_krb5 to connect to the IdM server. 
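Whichever configuration you choose, a Red Hat Enterprise Linux client normally reaches the recommended SSSD setup through the ipa-client-install script mentioned above. The following sketch shows a typical enrollment; the domain, server, and realm values are placeholder assumptions, and the exact options depend on your environment.

# Install the IdM client packages
yum install ipa-client

# Enroll the client; the script configures SSSD to use IdM for
# identity, authentication, access control, and password changes
ipa-client-install \
  --domain=example.com \
  --server=ipaserver.example.com \
  --realm=EXAMPLE.COM \
  --mkhomedir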
For some maintenance situations and IT structures, a scenario that fits the lowest common denominator may be required, using LDAP for both identity and authentication ( nss_ldap and pam_ldap ). However, it is generally best practice to use the most secure configuration possible for a client. This means SSSD or LDAP for identities and Kerberos for authentication. 39.1.2. Planning Password Migration Probably the most visible issue that can impact LDAP-to-Identity Management migration is migrating user passwords. Identity Management (by default) uses Kerberos for authentication and requires that each user has Kerberos hashes stored in the Identity Management Directory Server in addition to the standard user passwords. To generate these hashes, the user password needs to be available to the IdM server in clear text. When you create a user, the password is available in clear text before it is hashed and stored in Identity Management. However, when the user is migrated from an LDAP directory, the associated user password is already hashed, so the corresponding Kerberos key cannot be generated. Important Users cannot authenticate to the IdM domain or access IdM resources until they have Kerberos hashes. If a user does not have a Kerberos hash [6] , that user cannot log into the IdM domain even if he has a user account. There are three options for migrating passwords: forcing a password change, using a web page, and using SSSD. Migrating users from an existing system provides a smoother transition but also requires parallel management of LDAP directory and IdM during the migration and transition process. If you do not preserve passwords, the migration can be performed more quickly but it requires more manual work by administrators and users. 39.1.2.1. Method 1: Using Temporary Passwords and Requiring a Change When passwords are changed in Identity Management, they will be created with the appropriate Kerberos hashes. So one alternative for administrators is to force users to change their passwords by resetting all user passwords when user accounts are migrated. The new users are assigned a temporary password which they change at the first login. No passwords are migrated. For details, see Section 22.1.1, "Changing and Resetting User Passwords" . 39.1.2.2. Method 2: Using the Migration Web Page When it is running in migration mode, Identity Management has a special web page in its web UI that will capture a cleartext password and create the appropriate Kerberos hash. Administrators could tell users to authenticate once to this web page, which would properly update their user accounts with their password and corresponding Kerberos hash, without requiring password changes. 39.1.2.3. Method 3: Using SSSD (Recommended) SSSD can work with IdM to mitigate the user impact on migrating by generating the required user keys. For deployments with a lot of users or where users should not be burdened with password changes, this is the best scenario. A user tries to log into a machine with SSSD. SSSD attempts to perform Kerberos authentication against the IdM server. Even though the user exists in the system, the authentication will fail with the error key type is not supported because the Kerberos hashes do not yet exist. SSSD then performs a plain text LDAP bind over a secure connection. IdM intercepts this bind request. If the user has a Kerberos principal but no Kerberos hashes, then the IdM identity provider generates the hashes and stores them in the user entry. 
If authentication is successful, SSSD disconnects from IdM and tries Kerberos authentication again. This time, the request succeeds because the hash exists in the entry. That entire process is entirely transparent to the user; as far as users know, they simply log into a client service and it works as normal. 39.1.2.4. Migrating Cleartext LDAP Passwords Although in most deployments LDAP passwords are stored encrypted, there may be some users or some environments that use cleartext passwords for user entries. When users are migrated from the LDAP server to the IdM server, their cleartext passwords are not migrated over. Identity Management does not allow cleartext passwords. Instead, a Kerberos principal is created for the user, the keytab is set to true, and the password is set as expired. This means that Identity Management requires the user to reset the password at the login. Note If passwords are hashed, the password is successfully migrated through SSSD and the migration web page, as in Section 39.1.2.2, "Method 2: Using the Migration Web Page" and Section 39.1.2.3, "Method 3: Using SSSD (Recommended)" . 39.1.2.5. Automatically Resetting Passwords That Do Not Meet Requirements If user passwords in the original directory do not meet the password policies defined in Identity Management, then the passwords must be reset after migration. Password resets are done automatically the first time the users attempts to kinit into the IdM domain. 39.1.3. Migration Considerations and Requirements As you are planning a migration from an LDAP server to Identity Management, make sure that your LDAP environment is able to work with the Identity Management migration script. 39.1.3.1. LDAP Servers Supported for Migration The migration process from an LDAP server to Identity Management uses a special script, ipa migrate-ds , to perform the migration. This script has certain expectations about the structure of the LDAP directory and LDAP entries in order to work. Migration is supported only for LDAPv3-compliant directory services, which include several common directories: Sun ONE Directory Server Apache Directory Server OpenLDAP Migration from an LDAP server to Identity Management has been tested with Red Hat Directory Server and OpenLDAP. Note Migration using the migration script is not supported for Microsoft Active Directory because it is not an LDAPv3-compliant directory. For assistance with migrating from Active Directory, contact Red Hat Professional Services. 39.1.3.2. Migration Environment Requirements There are many different possible configuration scenarios for both Red Hat Directory Server and Identity Management, and any of those scenarios may affect the migration process. For the example migration procedures in this chapter, these are the assumptions about the environment: A single LDAP directory domain is being migrated to one IdM realm. No consolidation is involved. User passwords are stored as a hash in the LDAP directory. For a list of supported hashes, see the passwordStorageScheme attribute in the Table 19.2. Password Policy-related Attributes in the Red Hat Directory Server 10 Administration Guide . The LDAP directory instance is both the identity store and the authentication method. Client machines are configured to use pam_ldap or nss_ldap to connect to the LDAP server. Entries use only the standard LDAP schema. Entries that contain custom object classes or attributes are not migrated to Identity Management. 39.1.3.3. 
Migration - IdM System Requirements With a moderately-sized directory (around 10,000 users and 10 groups), it is necessary to have a powerful enough target system (the IdM system) to allow the migration to proceed. The minimum requirements for a migration are: 4 cores 4GB of RAM 30GB of disk space A SASL buffer size of 2MB (default for an IdM server) In case of migration errors, increase the buffer size: Set the nsslapd-sasl-max-buffer-size value in bytes. 39.1.3.4. Considerations about Sudo Rules If you are using sudo with LDAP already, you must manually migrate the sudo rules stored in LDAP. Red Hat recommends to re-create netgroups in IdM as hostgroups. IdM presents hostgroups automatically as traditional netgroups for sudo configurations which do not use the SSSD sudo provider. 39.1.3.5. Migration Tools Identity Management uses a specific command, ipa migrate-ds , to drive the migration process so that LDAP directory data are properly formatted and imported cleanly into the IdM server. When using ipa migrate-ds , the remote system user, specified by the --bind-dn option, needs to have read access to the userPassword attribute, otherwise passwords will not be migrated. The Identity Management server must be configured to run in migration mode, and then the migration script can be used. For details, see Section 39.3, "Migrating an LDAP Server to Identity Management" . 39.1.3.6. Improving Migration Performance An LDAP migration is essentially a specialized import operation for the 389 Directory Server instance within the IdM server. Tuning the 389 Directory Server instance for better import operation performance can help improve the overall migration performance. There are two parameters that directly affect import performance: The nsslapd-cachememsize attribute, which defines the size allowed for the entry cache. This is a buffer, that is automatically set to 80% of the total cache memory size. For large import operations, this parameter (and possibly the memory cache itself) can be increased to more efficiently handle a large number of entries or entries with larger attributes. For details how to modify the attribute using the ldapmodify , see Setting the Entry Cache Size in the Red Hat Directory Server 10 Performance Tuning Guide . The system ulimit configuration option sets the maximum number of allowed processes for a system user. Processing a large database can exceed the limit. If this happens, increase the value: For further information, see Red Hat Directory Server Performance Tuning Guide at https://access.redhat.com/documentation/en-us/red_hat_directory_server/11/html-single/performance_tuning_guide/index . 39.1.3.7. Migration Sequence There are four major steps when migrating to Identity Management, but the order varies slightly depending on whether you want to migrate the server first or the clients first. With a client-based migration, SSSD is used to change the client configuration while an IdM server is configured: Deploy SSSD. Reconfigure clients to connect to the current LDAP server and then fail over to IdM. Install the IdM server. Migrate the user data using the IdM ipa migrate-ds script. This exports the data from the LDAP directory, formats for the IdM schema, and then imports it into IdM. Take the LDAP server offline and allow clients to fail over to Identity Management transparently. With a server migration, the LDAP to Identity Management migration comes first: Install the IdM server. Migrate the user data using the IdM ipa migrate-ds script. 
This exports the data from the LDAP directory, formats it for the IdM schema, and then imports it into IdM. Optional. Deploy SSSD. Reconfigure clients to connect to IdM. It is not possible to simply replace the LDAP server. The IdM directory tree - and therefore user entry DNs - is different than the directory tree. While it is required that clients be reconfigured, clients do not need to be reconfigured immediately. Updated clients can point to the IdM server while other clients point to the old LDAP directory, allowing a reasonable testing and transition phase after the data are migrated. Note Do not run both an LDAP directory service and the IdM server for very long in parallel. This introduces the risk of user data being inconsistent between the two services. Both processes provide a general migration procedure, but it may not work in every environment. Set up a test LDAP environment and test the migration process before attempting to migrate the real LDAP environment. [6] It is possible to use LDAP authentication in Identity Management instead of Kerberos authentication, which means that Kerberos hashes are not required for users. However, this limits the capabilities of Identity Management and is not recommended.
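To make the server-first sequence more concrete, a minimal migration run might look like the sketch below. The LDAP URL, bind DN, and container DNs are illustrative assumptions and must be adapted to your directory layout; see Section 39.3, "Migrating an LDAP Server to Identity Management" for the supported procedure.

# Enable migration mode so that Kerberos keys can be generated
# for migrated users when they first authenticate
ipa config-mod --enable-migration=TRUE

# Migrate users and groups from the existing LDAP server; the bind DN
# must be able to read the userPassword attribute
ipa migrate-ds \
  --bind-dn="cn=Directory Manager" \
  --user-container="ou=People" \
  --group-container="ou=Groups" \
  ldap://ldap.example.com:389

# Once all users have generated their Kerberos keys, disable migration mode
ipa config-mod --enable-migration=FALSE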
[ "https://ipaserver.example.com/ipa/migration", "[jsmith@server ~]USD kinit Password for [email protected]: Password expired. You must change it now. Enter new password: Enter it again:", "ldapmodify -x -D 'cn=directory manager' -w password -h ipaserver.example.com -p 389 dn: cn=config changetype: modify replace: nsslapd-sasl-max-buffer-size nsslapd-sasl-max-buffer-size: 4194304 modifying entry \"cn=config\"", "ulimit -u 4096" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/Migrating_from_a_Directory_Server_to_IPA
Preface
Preface With RHTAP, you embark on a journey that transcends traditional security measures, integrating cutting-edge solutions and a DevSecOps CI/CD framework from inception to deployment. This proactive strategy accelerates developer onboarding, speeds up delivery, and embeds security from the very beginning.
null
https://docs.redhat.com/en/documentation/red_hat_trusted_application_pipeline/1.4/html/getting_started_with_red_hat_trusted_application_pipeline/pr01
2.3. Exclusive Activation of a Volume Group in a Cluster
2.3. Exclusive Activation of a Volume Group in a Cluster The following procedure configures the volume group in a way that will ensure that only the cluster is capable of activating the volume group, and that the volume group will not be activated outside of the cluster on startup. If the volume group is activated by a system outside of the cluster, there is a risk of corrupting the volume group's metadata. This procedure modifies the volume_list entry in the /etc/lvm/lvm.conf configuration file. Volume groups listed in the volume_list entry are allowed to automatically activate on the local node outside of the cluster manager's control. Volume groups related to the node's local root and home directories should be included in this list. All volume groups managed by the cluster manager must be excluded from the volume_list entry. Note that this procedure does not require the use of clvmd . Perform the following procedure on each node in the cluster. Execute the following command to ensure that locking_type is set to 1 and that use_lvmetad is set to 0 in the /etc/lvm/lvm.conf file. This command also disables and stops any lvmetad processes immediately. Determine which volume groups are currently configured on your local storage with the following command. This will output a list of the currently-configured volume groups. If you have space allocated in separate volume groups for root and for your home directory on this node, you will see those volumes in the output, as in this example. Add the volume groups other than my_vg (the volume group you have just defined for the cluster) as entries to volume_list in the /etc/lvm/lvm.conf configuration file. For example, if you have space allocated in separate volume groups for root and for your home directory, you would uncomment the volume_list line of the lvm.conf file and add these volume groups as entries to volume_list as follows. Note that the volume group you have just defined for the cluster ( my_vg in this example) is not in this list. Note If no local volume groups are present on a node to be activated outside of the cluster manager, you must still initialize the volume_list entry as volume_list = [] . Rebuild the initramfs boot image to guarantee that the boot image will not try to activate a volume group controlled by the cluster. Update the initramfs device with the following command. This command may take up to a minute to complete. Reboot the node. Note If you have installed a new Linux kernel since booting the node on which you created the boot image, the new initrd image will be for the kernel that was running when you created it and not for the new kernel that is running when you reboot the node. You can ensure that the correct initrd device is in use by running the uname -r command before and after the reboot to determine the kernel release that is running. If the releases are not the same, update the initrd file after rebooting with the new kernel and then reboot the node. When the node has rebooted, check whether the cluster services have started up again on that node by executing the pcs cluster status command on that node. If this yields the message Error: cluster is not currently running on this node then enter the following command. Alternately, you can wait until you have rebooted each node in the cluster and start cluster services on each of the nodes with the following command.
[ "lvmconf --enable-halvm --services --startstopservices", "vgs --noheadings -o vg_name my_vg rhel_home rhel_root", "volume_list = [ \"rhel_root\", \"rhel_home\" ]", "dracut -H -f /boot/initramfs-USD(uname -r).img USD(uname -r)", "pcs cluster start", "pcs cluster start --all" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_administration/s1-exclusiveactive-haaa
Chapter 1. Key features
Chapter 1. Key features Streams for Apache Kafka simplifies the process of running Apache Kafka within an OpenShift cluster. This guide serves as an introduction to Streams for Apache Kafka, outlining key Kafka concepts that are central to operating Streams for Apache Kafka. It briefly explains Kafka's components, their purposes, and configuration points, including security and monitoring options. Streams for Apache Kafka provides the necessary files to deploy and manage a Kafka cluster, along with example configuration files for monitoring your deployment. 1.1. Kafka capabilities Kafka's data stream-processing capabilities and component architecture offer: High-throughput, low-latency data sharing for microservices and other applications Guaranteed message ordering Message rewind/replay from data storage to reconstruct application state Message compaction to remove outdated records in a key-value log Horizontal scalability within a cluster Data replication to enhance fault tolerance High-volume data retention for immediate access 1.2. Kafka use cases Kafka's capabilities make it ideal for: Event-driven architectures Event sourcing to log application state changes Message brokering Website activity tracking Operational monitoring through metrics Log collection and aggregation Commit logs for distributed systems Stream processing for real-time data responses 1.3. How Streams for Apache Kafka supports Kafka Streams for Apache Kafka provides container images and operators for running Kafka on OpenShift. These operators are designed with specialized operational knowledge to efficiently manage Kafka on OpenShift. Streams for Apache Kafka operators simplify: Deploying and running Kafka clusters Deploying and managing Kafka components Configuring Kafka access Securing Kafka access Upgrading Kafka Managing brokers Creating and managing topics Creating and managing users For detailed information and instructions on using operators to perform these operations, see the guide for Deploying and Managing Streams for Apache Kafka .
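As a small illustration of the operator-driven approach, creating a topic is typically a matter of applying a KafkaTopic custom resource and letting the Topic Operator reconcile it. The cluster name my-cluster and the topic settings below are placeholders, and the API version shown may differ between Streams for Apache Kafka releases.

# Create a topic declaratively; the Topic Operator creates it in Kafka
oc apply -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster   # must match the name of the Kafka resource
spec:
  partitions: 3
  replicas: 3
EOF

# Check that the operator has reconciled the topic
oc get kafkatopic my-topic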
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_on_openshift_overview/key-features_str
Automating system administration by using RHEL System Roles in RHEL 7.9
Automating system administration by using RHEL System Roles in RHEL 7.9 Red Hat Enterprise Linux 7 Consistent and repeatable configuration of RHEL deployments across multiple hosts with Red Hat Ansible Automation Platform playbooks Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/automating_system_administration_by_using_rhel_system_roles_in_rhel_7.9/index
Chapter 2. Monitoring Camel K integrations
Chapter 2. Monitoring Camel K integrations Red Hat Integration - Camel K monitoring is based on the OpenShift monitoring system . This chapter explains how to use the available options for monitoring Red Hat Integration - Camel K integrations at runtime. You can use the Prometheus Operator that is already deployed as part of OpenShift Monitoring to monitor your own applications. Section 2.1, "Enabling user workload monitoring in OpenShift" Section 2.2, "Configuring Camel K integration metrics" Section 2.3, "Adding custom Camel K integration metrics" 2.1. Enabling user workload monitoring in OpenShift OpenShift 4.3 or higher includes an embedded Prometheus Operator already deployed as part of OpenShift Monitoring. This section explains how to enable monitoring of your own application services in OpenShift Monitoring. This option avoids the additional overhead of installing and managing a separate Prometheus instance. Prerequisites You must have cluster administrator access to an OpenShift cluster on which the Camel K Operator is installed. See Installing Camel K . Procedure Enter the following command to check if the cluster-monitoring-config ConfigMap object exists in the openshift-monitoring project : USD oc -n openshift-monitoring get configmap cluster-monitoring-config Create the cluster-monitoring-config ConfigMap if this does not already exist: USD oc -n openshift-monitoring create configmap cluster-monitoring-config Edit the cluster-monitoring-config ConfigMap: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Under data:config.yaml: , set enableUserWorkload to true : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: true Additional resources Enabling monitoring for user-defined projects 2.2. Configuring Camel K integration metrics You can configure monitoring of Camel K integrations automatically using the Camel K Prometheus trait at runtime. This automates the configuration of dependencies and integration Pods to expose a metrics endpoint, which is then discovered and displayed by Prometheus. The Camel Quarkus MicroProfile Metrics extension automatically collects and exposes the default Camel K metrics in the OpenMetrics format. Prerequisites You must have already enabled monitoring of your own services in OpenShift. See Enabling user workload monitoring in OpenShift . Procedure Enter the following command to run your Camel K integration with the Prometheus trait enabled: kamel run myIntegration.java -t prometheus.enabled=true Alternatively, you can enable the Prometheus trait globally once, by updating the integration platform as follows: USD oc patch itp camel-k --type=merge -p '{"spec":{"traits":{"prometheus":{"configuration":{"enabled":true}}}}}' View monitoring of Camel K integration metrics in Prometheus. For example, for embedded Prometheus, select Monitoring > Metrics in the OpenShift administrator or developer web console. Enter the Camel K metric that you want to view. For example, in the Administrator console, under Insert Metric at Cursor , enter application_camel_context_uptime_seconds , and click Run Queries . Click Add Query to view additional metrics. Default Camel Metrics provided by PROMETHEUS TRAIT Some Camel specific metrics are available out of the box. 
Name Type Description application_camel_message_history_processing timer Sample of performance of each node in the route when message history is enabled application_camel_route_count gauge Number of routes added application_camel_route_running_count gauge Number of routes running application_camel_[route or context]_exchanges_inflight_count gauge Route inflight messages for a CamelContext or a route application_camel_[route or context]_exchanges_total counter Total number of processed exchanges for a CamelContext or a route application_camel_[route or context]_exchanges_completed_total counter Number of successfully completed exchanges for a CamelContext or a route application_camel_[route or context]_exchanges_failed_total counter Number of failed exchanges for a CamelContext or a route application_camel_[route or context]_failuresHandled_total counter Number of failures handled for a CamelContext or a route application_camel_[route or context]_externalRedeliveries_total counter Number of externally initiated redeliveries (such as from JMS broker) for a CamelContext or a route application_camel_context_status gauge The status of the Camel Context application_camel_context_uptime_seconds gauge The amount of time since the Camel Context was started application_camel_[route or exchange] processing [rate_per_second or one_min_rate_per_second or five_min_rate_per_second or fifteen_min_rate_per_second or min_seconds or max_seconds or mean_second or stddev_seconds] gauge Exchange message or route processing with multiple options application_camel_[route or exchange]_processing_seconds summary Exchange message or route processing metric Additional resources Prometheus Trait Camel Quarkus MicroProfile Metrics 2.3. Adding custom Camel K integration metrics You can add custom metrics to your Camel K integrations by using the Camel MicroProfile Metrics component and annotations in your Java code. These custom metrics will then be automatically discovered and displayed by Prometheus. This section shows examples of adding Camel MicroProfile Metrics annotations to Camel K integration and service implementation code. Prerequisites You must have already enabled monitoring of your own services in OpenShift. See Enabling user workload monitoring in OpenShift . Procedure Register the custom metrics in your Camel integration code using Camel MicroProfile Metrics component annotations.
The following example shows a Metrics.java integration: // camel-k: language=java trait=prometheus.enabled=true dependency=mvn:org.my/app:1.0 1 import org.apache.camel.Exchange; import org.apache.camel.LoggingLevel; import org.apache.camel.builder.RouteBuilder; import org.apache.camel.component.microprofile.metrics.MicroProfileMetricsConstants; import javax.enterprise.context.ApplicationScoped; @ApplicationScoped public class Metrics extends RouteBuilder { @Override public void configure() { onException() .handled(true) .maximumRedeliveries(2) .logStackTrace(false) .logExhausted(false) .log(LoggingLevel.ERROR, "Failed processing USD{body}") // Register the 'redelivery' meter .to("microprofile-metrics:meter:redelivery?mark=2") // Register the 'error' meter .to("microprofile-metrics:meter:error"); 2 from("timer:stream?period=1000") .routeId("unreliable-service") .setBody(header(Exchange.TIMER_COUNTER).prepend("event #")) .log("Processing USD{body}...") // Register the 'generated' meter .to("microprofile-metrics:meter:generated") 3 // Register the 'attempt' meter via @Metered in Service.java .bean("service") 4 .filter(header(Exchange.REDELIVERED)) .log(LoggingLevel.WARN, "Processed USD{body} after USD{header.CamelRedeliveryCounter} retries") .setHeader(MicroProfileMetricsConstants.HEADER_METER_MARK, header(Exchange.REDELIVERY_COUNTER)) // Register the 'redelivery' meter .to("microprofile-metrics:meter:redelivery") 5 .end() .log("Successfully processed USD{body}") // Register the 'success' meter .to("microprofile-metrics:meter:success"); 6 } } 1 Uses the Camel K modeline to automatically configure the Prometheus trait and Maven dependencies 2 error : Metric for the number of errors corresponding to the number of events that have not been processed 3 generated : Metric for the number of events to be processed 4 attempt : Metric for the number of calls made to the service bean to process incoming events 5 redelivery : Metric for the number of retries made to process the event 6 success : Metric for the number of events successfully processed Add Camel MicroProfile Metrics annotations to any implementation files as needed. The following example shows the service bean called by the Camel K integration, which generates random failures: package com.redhat.integration; import java.util.Random; import org.apache.camel.Exchange; import org.apache.camel.RuntimeExchangeException; import org.eclipse.microprofile.metrics.Meter; import org.eclipse.microprofile.metrics.annotation.Metered; import org.eclipse.microprofile.metrics.annotation.Metric; import javax.inject.Named; import javax.enterprise.context.ApplicationScoped; @Named("service") @ApplicationScoped @io.quarkus.arc.Unremovable public class Service { //Register the attempt meter @Metered(absolute = true) public void attempt(Exchange exchange) { 1 Random rand = new Random(); if (rand.nextDouble() < 0.5) { throw new RuntimeExchangeException("Random failure", exchange); 2 } } } 1 The @Metered MicroProfile Metrics annotation declares the meter and the name is automatically generated based on the metrics method name, in this case, attempt . 2 This example fails randomly to help generate errors for metrics. Follow the steps in Configuring Camel K integration metrics to run the integration and view the custom Camel K metrics in Prometheus. In this case, the example already uses the Camel K modeline in Metrics.java to automatically configure Prometheus and the required Maven dependencies for Service.java . 
Additional resources Camel MicroProfile Metrics component Camel Quarkus MicroProfile Metrics Extension
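If you prefer the command line to the web console, the same metrics can be queried through the cluster monitoring stack. The following sketch assumes that your user is allowed to query the Thanos Querier route in the openshift-monitoring project; the route name, the TLS options, and the exact exposed names of custom meters may differ in your environment.

# Look up the route used by OpenShift monitoring for queries
HOST=$(oc get route thanos-querier -n openshift-monitoring -o jsonpath='{.spec.host}')

# Query one of the default Camel K metrics using your OpenShift token
curl -sk -H "Authorization: Bearer $(oc whoami -t)" \
  "https://${HOST}/api/v1/query" \
  --data-urlencode 'query=application_camel_context_uptime_seconds'

# Custom meters registered in Metrics.java (for example 'success' or
# 'redelivery') can be queried the same way once they are exposed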
[ "oc -n openshift-monitoring get configmap cluster-monitoring-config", "oc -n openshift-monitoring create configmap cluster-monitoring-config", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: true", "kamel run myIntegration.java -t prometheus.enabled=true", "oc patch itp camel-k --type=merge -p '{\"spec\":{\"traits\":{\"prometheus\":{\"configuration\":{\"enabled\":true}}}}}'", "// camel-k: language=java trait=prometheus.enabled=true dependency=mvn:org.my/app:1.0 1 import org.apache.camel.Exchange; import org.apache.camel.LoggingLevel; import org.apache.camel.builder.RouteBuilder; import org.apache.camel.component.microprofile.metrics.MicroProfileMetricsConstants; import javax.enterprise.context.ApplicationScoped; @ApplicationScoped public class Metrics extends RouteBuilder { @Override public void configure() { onException() .handled(true) .maximumRedeliveries(2) .logStackTrace(false) .logExhausted(false) .log(LoggingLevel.ERROR, \"Failed processing USD{body}\") // Register the 'redelivery' meter .to(\"microprofile-metrics:meter:redelivery?mark=2\") // Register the 'error' meter .to(\"microprofile-metrics:meter:error\"); 2 from(\"timer:stream?period=1000\") .routeId(\"unreliable-service\") .setBody(header(Exchange.TIMER_COUNTER).prepend(\"event #\")) .log(\"Processing USD{body}...\") // Register the 'generated' meter .to(\"microprofile-metrics:meter:generated\") 3 // Register the 'attempt' meter via @Metered in Service.java .bean(\"service\") 4 .filter(header(Exchange.REDELIVERED)) .log(LoggingLevel.WARN, \"Processed USD{body} after USD{header.CamelRedeliveryCounter} retries\") .setHeader(MicroProfileMetricsConstants.HEADER_METER_MARK, header(Exchange.REDELIVERY_COUNTER)) // Register the 'redelivery' meter .to(\"microprofile-metrics:meter:redelivery\") 5 .end() .log(\"Successfully processed USD{body}\") // Register the 'success' meter .to(\"microprofile-metrics:meter:success\"); 6 } }", "package com.redhat.integration; import java.util.Random; import org.apache.camel.Exchange; import org.apache.camel.RuntimeExchangeException; import org.eclipse.microprofile.metrics.Meter; import org.eclipse.microprofile.metrics.annotation.Metered; import org.eclipse.microprofile.metrics.annotation.Metric; import javax.inject.Named; import javax.enterprise.context.ApplicationScoped; @Named(\"service\") @ApplicationScoped @io.quarkus.arc.Unremovable public class Service { //Register the attempt meter @Metered(absolute = true) public void attempt(Exchange exchange) { 1 Random rand = new Random(); if (rand.nextDouble() < 0.5) { throw new RuntimeExchangeException(\"Random failure\", exchange); 2 } } }" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/developing_and_managing_integrations_using_camel_k/monitoring-camel-k
Chapter 138. KafkaMirrorMaker2 schema reference
Chapter 138. KafkaMirrorMaker2 schema reference Property Property type Description spec KafkaMirrorMaker2Spec The specification of the Kafka MirrorMaker 2 cluster. status KafkaMirrorMaker2Status The status of the Kafka MirrorMaker 2 cluster.
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-KafkaMirrorMaker2-reference
11.2. Using JSON with Ruby Example
11.2. Using JSON with Ruby Example Prerequisites To use JavaScript Object Notation ( JSON ) with ruby to interact with Red Hat JBoss Data Grid's REST Interface, install the JSON Ruby library (see your platform's package manager or the Ruby documentation) and declare the requirement using the following code: Using JSON with Ruby The following code is an example of how to use JavaScript Object Notation ( JSON ) in conjunction with Ruby to send specific data, in this case the name and age of an individual, using the PUT function.
[ "require 'json'", "data = {:name => \"michael\", :age => 42 } http.put('/rest/Users/data/0', data.to_json, {\"Content-Type\" => \"application/json\"})" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/using_json_with_ruby_example
8.8. Activating and Deactivating a Snapshot
8.8. Activating and Deactivating a Snapshot Only activated snapshots are accessible. Check the Accessing Snapshot section for more details. Since each snapshot is a Red Hat Gluster Storage volume, it consumes some resources. Therefore, if snapshots are not needed, it is good practice to deactivate them and activate them only when required. To activate a snapshot, run the following command: where: snapname : Name of the snap to be activated. force : If some of the bricks of the snapshot volume are down, use the force option to start them. For example: To deactivate a snapshot, run the following command: where: snapname : Name of the snap to be deactivated. For example:
[ "gluster snapshot activate < snapname > [force]", "gluster snapshot activate snap1", "gluster snapshot deactivate < snapname >", "gluster snapshot deactivate snap1" ]
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/ch08s08
1.3. Creating the Kickstart File
1.3. Creating the Kickstart File The kickstart file is a simple text file, containing a list of items, each identified by a keyword. You can create it by editing a copy of the sample.ks file found in the RH-DOCS directory of the Red Hat Enterprise Linux Documentation CD, using the Kickstart Configurator application, or writing it from scratch. The Red Hat Enterprise Linux installation program also creates a sample kickstart file based on the options that you selected during installation. It is written to the file /root/anaconda-ks.cfg . You should be able to edit it with any text editor or word processor that can save files as ASCII text. First, be aware of the following issues when you are creating your kickstart file: Sections must be specified in order . Items within the sections do not have to be in a specific order unless otherwise specified. The section order is: Command section - Refer to Section 1.4, "Kickstart Options" for a list of kickstart options. You must include the required options. The %packages section - Refer to Section 1.5, "Package Selection" for details. The %pre and %post sections - These two sections can be in any order and are not required. Refer to Section 1.6, "Pre-installation Script" and Section 1.7, "Post-installation Script" for details. Items that are not required can be omitted. Omitting any required item results in the installation program prompting the user for an answer to the related item, just as the user would be prompted during a typical installation. Once the answer is given, the installation continues unattended (unless it finds another missing item). Lines starting with a pound (or hash) sign (#) are treated as comments and are ignored. For kickstart upgrades , the following items are required: Language Language support Installation method Device specification (if device is needed to perform the installation) Keyboard setup The upgrade keyword Boot loader configuration If any other items are specified for an upgrade, those items are ignored (note that this includes package selection).
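To make the required section ordering concrete, a skeletal kickstart file might be laid out as shown below. Every value is a placeholder for illustration only, the skeleton is deliberately incomplete (omitted required items are simply prompted for during installation), and exact directives and group names vary between Red Hat Enterprise Linux releases.

# Write a minimal skeleton that keeps the sections in the required order
cat > /tmp/sample-ks.cfg <<'EOF'
# 1. Command section
install
lang en_US.UTF-8
langsupport --default=en_US.UTF-8 en_US.UTF-8
keyboard us
rootpw changeme
bootloader --location=mbr

# 2. Package selection
%packages
@ Base

# 3. Optional pre- and post-installation scripts
%post
echo "kickstart post-install script ran" >> /root/ks-post.log
EOF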
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/kickstart_installations-creating_the_kickstart_file
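As a rough illustration of the section ordering described above, the following shell sketch writes a skeleton kickstart file with the command section first, followed by %packages and an optional %post section. The individual directives and package groups are placeholders only; substitute the options your installation actually requires.

# Illustrative skeleton only, not a complete, installable kickstart file.
cat > /root/my-ks.cfg <<'EOF'
# Command section (required options go here)
lang en_US
keyboard us
rootpw --iscrypted <crypted-password>

# Package selection
%packages
@ Base

# Optional post-installation script
%post
echo "kickstart post-install complete" > /root/ks-post.log
EOF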
Chapter 112. KafkaUser schema reference
Chapter 112. KafkaUser schema reference Property Property type Description spec KafkaUserSpec The specification of the user. status KafkaUserStatus The status of the Kafka User.
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-kafkauser-reference
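Since the table above lists only the top-level spec and status properties, a concrete manifest may help. The following is a minimal, hypothetical KafkaUser applied from the shell; the apiVersion, cluster label, and TLS authentication type are illustrative assumptions, so check the KafkaUserSpec schema and your installed CRD version before relying on them.

# Create a minimal KafkaUser (values are placeholders).
cat <<'EOF' | oc apply -f -
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls
EOF

# Inspect the status section once the User Operator has reconciled the resource.
oc get kafkauser my-user -o yaml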
Chapter 4. ImageStreamImage [image.openshift.io/v1]
Chapter 4. ImageStreamImage [image.openshift.io/v1] Description ImageStreamImage represents an Image that is retrieved by image name from an ImageStream. User interfaces and regular users can use this resource to access the metadata details of a tagged image in the image stream history for viewing, since Image resources are not directly accessible to end users. A not found error will be returned if no such image is referenced by a tag within the ImageStream. Images are created when spec tags are set on an image stream that represent an image in an external registry, when pushing to the integrated registry, or when tagging an existing image from one image stream to another. The name of an image stream image is in the form "<STREAM>@<DIGEST>", where the digest is the content-addressable identifier for the image (sha256:xxxxx...). You can use ImageStreamImages as the from.kind of an image stream spec tag to reference an image exactly. The only operation supported on the imagestreamimage endpoint is retrieving the image. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required image 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources image object Image is an immutable representation of a container image and metadata at a point in time. Images are named by taking a hash of their contents (metadata and content) and any change in format, content, or metadata results in a new name. The images resource is primarily for use by cluster administrators and integrations like the cluster image registry - end users instead access images via the imagestreamtags or imagestreamimages resources. While image metadata is stored in the API, any integration that implements the container image registry API must provide its own storage for the raw manifest data, image config, and layer contents. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 4.1.1. .image Description Image is an immutable representation of a container image and metadata at a point in time. Images are named by taking a hash of their contents (metadata and content) and any change in format, content, or metadata results in a new name. The images resource is primarily for use by cluster administrators and integrations like the cluster image registry - end users instead access images via the imagestreamtags or imagestreamimages resources. While image metadata is stored in the API, any integration that implements the container image registry API must provide its own storage for the raw manifest data, image config, and layer contents.
Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources dockerImageConfig string DockerImageConfig is a JSON blob that the runtime uses to set up the container. This is a part of manifest schema v2. Will not be set when the image represents a manifest list. dockerImageLayers array DockerImageLayers represents the layers in the image. May not be set if the image does not define that data or if the image represents a manifest list. dockerImageLayers[] object ImageLayer represents a single layer of the image. Some images may have multiple layers. Some may have none. dockerImageManifest string DockerImageManifest is the raw JSON of the manifest dockerImageManifestMediaType string DockerImageManifestMediaType specifies the mediaType of manifest. This is a part of manifest schema v2. dockerImageManifests array DockerImageManifests holds information about sub-manifests when the image represents a manifest list. When this field is present, no DockerImageLayers should be specified. dockerImageManifests[] object ImageManifest represents sub-manifests of a manifest list. The Digest field points to a regular Image object. dockerImageMetadata RawExtension DockerImageMetadata contains metadata about this image dockerImageMetadataVersion string DockerImageMetadataVersion conveys the version of the object, which if empty defaults to "1.0" dockerImageReference string DockerImageReference is the string that can be used to pull this image. dockerImageSignatures array (string) DockerImageSignatures provides the signatures as opaque blobs. This is a part of manifest schema v1. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata signatures array Signatures holds all signatures of the image. signatures[] object ImageSignature holds a signature of an image. It allows to verify image identity and possibly other claims as long as the signature is trusted. Based on this information it is possible to restrict runnable images to those matching cluster-wide policy. Mandatory fields should be parsed by clients doing image verification. The others are parsed from signature's content by the server. They serve just an informative purpose. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 4.1.2. .image.dockerImageLayers Description DockerImageLayers represents the layers in the image. May not be set if the image does not define that data or if the image represents a manifest list. Type array 4.1.3. .image.dockerImageLayers[] Description ImageLayer represents a single layer of the image. Some images may have multiple layers. Some may have none. 
Type object Required name size mediaType Property Type Description mediaType string MediaType of the referenced object. name string Name of the layer as defined by the underlying store. size integer Size of the layer in bytes as defined by the underlying store. 4.1.4. .image.dockerImageManifests Description DockerImageManifests holds information about sub-manifests when the image represents a manifest list. When this field is present, no DockerImageLayers should be specified. Type array 4.1.5. .image.dockerImageManifests[] Description ImageManifest represents sub-manifests of a manifest list. The Digest field points to a regular Image object. Type object Required digest mediaType manifestSize architecture os Property Type Description architecture string Architecture specifies the supported CPU architecture, for example amd64 or ppc64le . digest string Digest is the unique identifier for the manifest. It refers to an Image object. manifestSize integer ManifestSize represents the size of the raw object contents, in bytes. mediaType string MediaType defines the type of the manifest, possible values are application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v2+json or application/vnd.docker.distribution.manifest.v1+json. os string OS specifies the operating system, for example linux . variant string Variant is an optional field representing a variant of the CPU, for example v6 to specify a particular CPU variant of the ARM CPU. 4.1.6. .image.signatures Description Signatures holds all signatures of the image. Type array 4.1.7. .image.signatures[] Description ImageSignature holds a signature of an image. It allows to verify image identity and possibly other claims as long as the signature is trusted. Based on this information it is possible to restrict runnable images to those matching cluster-wide policy. Mandatory fields should be parsed by clients doing image verification. The others are parsed from signature's content by the server. They serve just an informative purpose. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required type content Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources conditions array Conditions represent the latest available observations of a signature's current state. conditions[] object SignatureCondition describes an image signature condition of particular kind at particular probe time. content string Required: An opaque binary string which is an image's signature. created Time If specified, it is the time of signature's creation. imageIdentity string A human readable string representing image's identity. It could be a product name and version, or an image pull spec (e.g. "registry.access.redhat.com/rhel7/rhel:7.2"). issuedBy object SignatureIssuer holds information about an issuer of signing certificate or key. issuedTo object SignatureSubject holds information about a person or entity who created the signature. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata signedClaims object (string) Contains claims from the signature. type string Required: Describes a type of stored blob. 4.1.8. .image.signatures[].conditions Description Conditions represent the latest available observations of a signature's current state. Type array 4.1.9. .image.signatures[].conditions[] Description SignatureCondition describes an image signature condition of particular kind at particular probe time. Type object Required type status Property Type Description lastProbeTime Time Last time the condition was checked. lastTransitionTime Time Last time the condition transitioned from one status to another. message string Human readable message indicating details about last transition. reason string (brief) reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of signature condition, Complete or Failed. 4.1.10. .image.signatures[].issuedBy Description SignatureIssuer holds information about an issuer of signing certificate or key. Type object Property Type Description commonName string Common name (e.g. openshift-signing-service). organization string Organization name. 4.1.11. .image.signatures[].issuedTo Description SignatureSubject holds information about a person or entity who created the signature. Type object Required publicKeyID Property Type Description commonName string Common name (e.g. openshift-signing-service). organization string Organization name. publicKeyID string If present, it is a human readable key id of public key belonging to the subject used to verify image signature. It should contain at least 64 lowest bits of public key's fingerprint (e.g. 0x685ebe62bf278440). 4.2. API endpoints The following API endpoints are available: /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreamimages/{name} GET : read the specified ImageStreamImage 4.2.1. /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreamimages/{name} Table 4.1. Global path parameters Parameter Type Description name string name of the ImageStreamImage HTTP method GET Description read the specified ImageStreamImage Table 4.2. HTTP responses HTTP code Response body 200 - OK ImageStreamImage schema 401 - Unauthorized Empty
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/image_apis/imagestreamimage-image-openshift-io-v1
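The only supported operation is the GET described above, which can also be driven through oc. A hedged sketch, with the namespace, image stream name, and digest standing in as placeholders:

# Read a single ImageStreamImage by <STREAM>@<DIGEST> (placeholders shown).
oc get imagestreamimage \
  "myapp@sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef" \
  -n my-project -o yaml

# The equivalent request against the raw endpoint listed in the table above.
oc get --raw "/apis/image.openshift.io/v1/namespaces/my-project/imagestreamimages/myapp@sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef"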
Preface
Preface Red Hat OpenStack Platform provides the foundation to build a private or public Infrastructure-as-a-Service (IaaS) cloud on top of Red Hat Enterprise Linux. It offers a massively scalable, fault-tolerant platform for the development of cloud-enabled workloads. This guide discusses procedures for creating and managing persistent storage. Within OpenStack, this storage is provided by three main services: Block Storage ( openstack-cinder ) Object Storage ( openstack-swift ) Shared File System Storage ( openstack-manila ) These services provide different types of persistent storage, each with its own set of advantages in different use cases. This guide discusses the suitability of each for general enterprise storage requirements. You can manage cloud storage using either the OpenStack dashboard or the command-line clients. Most procedures can be carried out using either method; some of the more advanced procedures can only be executed on the command line. This guide provides procedures for the dashboard where possible. Note For the complete suite of documentation for Red Hat OpenStack Platform, see Red Hat OpenStack Platform Documentation . Important This guide documents the use of crudini to apply some custom service settings. As such, you need to install the crudini package first:
[ "dnf install crudini -y" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/storage_guide/pr01
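The crudini utility mentioned above edits INI-style service configuration files in place. The file, section, and option names in this sketch are placeholders rather than settings from any specific procedure; substitute the values a given task asks you to change.

# Set a single option in a service configuration file (placeholders shown).
crudini --set /etc/cinder/cinder.conf DEFAULT some_option some_value

# Read the option back to verify what was written.
crudini --get /etc/cinder/cinder.conf DEFAULT some_option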
Quick Start Guide
Quick Start Guide Red Hat Trusted Profile Analyzer 1 Using the Red Hat Trusted Profile Analyzer managed service on Red Hat Hybrid Cloud Console Red Hat Trusted Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_trusted_profile_analyzer/1/html/quick_start_guide/index
Chapter 7. Monitoring brokers for problems
Chapter 7. Monitoring brokers for problems AMQ Broker includes an internal tool called the Critical Analyzer that actively monitors running brokers for problems such as deadlock conditions. In a production environment, a problem such as a deadlock condition can be caused by IO errors, a defective disk, memory shortage, or excess CPU usage caused by other processes. The Critical Analyzer periodically measures the response time for critical operations such as queue delivery (that is, adding messages to a queue on the broker) and journal operations. If the response time of a checked operation exceeds a configurable timeout value, the broker is considered unstable. In this case, you can configure the Critical Analyzer to simply log a message or take action to protect the broker, such as shutting down the broker or stopping the virtual machine (VM) that is running the broker. 7.1. Configuring the Critical Analyzer The following procedure shows how to configure the Critical Analyzer to monitor the broker for problems. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. The default configuration for the Critical Analyzer is shown below. Specify parameter values, as described below. critical-analyzer Specifies whether to enable or disable the Critical Analyzer tool. The default value is true , which means that the tool is enabled. critical-analyzer-timeout Timeout, in milliseconds, for the checks run by the Critical Analyzer. If the time taken by one of the checked operations exceeds this value, the broker is considered unstable. critical-analyzer-check-period Time period, in milliseconds, between consecutive checks by the Critical Analyzer for each operation. critical-analyzer-policy If the broker fails a check and is considered unstable, this parameter specifies whether the broker logs a message ( LOG ), stops the virtual machine (VM) hosting the broker ( HALT ), or shuts down the broker ( SHUTDOWN ). Based on the policy option that you have configured, if the response time for a critical operation exceeds the configured timeout value, you see output that resembles one of the following: critical-analyzer-policy = LOG critical-analyzer-policy = HALT critical-analyzer-policy = SHUTDOWN You also see a thread dump on the broker that resembles the following:
[ "<critical-analyzer>true</critical-analyzer> <critical-analyzer-timeout>120000</critical-analyzer-timeout> <critical-analyzer-check-period>60000</critical-analyzer-check-period> <critical-analyzer-policy>HALT</critical-analyzer-policy>", "[Artemis Critical Analyzer] 18:11:52,145 ERROR [org.apache.activemq.artemis.core.server] AMQ224081: The component org.apache.activemq.artemis.tests.integration.critical.CriticalSimpleTestUSD2@5af97850 is not responsive", "[Artemis Critical Analyzer] 18:10:00,831 ERROR [org.apache.activemq.artemis.core.server] AMQ224079: The process for the virtual machine will be killed, as component org.apache.activemq.artemis.tests.integration.critical.CriticalSimpleTestUSD2@5af97850 is not responsive", "[Artemis Critical Analyzer] 18:07:53,475 ERROR [org.apache.activemq.artemis.core.server] AMQ224080: The server process will now be stopped, as component org.apache.activemq.artemis.tests.integration.critical.CriticalSimpleTestUSD2@5af97850 is not responsive", "[Artemis Critical Analyzer] 18:10:00,836 ERROR [org.apache.activemq.artemis.core.server] AMQ222199: Thread dump: AMQ119001: Generating thread dump * =============================================================================== AMQ119002: Thread Thread[Thread-1 (ActiveMQ-scheduled-threads),5,main] name = Thread-1 (ActiveMQ-scheduled-threads) id = 19 group = java.lang.ThreadGroup[name=main,maxpri=10] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizerUSDConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.ScheduledThreadPoolExecutorUSDDelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1088) java.util.concurrent.ScheduledThreadPoolExecutorUSDDelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) java.util.concurrent.ThreadPoolExecutorUSDWorker.run(ThreadPoolExecutor.java:617) java.lang.Thread.run(Thread.java:745) =============================================================================== ..... .......... =============================================================================== AMQ119003: End Thread dump *" ]
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.12/html/managing_amq_broker/assembly-br-monitoring-brokers-for-problems_managing
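If you prefer the broker to keep running while you investigate, you can switch the policy to LOG and then watch for the Critical Analyzer log codes shown in the sample output above. The broker instance paths in this sketch are placeholders.

# Change the policy from HALT to LOG in the broker configuration (path is a placeholder).
sed -i 's|<critical-analyzer-policy>HALT</critical-analyzer-policy>|<critical-analyzer-policy>LOG</critical-analyzer-policy>|' \
  /var/opt/amq-broker/mybroker/etc/broker.xml

# Watch the broker log for the analyzer codes AMQ224079, AMQ224080, and AMQ224081.
grep -E 'AMQ2240(79|80|81)' /var/opt/amq-broker/mybroker/log/artemis.log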
21.4. Exposing Automount Maps to NIS Clients
21.4. Exposing Automount Maps to NIS Clients If any automount maps are already defined, you must manually add them to the NIS configuration in IdM. This ensures the maps are exposed to NIS clients. The NIS server is managed by a special plug-in entry in the IdM LDAP directory. Each NIS domain and map used by the NIS server is added as a sub-entry in this container. The NIS domain entry contains: the name of the NIS domain the name of the NIS map information on how to find the directory entries to use as the NIS map's contents information on which attributes to use as the NIS map's key and value Most of these settings are the same for every map. 21.4.1. Adding an Automount Map IdM stores the automount maps, grouped by the automount location, in the cn=automount branch of the IdM directory tree. You can add the NIS domain and maps using the LDAP protocol. For example, to add an automount map named auto.example in the default location for the example.com domain: Note Set the nis-domain attribute to the name of your NIS domain. The value set in the nis-base attribute must correspond: To an existing automount map set using the ipa automountmap-* commands. To an existing automount location set using the ipa automountlocation-* commands. After you set the entry, you can verify the automount map:
[ "ldapadd -h server.example.com -x -D \"cn=Directory Manager\" -W dn: nis-domain=example.com+nis-map=auto.example,cn=NIS Server,cn=plugins,cn=config objectClass: extensibleObject nis-domain: example.com nis-map: auto.example nis-filter: (objectclass=automount) nis-key-format: %{automountKey} nis-value-format: %{automountInformation} nis-base: automountmapname=auto.example,cn=default,cn=automount,dc=example,dc=com", "ypcat -k -d example.com -h server.example.com auto.example" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/Exposing_Automount_Maps_to_NIS_Clients
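After running the ldapadd and ypcat commands above, you can also confirm the plug-in entry directly over LDAP. This sketch mirrors the host name and bind DN used in the example; adjust them for your environment.

# Search the NIS Server plug-in configuration for the new map entry.
ldapsearch -x -h server.example.com -D "cn=Directory Manager" -W \
  -b "cn=NIS Server,cn=plugins,cn=config" \
  "(&(nis-domain=example.com)(nis-map=auto.example))"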
Packaging and distributing software
Packaging and distributing software Red Hat Enterprise Linux 8 Packaging software by using the RPM package management system Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/packaging_and_distributing_software/index
Chapter 6. Security
Chapter 6. Security Treating Matches Authoritatively in Lookups of sudoers Entries The sudo utility is able to consult the /etc/nsswitch.conf file for sudoers entries and look them up in files or by using LDAP. Previously, when a match was found in the first database of sudoers entries, the lookup operation still continued in other databases (including files). In Red Hat Enterprise Linux 6.4, an option was added to the /etc/nsswitch.conf file that allows users to specify a database after which a match of a sudoers entry is sufficient. This eliminates the need to query any other databases, thus improving the performance of sudoers entry lookups in large environments. This behavior is not enabled by default and must be configured by adding the [SUCCESS=return] string after a selected database. When a match is found in a database that directly precedes this string, no other databases are queried. Additional Password Checks for pam_cracklib The pam_cracklib module has been updated to add multiple new password strength checks: Certain authentication policies do not allow passwords which contain long continuous sequences such as "abcd" or "98765". This update makes it possible to limit the maximum length of these sequences by using the new maxsequence option. The pam_cracklib module now allows you to check whether a new password contains words from the GECOS field of entries in the /etc/passwd file. The GECOS field is used to store additional information about the user, such as the user's full name or a phone number, which could be used by an attacker in an attempt to crack the password. The pam_cracklib module now allows you to specify the maximum allowed number of consecutive characters of the same class (lowercase, uppercase, numeric, and special characters) in a password via the maxrepeatclass option. The pam_cracklib module now supports the enforce_for_root option, which enforces complexity restrictions on new passwords for the root account. Size Option for tmpfs Polyinstantiation On a system with multiple tmpfs mounts, it is necessary to limit their size to prevent them from occupying all of the system memory. PAM has been updated to allow users to specify the maximum size of the tmpfs file system mount when using tmpfs polyinstantiation by using the mntopts=size= <size> option in the /etc/namespace.conf configuration file. Locking Inactive Accounts Certain authentication policies require support for locking an account that has not been used for a certain period of time. Red Hat Enterprise Linux 6.4 introduces an additional function to the pam_lastlog module, which allows users to lock accounts after a configurable number of days. New Modes of Operation for libica The libica library, which contains a set of functions and utilities for accessing the IBM eServer Cryptographic Accelerator (ICA) hardware on IBM System z, has been modified to allow the use of new algorithms that support the Message Security Assist Extension 4 instructions in the Central Processor Assist for Cryptographic Function (CPACF). 
For the DES and 3DES block ciphers, the following modes of operation are now supported: Cipher Block Chaining with Ciphertext Stealing (CBC-CS) Cipher-based Message Authentication Code (CMAC) For the AES block cipher, the following modes of operation are now supported: Cipher Block Chaining with Ciphertext Stealing (CBC-CS) Counter with Cipher Block Chaining Message Authentication Code (CCM) Galois/Counter (GCM) This acceleration of complex cryptographic algorithms significantly improves the performance of IBM System z machines. Optimization of, and Support for, the zlib Compression Library for System z The zlib library, a general-purpose lossless data compression library, has been updated to improve compression performance on IBM System z. Fallback Firewall Configuration The iptables and ip6tables services now provide the ability to assign a fallback firewall configuration if the default configurations cannot be applied. If applying the firewall rules from /etc/sysconfig/iptables fails, the fallback file is applied if it exists. The fallback file is named /etc/sysconfig/iptables.fallback and uses the iptables-save file format (same as /etc/sysconfig/iptables ). If application of the fallback file also fails, there is no further fallback. To create a fallback file, use the standard firewall configuration tools and rename or copy the file to the fallback file. Use the same process for the ip6tables service, only replace all occurrences of "iptables" with "ip6tables".
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_release_notes/chap-security
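Two of the features above lend themselves to short, hedged examples. The nsswitch.conf line and the fallback-file command below are illustrative; adapt the database order and rule set to your own policy.

# /etc/nsswitch.conf: stop querying further sudoers databases once LDAP
# returns a match (illustrative ordering):
#   sudoers: ldap [SUCCESS=return] files

# Create the fallback firewall configuration from the currently loaded rules;
# it is applied only if /etc/sysconfig/iptables fails to load.
iptables-save > /etc/sysconfig/iptables.fallback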
Security APIs
Security APIs OpenShift Container Platform 4.15 Reference guide for security APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/security_apis/index
Chapter 9. Cleaning up data with support
Chapter 9. Cleaning up data with support MicroShift provides the microshift-cleanup-data script for various troubleshooting tasks, such as deleting all data, certificates, and container images. Warning Do not run this script without the guidance of product Support. Contact Support by Submitting a support case . 9.1. Data cleanup script overview You can see the usage and list available options of the microshift-cleanup-data script by running the script without arguments. Running the script without arguments does not delete any data or stop the MicroShift service. Procedure See the usage and list the available options of the microshift-cleanup-data script by entering the following command: Warning Some of the options in the following script operations are destructive and can cause data loss. See the procedure of each argument for warnings. USD microshift-cleanup-data Example output Stop all MicroShift services, also cleaning their data Usage: microshift-cleanup-data <--all [--keep-images] | --ovn | --cert> --all Clean all MicroShift and OVN data --keep-images Keep container images when cleaning all data --ovn Clean OVN data only --cert Clean certificates only 9.2. Cleaning all data and configuration You can clean up all the MicroShift data and configuration by running the microshift-cleanup-data script. When you run the script with the --all argument, you perform the following clean up actions: Stop and disable all MicroShift services Delete all MicroShift pods Delete all container image storage Reset network configuration Delete the /var/lib/microshift data directory Delete OVN-K networking configuration Prerequisites You are logged into MicroShift as an administrator with root-user access. You have filed a support case. Procedure Clean up all the MicroShift data and configuration by running the microshift-cleanup-data script with the --all argument, by entering the following command: Warning This option deletes all MicroShift data and user workloads. Use with caution. USD sudo microshift-cleanup-data --all Tip The script prompts you with a message to confirm the operation. Type 1 or Yes to continue. Any other entries cancel the clean up. Example output when you continue the clean up DATA LOSS WARNING: Do you wish to stop and clean ALL MicroShift data AND cri-o container workloads? 1) Yes 2) No #? 1 Stopping MicroShift services Disabling MicroShift services Removing MicroShift pods Removing crio image storage Deleting the br-int interface Killing conmon, pause and OVN processes Removing MicroShift configuration Removing OVN configuration MicroShift service was stopped MicroShift service was disabled Cleanup succeeded Example output when you cancel the clean up DATA LOSS WARNING: Do you wish to stop and clean ALL MicroShift data AND cri-o container workloads? 1) Yes 2) No #? no Aborting cleanup Important The MicroShift service is stopped and disabled after you run the script. Restart the MicroShift service by running the following command: USD sudo systemctl enable --now microshift 9.3. Cleaning all data and keeping the container images You can retain the MicroShift container images while cleaning all data by running the microshift-cleanup-data script with the --all and --keep-images arguments. Keeping the container images helps speed up MicroShift restart after data clean up because the necessary container images are already present locally when you start the service. 
When you run the script with the --all and --keep-images arguments, you perform the following clean up actions: Stop and disable all MicroShift services Delete all MicroShift pods Reset network configuration Delete the /var/lib/microshift data directory Delete OVN-K networking configuration Warning This option deletes all MicroShift data and user workloads. Use with caution. Prerequisites You are logged into MicroShift as an administrator with root-user access. You have filed a support case. Procedure Clean up all data and user workloads while retaining the MicroShift container images by running the microshift-cleanup-data script with the --all and --keep-images argument, by entering the following command: USD sudo microshift-cleanup-data --all --keep-images Example output DATA LOSS WARNING: Do you wish to stop and clean ALL MicroShift data AND cri-o container workloads? 1) Yes 2) No #? Yes Stopping MicroShift services Disabling MicroShift services Removing MicroShift pods Deleting the br-int interface Killing conmon, pause and OVN processes Removing MicroShift configuration Removing OVN configuration MicroShift service was stopped MicroShift service was disabled Cleanup succeeded Verify that the container images are still present by running the following command: USD sudo crictl images | awk '{print USD1}' Example output IMAGE quay.io/openshift-release-dev/ocp-v4.0-art-dev quay.io/openshift-release-dev/ocp-v4.0-art-dev quay.io/openshift-release-dev/ocp-v4.0-art-dev quay.io/openshift-release-dev/ocp-v4.0-art-dev quay.io/openshift-release-dev/ocp-v4.0-art-dev quay.io/openshift-release-dev/ocp-v4.0-art-dev quay.io/openshift-release-dev/ocp-v4.0-art-dev quay.io/openshift-release-dev/ocp-v4.0-art-dev quay.io/openshift-release-dev/ocp-v4.0-art-dev quay.io/openshift-release-dev/ocp-v4.0-art-dev registry.redhat.io/lvms4/topolvm-rhel9 registry.redhat.io/openshift4/ose-csi-external-provisioner registry.redhat.io/openshift4/ose-csi-external-resizer registry.redhat.io/openshift4/ose-csi-livenessprobe registry.redhat.io/openshift4/ose-csi-node-driver-registrar registry.redhat.io/ubi9 Important The MicroShift service is stopped and disabled after you run the script. Restart the MicroShift service by running the following command: USD sudo systemctl enable --now microshift 9.4. Cleaning the OVN-Kubernetes data You can clean up the OVN-Kubernetes (ONV-K) data by running the microshift-cleanup-data script. Use the script to reset OVN-K network configurations. When you run the script with the --ovn argument, you perform the following clean up actions: Stop all MicroShift services Delete all MicroShift pods Delete OVN-K networking configuration Prerequisites You are logged into MicroShift as an administrator with root-user access. You have filed a support case. Procedure Clean up the OVN-K data by running the microshift-cleanup-data script with the --ovn argument, by entering the following command: USD sudo microshift-cleanup-data --ovn Example output Stopping MicroShift services Removing MicroShift pods Killing conmon, pause and OVN processes Removing OVN configuration MicroShift service was stopped Cleanup succeeded Important The MicroShift service is stopped after you run the script. Restart the MicroShift service by running the following command: USD sudo systemctl start microshift 9.5. Cleaning custom certificates data You can use the microshift-cleanup-data script to reset MicroShift custom certificates so that they are recreated when the MicroShift service restarts. 
When you run the script with the --cert argument, you perform the following clean up actions: Stop all MicroShift services Delete all MicroShift pods Delete all MicroShift certificates Prerequisites You are logged into MicroShift as an administrator with root-user access. You have filed a support case. Procedure Clean up the MicroShift certificates by running the microshift-cleanup-data script with the --cert argument, by entering the following command: USD sudo microshift-cleanup-data --cert Example output Stopping MicroShift services Removing MicroShift pods Removing MicroShift certificates MicroShift service was stopped Cleanup succeeded Important The MicroShift service is stopped after you run the script. Restart the MicroShift service by running the following command: USD sudo systemctl start microshift
[ "microshift-cleanup-data", "Stop all MicroShift services, also cleaning their data Usage: microshift-cleanup-data <--all [--keep-images] | --ovn | --cert> --all Clean all MicroShift and OVN data --keep-images Keep container images when cleaning all data --ovn Clean OVN data only --cert Clean certificates only", "sudo microshift-cleanup-data --all", "DATA LOSS WARNING: Do you wish to stop and clean ALL MicroShift data AND cri-o container workloads? 1) Yes 2) No #? 1 Stopping MicroShift services Disabling MicroShift services Removing MicroShift pods Removing crio image storage Deleting the br-int interface Killing conmon, pause and OVN processes Removing MicroShift configuration Removing OVN configuration MicroShift service was stopped MicroShift service was disabled Cleanup succeeded", "DATA LOSS WARNING: Do you wish to stop and clean ALL MicroShift data AND cri-o container workloads? 1) Yes 2) No #? no Aborting cleanup", "sudo systemctl enable --now microshift", "sudo microshift-cleanup-data --all --keep-images", "DATA LOSS WARNING: Do you wish to stop and clean ALL MicroShift data AND cri-o container workloads? 1) Yes 2) No #? Yes Stopping MicroShift services Disabling MicroShift services Removing MicroShift pods Deleting the br-int interface Killing conmon, pause and OVN processes Removing MicroShift configuration Removing OVN configuration MicroShift service was stopped MicroShift service was disabled Cleanup succeeded", "sudo crictl images | awk '{print USD1}'", "IMAGE quay.io/openshift-release-dev/ocp-v4.0-art-dev quay.io/openshift-release-dev/ocp-v4.0-art-dev quay.io/openshift-release-dev/ocp-v4.0-art-dev quay.io/openshift-release-dev/ocp-v4.0-art-dev quay.io/openshift-release-dev/ocp-v4.0-art-dev quay.io/openshift-release-dev/ocp-v4.0-art-dev quay.io/openshift-release-dev/ocp-v4.0-art-dev quay.io/openshift-release-dev/ocp-v4.0-art-dev quay.io/openshift-release-dev/ocp-v4.0-art-dev quay.io/openshift-release-dev/ocp-v4.0-art-dev registry.redhat.io/lvms4/topolvm-rhel9 registry.redhat.io/openshift4/ose-csi-external-provisioner registry.redhat.io/openshift4/ose-csi-external-resizer registry.redhat.io/openshift4/ose-csi-livenessprobe registry.redhat.io/openshift4/ose-csi-node-driver-registrar registry.redhat.io/ubi9", "sudo systemctl enable --now microshift", "sudo microshift-cleanup-data --ovn", "Stopping MicroShift services Removing MicroShift pods Killing conmon, pause and OVN processes Removing OVN configuration MicroShift service was stopped Cleanup succeeded", "sudo systemctl start microshift", "sudo microshift-cleanup-data --cert", "Stopping MicroShift services Removing MicroShift pods Removing MicroShift certificates MicroShift service was stopped Cleanup succeeded", "sudo systemctl start microshift" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/troubleshooting/microshift-cleanup-data
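After restarting the MicroShift service following any of the cleanup procedures above, it can be useful to confirm that the service is active and that system pods are being recreated. The kubeconfig path below is assumed to be the default location generated by MicroShift and may differ on your system.

# Confirm the service is running.
sudo systemctl status microshift --no-pager

# Watch system pods come back up (kubeconfig path is the assumed default).
sudo oc get pods -A \
  --kubeconfig /var/lib/microshift/resources/kubeadmin/kubeconfig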
Chapter 9. Read-only fields
Chapter 9. Read-only fields Certain fields in the REST API are marked read-only. These usually include the URL of a resource, the ID, and occasionally some internal fields. For example, the 'created_by' attribute of each object indicates which user created the resource, and you cannot edit this. If you post some values and notice that they are not changing, these fields might be read-only.
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/automation_execution_api_overview/controller-api-readonly-fields
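A quick way to see this behavior is to write to a read-only field and read the resource back. The endpoint, resource ID, and token in this sketch are hypothetical placeholders, not documented paths; the point is only that the read-only field comes back unchanged.

# Attempt to overwrite a read-only field (placeholders throughout).
curl -s -k -H "Authorization: Bearer $TOKEN" \
     -H "Content-Type: application/json" \
     -X PATCH -d '{"created_by": 42}' \
     https://controller.example.com/api/v2/job_templates/7/

# Read the resource back; the read-only field is unchanged because writes to it are ignored.
curl -s -k -H "Authorization: Bearer $TOKEN" \
     https://controller.example.com/api/v2/job_templates/7/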
Chapter 1. Workload Availability for Red Hat OpenShift 24.4 release notes
Chapter 1. Workload Availability for Red Hat OpenShift 24.4 release notes Workload Availability for Red Hat OpenShift version 24.4 is now available. 1.1. New features and enhancements No new features and enhancements were included in this release. This release can be installed from the 4.16-eus channel and will be supported according to the Extended Update Support policy of OpenShift Container Platform version 4.16. 1.2. Deprecated and removed features No features were deprecated and/or removed in this release. 1.3. Bug fixes Fence Agent command line option can lead to privilege escalation. ( ECOPROJECT-2004 ) Cause: A flaw (CVE-2024-5651) was found in fence agents that rely on SSH/Telnet. This flaw could be abused to obtain a Remote Code Execution (RCE) primitive by supplying an arbitrary command to execute in the --ssh-path/--telnet-path arguments. A low-privilege user, that is, a user with developer access, can create a specifically crafted FenceAgentsRemediation for a fence agent supporting --ssh-path/--telnet-path arguments to execute arbitrary commands on the operator's pod. Consequence: This RCE leads to a privilege escalation, first as the service account running the operator, and then to another service account with cluster-admin privileges. Fix: Fence agents that rely on SSH/Telnet were removed. Result: Creating a specifically crafted FenceAgentsRemediation can no longer lead to privilege escalation. Node Health Check (NHC) remains in a Remediating status with the RemediationSkipped event. ( ECOPROJECT-2057 ) Cause: NHC Operator v0.8.0 introduces a new feature which allows you to use multiple remediation templates of the same kind when escalating a remediation. In order to allow this, the remediation Custom Resource's (CR) name no longer matches the node's name. However, this was not reflected in all instances. Consequence: Under some circumstances NHC sends false RemediationSkipped events for control plane nodes, misses cleanup of remediation CRs, and misses sending metrics for long-running remediations. Fix: NHC no longer relies on matching CR names and node names in all instances. Result: Remediation events, cleanup of remediation CRs, and metrics for long-running remediations all work as expected. Node Health Check (NHC) console plugin will not work with future versions of OpenShift Container Platform (OCP). ( ECOPROJECT-2078 ) Cause: The NHC Operator uses an old OCP console API. Consequence: The node remediation console plugin will no longer be activated on future versions of OCP. Fix: The NHC Operator now uses a newer OCP console API. Result: The node remediation console plugin continues to work on future versions of OCP. 1.4. Technology preview features There are no new Technology preview features in this release. 1.5. Known issues No known issues were identified in this release. 1.6. Mapping the Workload Availability for Red Hat OpenShift release to the Operator releases The RHWA version numbers are based on the year and the iteration in that year. For example, 24.4 is the fourth release in 2024. The Operator version numbers follow semantic versioning. Table 1.1. Release Mapping Workload Availability Version Operator Version 24.4 [a] Fence Agents Remediation (FAR) Operator 0.4.1 Machine Deletion Remediation (MDR) Operator 0.3.1 Node Health Check (NHC) Operator 0.8.2 Node Maintenance Operator (NMO) 5.3.1 [a] This version only contains updated Release Notes. For the relevant documentation for this version, see the 24.3 documentation.
null
https://docs.redhat.com/en/documentation/workload_availability_for_red_hat_openshift/24.4/html/release_notes/workload_availability_for_red_hat_openshift_24_4_release_notes
Part I. Packaging and deploying a Red Hat Decision Manager project
Part I. Packaging and deploying a Red Hat Decision Manager project As a business rules developer, you must build and deploy a developed Red Hat Decision Manager project to a KIE Server in order to begin using the services you have created in Red Hat Decision Manager. You can develop and deploy a project from Business Central, from an independent Maven project, from a Java application, or using a combination of various platforms. For example, you can develop a project in Business Central and deploy it using the KIE Server REST API, or develop a project in Maven configured with Business Central and deploy it using Business Central. Prerequisites The project to be deployed has been developed and tested. For projects in Business Central, consider using test scenarios to test the assets in your project. For example, see Testing a decision service using test scenarios .
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/deploying_and_managing_red_hat_decision_manager_services/assembly-packaging-deploying
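As noted above, one deployment path is the KIE Server REST API. The following sketch deploys a built KJAR to a KIE container over that API; the host, credentials, container ID, and Maven coordinates are placeholders for your own build, not values taken from this guide.

# Deploy a KJAR to a new KIE container (all values are placeholders).
curl -X PUT -u '<kie-server-user>:<password>' \
     -H "Content-Type: application/json" \
     -d '{"container-id":"my-project_1.0.0","release-id":{"group-id":"com.example","artifact-id":"my-project","version":"1.0.0"}}' \
     http://localhost:8080/kie-server/services/rest/server/containers/my-project_1.0.0

# List the containers currently deployed to the KIE Server.
curl -u '<kie-server-user>:<password>' -H "Accept: application/json" \
     http://localhost:8080/kie-server/services/rest/server/containers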
27.2. Types
27.2. Types The main permission control method used in SELinux targeted policy to provide advanced process isolation is Type Enforcement. All files and processes are labeled with a type: types define a SELinux domain for processes and a SELinux type for files. SELinux policy rules define how types access each other, whether it be a domain accessing a type, or a domain accessing another domain. Access is only allowed if a specific SELinux policy rule exists that allows it. The following types are used with Red Hat Gluster Storage. Different types allow you to configure flexible access: Process types glusterd_t The Gluster processes are associated with the glusterd_t SELinux type. Types on executables glusterd_initrc_exec_t The SELinux-specific script type context for the Gluster init script files. glusterd_exec_t The SELinux-specific executable type context for the Gluster executable files. Port Types gluster_port_t This type is defined for glusterd . By default, glusterd uses the 24007-24027 and 38465-38469 TCP ports. File Contexts glusterd_brick_t This type is used for files treated as glusterd brick data. glusterd_conf_t This type is associated with the glusterd configuration data, usually stored in the /etc directory. glusterd_log_t Files with this type are treated as glusterd log data, usually stored under the /var/log/ directory. glusterd_tmp_t This type is used for storing the glusterd temporary files in the /tmp directory. glusterd_var_lib_t This type allows storing the glusterd files in the /var/lib/ directory. glusterd_var_run_t This type allows storing the glusterd files in the /run/ or /var/run/ directory.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/sect-managing_confined_services-glusterfs-types
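The labels described above can be inspected with standard SELinux tooling. The brick path in this sketch is a placeholder, and the semanage command assumes the policycoreutils-python package is installed.

# Show the SELinux domain of the running glusterd process.
ps -eZ | grep glusterd_t

# List the TCP ports labeled gluster_port_t.
semanage port -l | grep gluster_port_t

# Check the file context applied to a brick directory (path is a placeholder).
ls -ldZ /rhgs/brick1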
Migration Toolkit for Containers
Migration Toolkit for Containers OpenShift Container Platform 4.16 Migrating to OpenShift Container Platform 4 Red Hat OpenShift Documentation Team
[ "status: conditions: - category: Warn lastTransitionTime: 2021-07-15T04:11:44Z message: Failed gathering extended PV usage information for PVs [nginx-logs nginx-html], please see MigAnalytic openshift-migration/ocp-24706-basicvolmig-migplan-1626319591-szwd6 for details reason: FailedRunningDf status: \"True\" type: ExtendedPVAnalysisFailed", "podman login registry.redhat.io", "cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./", "cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./", "oc run test --image registry.redhat.io/ubi9 --command sleep infinity", "oc create -f operator.yml", "namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-builders\" already exists 1 Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-pullers\" already exists", "oc create -f controller.yml", "oc get pods -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route]", "spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"", "oc get migrationcontroller <migration_controller> -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2", "oc replace -f migration-controller.yaml -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] migration_rsync_privileged: true", "apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] 
runAsRoot: true", "apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsUser: 10010001 runAsGroup: 3", "BUCKET=<your_bucket>", "REGION=<your_region>", "aws s3api create-bucket --bucket USDBUCKET --region USDREGION --create-bucket-configuration LocationConstraint=USDREGION 1", "aws iam create-user --user-name velero 1", "cat > velero-policy.json <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:DescribeVolumes\", \"ec2:DescribeSnapshots\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:GetObject\", \"s3:DeleteObject\", \"s3:PutObject\", \"s3:AbortMultipartUpload\", \"s3:ListMultipartUploadParts\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}/*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:ListBucket\", \"s3:GetBucketLocation\", \"s3:ListBucketMultipartUploads\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}\" ] } ] } EOF", "aws iam put-user-policy --user-name velero --policy-name velero --policy-document file://velero-policy.json", "aws iam create-access-key --user-name velero", "{ \"AccessKey\": { \"UserName\": \"velero\", \"Status\": \"Active\", \"CreateDate\": \"2017-07-31T22:24:41.576Z\", \"SecretAccessKey\": <AWS_SECRET_ACCESS_KEY>, \"AccessKeyId\": <AWS_ACCESS_KEY_ID> } }", "gcloud auth login", "BUCKET=<bucket> 1", "gsutil mb gs://USDBUCKET/", "PROJECT_ID=USD(gcloud config get-value project)", "gcloud iam service-accounts create velero --display-name \"Velero service account\"", "gcloud iam service-accounts list", "SERVICE_ACCOUNT_EMAIL=USD(gcloud iam service-accounts list --filter=\"displayName:Velero service account\" --format 'value(email)')", "ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get storage.objects.create storage.objects.delete storage.objects.get storage.objects.list iam.serviceAccounts.signBlob )", "gcloud iam roles create velero.server --project USDPROJECT_ID --title \"Velero Server\" --permissions \"USD(IFS=\",\"; echo \"USD{ROLE_PERMISSIONS[*]}\")\"", "gcloud projects add-iam-policy-binding USDPROJECT_ID --member serviceAccount:USDSERVICE_ACCOUNT_EMAIL --role projects/USDPROJECT_ID/roles/velero.server", "gsutil iam ch serviceAccount:USDSERVICE_ACCOUNT_EMAIL:objectAdmin gs://USD{BUCKET}", "gcloud iam service-accounts keys create credentials-velero --iam-account USDSERVICE_ACCOUNT_EMAIL", "az login", "AZURE_RESOURCE_GROUP=Velero_Backups", "az group create -n USDAZURE_RESOURCE_GROUP --location CentralUS 1", "AZURE_STORAGE_ACCOUNT_ID=\"veleroUSD(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')\"", "az storage account create --name USDAZURE_STORAGE_ACCOUNT_ID --resource-group USDAZURE_RESOURCE_GROUP --sku Standard_GRS --encryption-services blob --https-only true --kind BlobStorage --access-tier Hot", "BLOB_CONTAINER=velero", "az storage container create -n USDBLOB_CONTAINER --public-access off --account-name USDAZURE_STORAGE_ACCOUNT_ID", "AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv` AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv`", "AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name \"velero\" --role \"Contributor\" --query 'password' -o tsv --scopes 
/subscriptions/USDAZURE_SUBSCRIPTION_ID/resourceGroups/USDAZURE_RESOURCE_GROUP`", "AZURE_CLIENT_ID=`az ad app credential list --id <your_app_id>`", "cat << EOF > ./credentials-velero AZURE_SUBSCRIPTION_ID=USD{AZURE_SUBSCRIPTION_ID} AZURE_TENANT_ID=USD{AZURE_TENANT_ID} AZURE_CLIENT_ID=USD{AZURE_CLIENT_ID} AZURE_CLIENT_SECRET=USD{AZURE_CLIENT_SECRET} AZURE_RESOURCE_GROUP=USD{AZURE_RESOURCE_GROUP} AZURE_CLOUD_NAME=AzurePublicCloud EOF", "oc delete migrationcontroller <migration_controller>", "oc delete USD(oc get crds -o name | grep 'migration.openshift.io')", "oc delete USD(oc get crds -o name | grep 'velero')", "oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io')", "oc delete clusterrole migration-operator", "oc delete USD(oc get clusterroles -o name | grep 'velero')", "oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io')", "oc delete clusterrolebindings migration-operator", "oc delete USD(oc get clusterrolebindings -o name | grep 'velero')", "podman login registry.redhat.io", "cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./", "cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./", "grep openshift-migration-legacy-rhel8-operator ./mapping.txt | grep rhmtc", "registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a=<registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator", "containers: - name: ansible image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 1 - name: operator image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 2 env: - name: REGISTRY value: <registry.apps.example.com> 3", "oc create -f operator.yml", "namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-builders\" already exists 1 Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-pullers\" already exists", "oc create -f controller.yml", "oc get pods -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] 
stunnel_tcp_proxy: http://username:password@ip:port", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route]", "spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"", "oc get migrationcontroller <migration_controller> -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2", "oc replace -f migration-controller.yaml -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] migration_rsync_privileged: true", "apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsRoot: true", "apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] 
runAsUser: 10010001 runAsGroup: 3", "oc delete migrationcontroller <migration_controller>", "oc delete USD(oc get crds -o name | grep 'migration.openshift.io')", "oc delete USD(oc get crds -o name | grep 'velero')", "oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io')", "oc delete clusterrole migration-operator", "oc delete USD(oc get clusterroles -o name | grep 'velero')", "oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io')", "oc delete clusterrolebindings migration-operator", "oc delete USD(oc get clusterrolebindings -o name | grep 'velero')", "oc -n openshift-migration get sub", "NAME PACKAGE SOURCE CHANNEL mtc-operator mtc-operator mtc-operator-catalog release-v1.7 redhat-oadp-operator-stable-1.0-mtc-operator-catalog-openshift-marketplace redhat-oadp-operator mtc-operator-catalog stable-1.0", "oc -n openshift-migration get sub -o json | jq -r '.items[] | { name: .metadata.name, package: .spec.name, channel: .spec.channel }'", "{ \"name\": \"mtc-operator\", \"package\": \"mtc-operator\", \"channel\": \"release-v1.7\" } { \"name\": \"redhat-oadp-operator-stable-1.0-mtc-operator-catalog-openshift-marketplace\", \"package\": \"redhat-oadp-operator\", \"channel\": \"stable-1.0\" }", "oc -n openshift-migration patch subscription mtc-operator --type merge --patch '{\"spec\": {\"channel\": \"release-v1.8\"}}'", "subscription.operators.coreos.com/mtc-operator patched", "oc -n openshift-migration patch subscription redhat-oadp-operator-stable-1.0-mtc-operator-catalog-openshift-marketplace --type merge --patch '{\"spec\": {\"channel\":\"stable-1.2\"}}'", "subscription.operators.coreos.com/redhat-oadp-operator-stable-1.0-mtc-operator-catalog-openshift-marketplace patched", "oc -n openshift-migration get subscriptions.operators.coreos.com mtc-operator -o json | jq '.status | (.\"state\"==\"AtLatestKnown\")'", "oc -n openshift-migration get sub -o json | jq -r '.items[] | {name: .metadata.name, channel: .spec.channel }'", "{ \"name\": \"mtc-operator\", \"channel\": \"release-v1.8\" } { \"name\": \"redhat-oadp-operator-stable-1.0-mtc-operator-catalog-openshift-marketplace\", \"channel\": \"stable-1.2\" }", "Confirm that the `mtc-operator.v1.8.0` and `oadp-operator.v1.2.x` packages are installed by running the following command:", "oc -n openshift-migration get csv", "NAME DISPLAY VERSION REPLACES PHASE mtc-operator.v1.8.0 Migration Toolkit for Containers Operator 1.8.0 mtc-operator.v1.7.13 Succeeded oadp-operator.v1.2.2 OADP Operator 1.2.2 oadp-operator.v1.0.13 Succeeded", "podman login registry.redhat.io", "podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7:/operator.yml ./", "oc replace --force -f operator.yml", "oc scale -n openshift-migration --replicas=0 deployment/migration-operator", "oc scale -n openshift-migration --replicas=1 deployment/migration-operator", "oc -o yaml -n openshift-migration get deployment/migration-operator | grep image: | awk -F \":\" '{ print USDNF }'", "podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./", "oc create -f controller.yml", "oc get pods -n openshift-migration", "oc get migplan <migplan> -o yaml -n openshift-migration", "spec: indirectImageMigration: true indirectVolumeMigration: true", "oc replace -f migplan.yaml -n openshift-migration", "oc get migplan <migplan> -o yaml -n openshift-migration", "oc get pv", "oc get pods --all-namespaces | egrep -v 'Running | Completed'", "oc get pods 
--all-namespaces --field-selector=status.phase=Running -o json | jq '.items[]|select(any( .status.containerStatuses[]; .restartCount > 3))|.metadata.name'", "oc get csr -A | grep pending -i", "oc expose svc <app1-svc> --hostname <app1.apps.source.example.com> -n <app1-namespace>", "spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_backoff_limit: 40", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] migration_rsync_privileged: true", "apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsRoot: true", "apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsUser: 10010001 runAsGroup: 3", "kubenswrapper[3879]: W0326 16:30:36.749224 3879 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/8905d88e-6531-4d65-9c2a-eff11dc7eb29/volumes/kubernetes.io~csi/pvc-287d1988-3fd9-4517-a0c7-22539acd31e6/mount and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699 kubenswrapper[3879]: E0326 16:32:02.706363 3879 kubelet.go:1841] \"Unable to attach or mount volumes for pod; skipping pod\" err=\"unmounted volumes=[8db9d5b032dab17d4ea9495af12e085a], unattached volumes=[crane2-rsync-server-secret 8db9d5b032dab17d4ea9495af12e085a kube-api-access-dlbd2 crane2-stunnel-server-config crane2-stunnel-server-secret crane2-rsync-server-config]: timed out waiting for the condition\" pod=\"caboodle-preprod/rsync-server\" kubenswrapper[3879]: E0326 16:32:02.706496 3879 pod_workers.go:965] \"Error syncing pod, skipping\" err=\"unmounted volumes=[8db9d5b032dab17d4ea9495af12e085a], unattached volumes=[crane2-rsync-server-secret 8db9d5b032dab17d4ea9495af12e085a kube-api-access-dlbd2 crane2-stunnel-server-config crane2-stunnel-server-secret crane2-rsync-server-config]: timed out waiting for the condition\" pod=\"caboodle-preprod/rsync-server\" podUID=8905d88e-6531-4d65-9c2a-eff11dc7eb29", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: migration_rsync_super_privileged: true 1 azure_resource_group: \"\" cluster_name: host mig_namespace_limit: \"10\" mig_pod_limit: \"100\" mig_pv_limit: \"100\" migration_controller: true migration_log_reader: true migration_ui: true migration_velero: true olm_managed: true restic_timeout: 1h version: 1.8.3", "oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}'", "oc create token migration-controller -n openshift-migration", 
"eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ", "oc create route passthrough --service=docker-registry --port=5000 -n default", "oc create route passthrough --service=image-registry --port=5000 -n openshift-image-registry", "az group list", "{ \"id\": \"/subscriptions/...//resourceGroups/sample-rg-name\", \"location\": \"centralus\", \"name\": \"...\", \"properties\": { \"provisioningState\": \"Succeeded\" }, \"tags\": { \"kubernetes.io_cluster.sample-ld57c\": \"owned\", \"openshift_creationDate\": \"2019-10-25T23:28:57.988208+00:00\" }, \"type\": \"Microsoft.Resources/resourceGroups\" },", "oc create route passthrough --service=image-registry -n openshift-image-registry", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] 
rsync_endpoint_type: [NodePort|ClusterIP|Route]", "spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"", "oc get migrationcontroller <migration_controller> -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2", "oc replace -f migration-controller.yaml -n openshift-migration", "cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <host_cluster> namespace: openshift-migration spec: isHostCluster: true EOF", "cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: <cluster_secret> namespace: openshift-config type: Opaque data: saToken: <sa_token> 1 EOF", "oc sa get-token migration-controller -n openshift-migration | base64 -w 0", "cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <remote_cluster> 1 namespace: openshift-migration spec: exposedRegistryPath: <exposed_registry_route> 2 insecure: false 3 isHostCluster: false serviceAccountSecretRef: name: <remote_cluster_secret> 4 namespace: openshift-config url: <remote_cluster_url> 5 EOF", "oc describe MigCluster <cluster>", "cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: namespace: openshift-config name: <migstorage_creds> type: Opaque data: aws-access-key-id: <key_id_base64> 1 aws-secret-access-key: <secret_key_base64> 2 EOF", "echo -n \"<key>\" | base64 -w 0 1", "cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: name: <migstorage> namespace: openshift-migration spec: backupStorageConfig: awsBucketName: <bucket> 1 credsSecretRef: name: <storage_secret> 2 namespace: openshift-config backupStorageProvider: <storage_provider> 3 volumeSnapshotConfig: credsSecretRef: name: <storage_secret> 4 namespace: openshift-config volumeSnapshotProvider: <storage_provider> 5 EOF", "oc describe migstorage <migstorage>", "cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: destMigClusterRef: name: <host_cluster> namespace: openshift-migration indirectImageMigration: true 1 indirectVolumeMigration: true 2 migStorageRef: name: <migstorage> 3 namespace: openshift-migration namespaces: - <source_namespace_1> 4 - <source_namespace_2> - <source_namespace_3>:<destination_namespace> 5 srcMigClusterRef: name: <remote_cluster> 6 namespace: openshift-migration EOF", "oc describe migplan <migplan> -n openshift-migration", "cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: <migmigration> namespace: openshift-migration spec: migPlanRef: name: <migplan> 1 namespace: openshift-migration quiescePods: true 2 stage: false 3 rollback: false 4 EOF", "oc watch migmigration <migmigration> -n openshift-migration", "Name: c8b034c0-6567-11eb-9a4f-0bc004db0fbc Namespace: openshift-migration Labels: migration.openshift.io/migplan-name=django Annotations: openshift.io/touch: e99f9083-6567-11eb-8420-0a580a81020c API Version: migration.openshift.io/v1alpha1 Kind: MigMigration Spec: Mig Plan Ref: Name: migplan Namespace: openshift-migration Stage: false Status: Conditions: Category: Advisory Last Transition Time: 2021-02-02T15:04:09Z Message: Step: 19/47 Reason: InitialBackupCreated Status: True Type: Running Category: Required 
Last Transition Time: 2021-02-02T15:03:19Z Message: The migration is ready. Status: True Type: Ready Category: Required Durable: true Last Transition Time: 2021-02-02T15:04:05Z Message: The migration registries are healthy. Status: True Type: RegistriesHealthy Itinerary: Final Observed Digest: 7fae9d21f15979c71ddc7dd075cb97061895caac5b936d92fae967019ab616d5 Phase: InitialBackupCreated Pipeline: Completed: 2021-02-02T15:04:07Z Message: Completed Name: Prepare Started: 2021-02-02T15:03:18Z Message: Waiting for initial Velero backup to complete. Name: Backup Phase: InitialBackupCreated Progress: Backup openshift-migration/c8b034c0-6567-11eb-9a4f-0bc004db0fbc-wpc44: 0 out of estimated total of 0 objects backed up (5s) Started: 2021-02-02T15:04:07Z Message: Not started Name: StageBackup Message: Not started Name: StageRestore Message: Not started Name: DirectImage Message: Not started Name: DirectVolume Message: Not started Name: Restore Message: Not started Name: Cleanup Start Timestamp: 2021-02-02T15:03:18Z Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Running 57s migmigration_controller Step: 2/47 Normal Running 57s migmigration_controller Step: 3/47 Normal Running 57s (x3 over 57s) migmigration_controller Step: 4/47 Normal Running 54s migmigration_controller Step: 5/47 Normal Running 54s migmigration_controller Step: 6/47 Normal Running 52s (x2 over 53s) migmigration_controller Step: 7/47 Normal Running 51s (x2 over 51s) migmigration_controller Step: 8/47 Normal Ready 50s (x12 over 57s) migmigration_controller The migration is ready. Normal Running 50s migmigration_controller Step: 9/47 Normal Running 50s migmigration_controller Step: 10/47", "- hosts: localhost gather_facts: false tasks: - name: get pod name shell: oc get po --all-namespaces", "- hosts: localhost gather_facts: false tasks: - name: Get pod k8s_info: kind: pods api: v1 namespace: openshift-migration name: \"{{ lookup( 'env', 'HOSTNAME') }}\" register: pods - name: Print pod name debug: msg: \"{{ pods.resources[0].metadata.name }}\"", "- hosts: localhost gather_facts: false tasks: - name: Set a boolean set_fact: do_fail: true - name: \"fail\" fail: msg: \"Cause a failure\" when: do_fail", "- hosts: localhost gather_facts: false tasks: - set_fact: namespaces: \"{{ (lookup( 'env', 'MIGRATION_NAMESPACES')).split(',') }}\" - debug: msg: \"{{ item }}\" with_items: \"{{ namespaces }}\" - debug: msg: \"{{ lookup( 'env', 'MIGRATION_PLAN_NAME') }}\"", "oc edit migrationcontroller <migration_controller> -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: disable_image_migration: true 1 disable_pv_migration: true 2 additional_excluded_resources: 3 - resource1 - resource2", "oc get deployment -n openshift-migration migration-controller -o yaml | grep EXCLUDED_RESOURCES -A1", "name: EXCLUDED_RESOURCES value: resource1,resource2,imagetags,templateinstances,clusterserviceversions,packagemanifests,subscriptions,servicebrokers,servicebindings,serviceclasses,serviceinstances,serviceplans,imagestreams,persistentvolumes,persistentvolumeclaims", "spec: namespaces: - namespace_2 - namespace_1:namespace_2", "spec: namespaces: - namespace_1:namespace_1", "spec: namespaces: - namespace_1", "apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: selection: action: skip", "apiVersion: 
migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: name: <source_pvc>:<destination_pvc> 1", "apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: pvc-095a6559-b27f-11eb-b27f-021bddcaf6e4 proposedCapacity: 10Gi pvc: accessModes: - ReadWriteMany hasReference: true name: mysql namespace: mysql-persistent selection: action: <copy> 1 copyMethod: <filesystem> 2 verify: true 3 storageClass: <gp2> 4 accessMode: <ReadWriteMany> 5 storageClass: cephfs", "apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: \"\" - kind: <kind> group: \"\"", "apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: \"\" - kind: <kind> group: \"\" labelSelector: matchLabels: <label> 2", "apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: generateName: <migplan> namespace: openshift-migration spec: migPlanRef: name: <migplan> namespace: openshift-migration stage: false", "oc edit migrationcontroller -n openshift-migration", "mig_controller_limits_cpu: \"1\" 1 mig_controller_limits_memory: \"10Gi\" 2 mig_controller_requests_cpu: \"100m\" 3 mig_controller_requests_memory: \"350Mi\" 4 mig_pv_limit: 100 5 mig_pod_limit: 100 6 mig_namespace_limit: 10 7", "oc patch migrationcontroller migration-controller -p '{\"spec\":{\"enable_dvm_pv_resizing\":true}}' \\ 1 --type='merge' -n openshift-migration", "oc patch migrationcontroller migration-controller -p '{\"spec\":{\"pv_resizing_threshold\":41}}' \\ 1 --type='merge' -n openshift-migration", "status: conditions: - category: Warn durable: true lastTransitionTime: \"2021-06-17T08:57:01Z\" message: 'Capacity of the following volumes will be automatically adjusted to avoid disk capacity issues in the target cluster: [pvc-b800eb7b-cf3b-11eb-a3f7-0eae3e0555f3]' reason: Done status: \"False\" type: PvCapacityAdjustmentRequired", "oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_enable_cache\", \"value\": true}]'", "oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_limits_memory\", \"value\": <10Gi>}]'", "oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_requests_memory\", \"value\": <350Mi>}]'", "apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_image_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration namespaces: 1 - <source_namespace_1> - <source_namespace_2>:<destination_namespace_3> 2", "apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageStreamMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_image_stream_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration imageStreamRef: name: 
<image_stream> namespace: <source_image_stream_namespace> destNamespace: <destination_image_stream_namespace>", "apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigration metadata: name: <direct_volume_migration> namespace: openshift-migration spec: createDestinationNamespaces: false 1 deleteProgressReportingCRs: false 2 destMigClusterRef: name: <host_cluster> 3 namespace: openshift-migration persistentVolumeClaims: - name: <pvc> 4 namespace: <pvc_namespace> srcMigClusterRef: name: <source_cluster> namespace: openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigrationProgress metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_volume_migration_progress> spec: clusterRef: name: <source_cluster> namespace: openshift-migration podRef: name: <rsync_pod> namespace: openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigAnalytic metadata: annotations: migplan: <migplan> name: <miganalytic> namespace: openshift-migration labels: migplan: <migplan> spec: analyzeImageCount: true 1 analyzeK8SResources: true 2 analyzePVCapacity: true 3 listImages: false 4 listImagesLimit: 50 5 migPlanRef: name: <migplan> namespace: openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: labels: controller-tools.k8s.io: \"1.0\" name: <host_cluster> 1 namespace: openshift-migration spec: isHostCluster: true 2 The 'azureResourceGroup' parameter is relevant only for Microsoft Azure. azureResourceGroup: <azure_resource_group> 3 caBundle: <ca_bundle_base64> 4 insecure: false 5 refresh: false 6 The 'restartRestic' parameter is relevant for a source cluster. restartRestic: true 7 The following parameters are relevant for a remote cluster. exposedRegistryPath: <registry_route> 8 url: <destination_cluster_url> 9 serviceAccountSecretRef: name: <source_secret> 10 namespace: openshift-config", "apiVersion: migration.openshift.io/v1alpha1 kind: MigHook metadata: generateName: <hook_name_prefix> 1 name: <mighook> 2 namespace: openshift-migration spec: activeDeadlineSeconds: 1800 3 custom: false 4 image: <hook_image> 5 playbook: <ansible_playbook_base64> 6 targetCluster: source 7", "apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migmigration> namespace: openshift-migration spec: canceled: false 1 rollback: false 2 stage: false 3 quiescePods: true 4 keepAnnotations: true 5 verify: false 6 migPlanRef: name: <migplan> namespace: openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migplan> namespace: openshift-migration spec: closed: false 1 srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration hooks: 2 - executionNamespace: <namespace> 3 phase: <migration_phase> 4 reference: name: <hook> 5 namespace: <hook_namespace> 6 serviceAccount: <service_account> 7 indirectImageMigration: true 8 indirectVolumeMigration: false 9 migStorageRef: name: <migstorage> namespace: openshift-migration namespaces: - <source_namespace_1> 10 - <source_namespace_2> - <source_namespace_3>:<destination_namespace_4> 11 refresh: false 12", "apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migstorage> namespace: openshift-migration spec: backupStorageProvider: <backup_storage_provider> 1 volumeSnapshotProvider: 
<snapshot_storage_provider> 2 backupStorageConfig: awsBucketName: <bucket> 3 awsRegion: <region> 4 credsSecretRef: namespace: openshift-config name: <storage_secret> 5 awsKmsKeyId: <key_id> 6 awsPublicUrl: <public_url> 7 awsSignatureVersion: <signature_version> 8 volumeSnapshotConfig: awsRegion: <region> 9 credsSecretRef: namespace: openshift-config name: <storage_secret> 10 refresh: false 11", "oc -n openshift-migration get pods | grep log", "oc -n openshift-migration logs -f <mig-log-reader-pod> -c color 1", "oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.8", "oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.8 -- /usr/bin/gather_metrics_dump", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> <command> <cr_name>", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero --help", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> describe <cr_name>", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> logs <cr_name>", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf", "oc get migmigration <migmigration> -o yaml", "status: conditions: - category: Warn durable: true lastTransitionTime: \"2021-01-26T20:48:40Z\" message: 'Final Restore openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf: partially failed on destination cluster' status: \"True\" type: VeleroFinalRestorePartiallyFailed - category: Advisory durable: true lastTransitionTime: \"2021-01-26T20:48:42Z\" message: The migration has completed with warnings, please look at `Warn` conditions. 
reason: Completed status: \"True\" type: SucceededWithWarnings", "oc -n {namespace} exec deployment/velero -c velero -- ./velero restore describe <restore>", "Phase: PartiallyFailed (run 'velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf' for more information) Errors: Velero: <none> Cluster: <none> Namespaces: migration-example: error restoring example.com/migration-example/migration-example: the server could not find the requested resource", "oc -n {namespace} exec deployment/velero -c velero -- ./velero restore logs <restore>", "time=\"2021-01-26T20:48:37Z\" level=info msg=\"Attempting to restore migration-example: migration-example\" logSource=\"pkg/restore/restore.go:1107\" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf time=\"2021-01-26T20:48:37Z\" level=info msg=\"error restoring migration-example: the server could not find the requested resource\" logSource=\"pkg/restore/restore.go:1170\" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf", "labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93", "labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93", "oc get migmigration -n openshift-migration", "NAME AGE 88435fe0-c9f8-11e9-85e6-5d593ce65e10 6m42s", "oc describe migmigration 88435fe0-c9f8-11e9-85e6-5d593ce65e10 -n openshift-migration", "name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10 namespace: openshift-migration labels: <none> annotations: touch: 3b48b543-b53e-4e44-9d34-33563f0f8147 apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: creationTimestamp: 2019-08-29T01:01:29Z generation: 20 resourceVersion: 88179 selfLink: /apis/migration.openshift.io/v1alpha1/namespaces/openshift-migration/migmigrations/88435fe0-c9f8-11e9-85e6-5d593ce65e10 uid: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 spec: migPlanRef: name: socks-shop-mig-plan namespace: openshift-migration quiescePods: true stage: false status: conditions: category: Advisory durable: True lastTransitionTime: 2019-08-29T01:03:40Z message: The migration has completed successfully. 
reason: Completed status: True type: Succeeded phase: Completed startTimestamp: 2019-08-29T01:01:29Z events: <none>", "apiVersion: velero.io/v1 kind: Backup metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: \"true\" openshift.io/migration-registry: 172.30.105.179:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-44dd3bd5-c9f8-11e9-95ad-0205fe66cbb6 openshift.io/orig-reclaim-policy: delete creationTimestamp: \"2019-08-29T01:03:15Z\" generateName: 88435fe0-c9f8-11e9-85e6-5d593ce65e10- generation: 1 labels: app.kubernetes.io/part-of: migration migmigration: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 migration-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 velero.io/storage-location: myrepo-vpzq9 name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 namespace: openshift-migration resourceVersion: \"87313\" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/backups/88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 uid: c80dbbc0-c9f8-11e9-95ad-0205fe66cbb6 spec: excludedNamespaces: [] excludedResources: [] hooks: resources: [] includeClusterResources: null includedNamespaces: - sock-shop includedResources: - persistentvolumes - persistentvolumeclaims - namespaces - imagestreams - imagestreamtags - secrets - configmaps - pods labelSelector: matchLabels: migration-included-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 storageLocation: myrepo-vpzq9 ttl: 720h0m0s volumeSnapshotLocations: - myrepo-wv6fx status: completionTimestamp: \"2019-08-29T01:02:36Z\" errors: 0 expiration: \"2019-09-28T01:02:35Z\" phase: Completed startTimestamp: \"2019-08-29T01:02:35Z\" validationErrors: null version: 1 volumeSnapshotsAttempted: 0 volumeSnapshotsCompleted: 0 warnings: 0", "apiVersion: velero.io/v1 kind: Restore metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: \"true\" openshift.io/migration-registry: 172.30.90.187:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-36f54ca7-c925-11e9-825a-06fa9fb68c88 creationTimestamp: \"2019-08-28T00:09:49Z\" generateName: e13a1b60-c927-11e9-9555-d129df7f3b96- generation: 3 labels: app.kubernetes.io/part-of: migration migmigration: e18252c9-c927-11e9-825a-06fa9fb68c88 migration-final-restore: e18252c9-c927-11e9-825a-06fa9fb68c88 name: e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx namespace: openshift-migration resourceVersion: \"82329\" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/restores/e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx uid: 26983ec0-c928-11e9-825a-06fa9fb68c88 spec: backupName: e13a1b60-c927-11e9-9555-d129df7f3b96-sz24f excludedNamespaces: null excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io includedNamespaces: null includedResources: null namespaceMapping: null restorePVs: true status: errors: 0 failureReason: \"\" phase: Completed validationErrors: null warnings: 15", "oc describe migmigration <pod> -n openshift-migration", "Some or all transfer pods are not running for more than 10 mins on destination cluster", "oc get namespace <namespace> -o yaml 1", "oc edit namespace <namespace>", "apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: \"region=east\"", "echo -n | openssl s_client -connect <host_FQDN>:<port> \\ 1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > <ca_bundle.cert> 2", "oc logs <Velero_Pod> -n openshift-migration", "level=error msg=\"Error checking repository for stale locks\" 
error=\"error getting backup storage location: BackupStorageLocation.velero.io \\\"ts-dpa-1\\\" not found\" error.file=\"/remote-source/src/github.com/vmware-tanzu/velero/pkg/restic/repository_manager.go:259\"", "level=error msg=\"Error backing up item\" backup=velero/monitoring error=\"timed out waiting for all PodVolumeBackups to complete\" error.file=\"/go/src/github.com/heptio/velero/pkg/restic/backupper.go:165\" error.function=\"github.com/heptio/velero/pkg/restic.(*backupper).BackupPodVolumes\" group=v1", "spec: restic_timeout: 1h 1", "status: conditions: - category: Warn durable: true lastTransitionTime: 2020-04-16T20:35:16Z message: There were verify errors found in 1 Restic volume restores. See restore `<registry-example-migration-rvwcm>` for details 1 status: \"True\" type: ResticVerifyErrors 2", "oc describe <registry-example-migration-rvwcm> -n openshift-migration", "status: phase: Completed podVolumeRestoreErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration podVolumeRestoreResticErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration", "oc describe <migration-example-rvwcm-98t49>", "completionTimestamp: 2020-05-01T20:49:12Z errors: 1 resticErrors: 1 resticPod: <restic-nr2v5>", "oc logs -f <restic-nr2v5>", "backup=openshift-migration/<backup_id> controller=pod-volume-backup error=\"fork/exec /usr/bin/restic: permission denied\" error.file=\"/go/src/github.com/vmware-tanzu/velero/pkg/controller/pod_volume_backup_controller.go:280\" error.function=\"github.com/vmware-tanzu/velero/pkg/controller.(*podVolumeBackupController).processBackup\" logSource=\"pkg/controller/pod_volume_backup_controller.go:280\" name=<backup_id> namespace=openshift-migration", "spec: restic_supplemental_groups: <group_id> 1", "kubenswrapper[3879]: W0326 16:30:36.749224 3879 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/8905d88e-6531-4d65-9c2a-eff11dc7eb29/volumes/kubernetes.io~csi/pvc-287d1988-3fd9-4517-a0c7-22539acd31e6/mount and fsGroup set. 
If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699 kubenswrapper[3879]: E0326 16:32:02.706363 3879 kubelet.go:1841] \"Unable to attach or mount volumes for pod; skipping pod\" err=\"unmounted volumes=[8db9d5b032dab17d4ea9495af12e085a], unattached volumes=[crane2-rsync-server-secret 8db9d5b032dab17d4ea9495af12e085a kube-api-access-dlbd2 crane2-stunnel-server-config crane2-stunnel-server-secret crane2-rsync-server-config]: timed out waiting for the condition\" pod=\"caboodle-preprod/rsync-server\" kubenswrapper[3879]: E0326 16:32:02.706496 3879 pod_workers.go:965] \"Error syncing pod, skipping\" err=\"unmounted volumes=[8db9d5b032dab17d4ea9495af12e085a], unattached volumes=[crane2-rsync-server-secret 8db9d5b032dab17d4ea9495af12e085a kube-api-access-dlbd2 crane2-stunnel-server-config crane2-stunnel-server-secret crane2-rsync-server-config]: timed out waiting for the condition\" pod=\"caboodle-preprod/rsync-server\" podUID=8905d88e-6531-4d65-9c2a-eff11dc7eb29", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: migration_rsync_super_privileged: true 1 azure_resource_group: \"\" cluster_name: host mig_namespace_limit: \"10\" mig_pod_limit: \"100\" mig_pv_limit: \"100\" migration_controller: true migration_log_reader: true migration_ui: true migration_velero: true olm_managed: true restic_timeout: 1h version: 1.8.3", "cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migmigration> namespace: openshift-migration spec: rollback: true migPlanRef: name: <migplan> 1 namespace: openshift-migration EOF", "oc delete USD(oc get pods -l migration.openshift.io/is-stage-pod -n <namespace>) 1", "oc scale deployment <deployment> --replicas=<premigration_replicas>", "apiVersion: extensions/v1beta1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: \"1\" migration.openshift.io/preQuiesceReplicas: \"1\"", "oc get pod -n <namespace>" ]
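The commands above already show how to create, describe, and roll back a MigMigration resource; when scripting around a migration it can also be convenient to read just the phase from its status. The following is a minimal sketch, not taken from the product steps above, that reuses the same <migmigration> placeholder used elsewhere in this section:

# Print only the current phase of the MigMigration (for example, InitialBackupCreated or Completed)
oc get migmigration <migmigration> -n openshift-migration -o jsonpath='{.status.phase}{"\n"}'

# Or keep watching the listing until the migration finishes
oc get migmigration -n openshift-migration -w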
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/migration_toolkit_for_containers/index
4.4. Live KVM Migration with virsh
4.4. Live KVM Migration with virsh A guest virtual machine can be migrated to another host physical machine with the virsh command. The migrate command accepts parameters in the following format: Note that the --live option may be eliminated when live migration is not desired. Additional options are listed in Section 4.4.2, "Additional Options for the virsh migrate Command" . The GuestName parameter represents the name of the guest virtual machine which you want to migrate. The DestinationURL parameter is the connection URL of the destination host physical machine. The destination system must run the same version of Red Hat Enterprise Linux, be using the same hypervisor and have libvirt running. Note The DestinationURL parameter for normal migration and peer-to-peer migration has different semantics: normal migration: the DestinationURL is the URL of the target host physical machine as seen from the source guest virtual machine. peer-to-peer migration: DestinationURL is the URL of the target host physical machine as seen from the source host physical machine. Once the command is entered, you will be prompted for the root password of the destination system. Important An entry for the destination host physical machine, in the /etc/hosts file on the source server is required for migration to succeed. Enter the IP address and host name for the destination host physical machine in this file as shown in the following example, substituting your destination host physical machine's IP address and host name: Example: Live Migration with virsh This example migrates from host1.example.com to host2.example.com . Change the host physical machine names for your environment. This example migrates a virtual machine named guest1-rhel6-64 . This example assumes you have fully configured shared storage and meet all the prerequisites (listed here: Migration requirements ). Verify the guest virtual machine is running From the source system, host1.example.com , verify guest1-rhel6-64 is running: Migrate the guest virtual machine Execute the following command to live migrate the guest virtual machine to the destination, host2.example.com . Append /system to the end of the destination URL to tell libvirt that you need full access. Once the command is entered you will be prompted for the root password of the destination system. Wait The migration may take some time depending on load and the size of the guest virtual machine. virsh only reports errors. The guest virtual machine continues to run on the source host physical machine until fully migrated. Note During the migration, the completion percentage indicator number is likely to decrease multiple times before the process finishes. This is caused by a recalculation of the overall progress, as source memory pages that are changed after the migration starts need to be copied again. Therefore, this behavior is expected and does not indicate any problems with the migration. Verify the guest virtual machine has arrived at the destination host From the destination system, host2.example.com , verify guest1-rhel6-64 is running: The live migration is now complete. Note libvirt supports a variety of networking methods including TLS/SSL, UNIX sockets, SSH, and unencrypted TCP. Refer to Chapter 5, Remote Management of Guests for more information on using other methods. Note Non-running guest virtual machines cannot be migrated with the virsh migrate command. To migrate a non-running guest virtual machine, the following script should be used: 4.4.1.
Additional Tips for Migration with virsh It is possible to perform multiple, concurrent live migrations where each migration runs in a separate command shell. However, this should be done with caution and should involve careful calculations as each migration instance uses one MAX_CLIENT from each side (source and target). As the default setting is 20, there is enough to run 10 instances without changing the settings. Should you need to change the settings, refer to the procedure Procedure 4.1, "Configuring libvirtd.conf" . Open the libvirtd.conf file as described in Procedure 4.1, "Configuring libvirtd.conf" . Look for the Processing controls section. Change the max_clients and max_workers parameters settings. It is recommended that the number be the same in both parameters. The max_clients will use 2 clients per migration (one per side) and max_workers will use 1 worker on the source and 0 workers on the destination during the perform phase and 1 worker on the destination during the finish phase. Important The max_clients and max_workers parameters settings are affected by all guest virtual machine connections to the libvirtd service. This means that any user that is using the same guest virtual machine and is performing a migration at the same time will also be beholden to the limits set in the max_clients and max_workers parameters settings. This is why the maximum value needs to be considered carefully before performing a concurrent live migration. Save the file and restart the service. Note There may be cases where a migration connection drops because there are too many ssh sessions that have been started, but not yet authenticated. By default, sshd allows only 10 sessions to be in a "pre-authenticated state" at any time. This setting is controlled by the MaxStartups parameter in the sshd configuration file (located here: /etc/ssh/sshd_config ), which may require some adjustment. Adjusting this parameter should be done with caution as the limitation is put in place to prevent DoS attacks (and over-use of resources in general). Setting this value too high will negate its purpose. To change this parameter, edit the file /etc/ssh/sshd_config , remove the # from the beginning of the MaxStartups line, and change the 10 (default value) to a higher number. Remember to save the file and restart the sshd service. For more information, refer to the sshd_config man page.
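To make the sshd adjustment concrete, the following is a minimal sketch of the edit described above; the value 30 is only an illustrative choice, not a recommendation from this guide:

# /etc/ssh/sshd_config: allow more unauthenticated ssh sessions at once than the default of 10
MaxStartups 30

Then restart the service so the new limit takes effect:

service sshd restart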
[ "virsh migrate --live GuestName DestinationURL", "10.0.0.20 host2.example.com", "virsh list Id Name State ---------------------------------- 10 guest1-rhel6-64 running", "virsh migrate --live guest1-rhel6-64 qemu+ssh://host2.example.com/system", "virsh list Id Name State ---------------------------------- 10 guest1-rhel6-64 running", "virsh dumpxml Guest1 > Guest1.xml virsh -c qemu+ssh://<target-system-FQDN> define Guest1.xml virsh undefine Guest1", "################################################################# # Processing controls # The maximum number of concurrent client connections to allow over all sockets combined. #max_clients = 20 The minimum limit sets the number of workers to start up initially. If the number of active clients exceeds this, then more threads are spawned, upto max_workers limit. Typically you'd want max_workers to equal maximum number of clients allowed #min_workers = 5 #max_workers = 20 The number of priority workers. If all workers from above pool will stuck, some calls marked as high priority (notably domainDestroy) can be executed in this pool. #prio_workers = 5 Total global limit on concurrent RPC calls. Should be at least as large as max_workers. Beyond this, RPC requests will be read into memory and queued. This directly impact memory usage, currently each request requires 256 KB of memory. So by default upto 5 MB of memory is used # XXX this isn't actually enforced yet, only the per-client limit is used so far #max_requests = 20 Limit on concurrent requests from a single client connection. To avoid one client monopolizing the server this should be a small fraction of the global max_requests and max_workers parameter #max_client_requests = 5 #################################################################" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-Virtualization-KVM_live_migration-Live_KVM_migration_with_virsh
3.5.4. Processing Multiple Elements in an Array
3.5.4. Processing Multiple Elements in an Array Once you have collected enough information in an array, you will need to retrieve and process all elements in that array to make it useful. Consider Example 3.14, "vfsreads.stp" : the script collects information about how many VFS reads each process performs, but does not specify what to do with it. The obvious means for making Example 3.14, "vfsreads.stp" useful is to print the key pairs in the array reads , but how? The best way to process all key pairs in an array (as an iteration) is to use the foreach statement. Consider the following example: Example 3.15. cumulative-vfsreads.stp In the second probe of Example 3.15, "cumulative-vfsreads.stp" , the foreach statement uses the variable count to reference each iteration of a unique key in the array reads . The reads[count] array statement in the same probe retrieves the associated value of each unique key. Given what we know about the first probe in Example 3.15, "cumulative-vfsreads.stp" , the script prints VFS-read statistics every 3 seconds, displaying names of processes that performed a VFS-read along with a corresponding VFS-read count. Now, remember that the foreach statement in Example 3.15, "cumulative-vfsreads.stp" prints all iterations of process names in the array, and in no particular order. You can instruct the script to process the iterations in a particular order by using + (ascending) or - (descending). In addition, you can also limit the number of iterations the script needs to process with the limit value option. For example, consider the following replacement probe: This foreach statement instructs the script to process the elements in the array reads in descending order (of associated value). The limit 10 option instructs the foreach to only process the first ten iterations (that is, print the first 10, starting with the highest value).
[ "global reads probe vfs.read { reads[execname()] ++ } probe timer.s(3) { foreach (count in reads) printf(\"%s : %d \\n\", count, reads[count]) }", "probe timer.s(3) { foreach (count in reads- limit 10) printf(\"%s : %d \\n\", count, reads[count]) }" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_beginners_guide/arrayops-foreach
Chapter 1. Integrating Service Mesh with OpenShift Serverless
Chapter 1. Integrating Service Mesh with OpenShift Serverless The OpenShift Serverless Operator provides Kourier as the default ingress for Knative. However, you can use Service Mesh with OpenShift Serverless whether Kourier is enabled or not. Integrating with Kourier disabled allows you to configure additional networking and routing options that the Kourier ingress does not support, such as mTLS functionality. Note the following assumptions and limitations: All Knative internal components, as well as Knative Services, are part of the Service Mesh and have sidecar injection enabled. This means that strict mTLS is enforced within the whole mesh. All requests to Knative Services require an mTLS connection, with the client having to send its certificate, except calls coming from OpenShift Routing. OpenShift Serverless with Service Mesh integration can only target one service mesh. Multiple meshes can be present in the cluster, but OpenShift Serverless is only available on one of them. Changing the target ServiceMeshMemberRoll that OpenShift Serverless is part of, meaning moving OpenShift Serverless to another mesh, is not supported. The only way to change the targeted Service Mesh is to uninstall and reinstall OpenShift Serverless. 1.1. Prerequisites You have access to a Red Hat OpenShift Serverless account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have installed the Serverless Operator. You have installed the Red Hat OpenShift Service Mesh Operator. The examples in the following procedures use the domain example.com . The example certificate for this domain is used as a certificate authority (CA) that signs the subdomain certificate. To complete and verify these procedures in your deployment, you need either a certificate signed by a widely trusted public CA or a CA provided by your organization. Example commands must be adjusted according to your domain, subdomain, and CA. You must configure the wildcard certificate to match the domain of your OpenShift Container Platform cluster. For example, if your OpenShift Container Platform console address is https://console-openshift-console.apps.openshift.example.com , you must configure the wildcard certificate so that the domain is *.apps.openshift.example.com . For more information about configuring wildcard certificates, see the following topic about Creating a certificate to encrypt incoming external traffic . If you want to use any domain name, including those which are not subdomains of the default OpenShift Container Platform cluster domain, you must set up domain mapping for those domains. For more information, see the OpenShift Serverless documentation about Creating a custom domain mapping . Important OpenShift Serverless only supports the use of Red Hat OpenShift Service Mesh functionality that is explicitly documented in this guide, and does not support other undocumented features. Using Serverless 1.31 with Service Mesh is only supported with Service Mesh version 2.2 or later. For details and information on versions other than 1.31, see the "Red Hat OpenShift Serverless Supported Configurations" page. 1.2. Additional resources Red Hat OpenShift Serverless Supported Configurations Kourier and Istio ingresses 1.3. Creating a certificate to encrypt incoming external traffic By default, the Service Mesh mTLS feature only secures traffic inside of the Service Mesh itself, between the ingress gateway and individual pods that have sidecars.
To encrypt traffic as it flows into the OpenShift Container Platform cluster, you must generate a certificate before you enable the OpenShift Serverless and Service Mesh integration. Prerequisites You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated. You have installed the OpenShift Serverless Operator and Knative Serving. Install the OpenShift CLI ( oc ). You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads. Procedure Create a root certificate and private key that signs the certificates for your Knative services: USD openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 \ -subj '/O=Example Inc./CN=example.com' \ -keyout root.key \ -out root.crt Create a wildcard certificate: USD openssl req -nodes -newkey rsa:2048 \ -subj "/CN=*.apps.openshift.example.com/O=Example Inc." \ -keyout wildcard.key \ -out wildcard.csr Sign the wildcard certificate: USD openssl x509 -req -days 365 -set_serial 0 \ -CA root.crt \ -CAkey root.key \ -in wildcard.csr \ -out wildcard.crt Create a secret by using the wildcard certificate: USD oc create -n istio-system secret tls wildcard-certs \ --key=wildcard.key \ --cert=wildcard.crt This certificate is picked up by the gateways created when you integrate OpenShift Serverless with Service Mesh, so that the ingress gateway serves traffic with this certificate. 1.4. Integrating Service Mesh with OpenShift Serverless 1.4.1. Verifying installation prerequisites Before installing and configuring the Service Mesh integration with Serverless, verify that the prerequisites have been met. Procedure Check for conflicting gateways: Example command USD oc get gateway -A -o jsonpath='{range .items[*]}{@.metadata.namespace}{"/"}{@.metadata.name}{" "}{@.spec.servers}{"\n"}{end}' | column -t Example output knative-serving/knative-ingress-gateway [{"hosts":["*"],"port":{"name":"https","number":443,"protocol":"HTTPS"},"tls":{"credentialName":"wildcard-certs","mode":"SIMPLE"}}] knative-serving/knative-local-gateway [{"hosts":["*"],"port":{"name":"http","number":8081,"protocol":"HTTP"}}] This command should not return a Gateway that binds port: 443 and hosts: ["*"] , except the Gateways in knative-serving and Gateways that are part of another Service Mesh instance. Note The mesh that Serverless is part of must be distinct and preferably reserved only for Serverless workloads. That is because additional configuration, such as Gateways , might interfere with the Serverless gateways knative-local-gateway and knative-ingress-gateway . Red Hat OpenShift Service Mesh only allows one Gateway to claim a wildcard host binding ( hosts: ["*"] ) on the same port ( port: 443 ). If another Gateway is already binding this configuration, a separate mesh has to be created for Serverless workloads. Check whether Red Hat OpenShift Service Mesh istio-ingressgateway is exposed as type NodePort or LoadBalancer : Example command USD oc get svc -A | grep istio-ingressgateway Example output istio-system istio-ingressgateway ClusterIP 172.30.46.146 none> 15021/TCP,80/TCP,443/TCP 9m50s This command should not return a Service object of type NodePort or LoadBalancer . Note Cluster external Knative Services are expected to be called via OpenShift Ingress using OpenShift Routes. 
It is not supported to access Service Mesh directly, such as by exposing the istio-ingressgateway using a Service object with type NodePort or LoadBalancer . 1.4.2. Installing and configuring Service Mesh To integrate Serverless with Service Mesh, you need to install Service Mesh with a specific configuration. Procedure Create a ServiceMeshControlPlane resource in the istio-system namespace with the following configuration: Important If you have an existing ServiceMeshControlPlane object, make sure that you have the same configuration applied. apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: profiles: - default security: dataPlane: mtls: true 1 techPreview: meshConfig: defaultConfig: terminationDrainDuration: 35s 2 gateways: ingress: service: metadata: labels: knative: ingressgateway 3 proxy: networking: trafficControl: inbound: excludedPorts: 4 - 8444 # metrics - 8022 # serving: wait-for-drain k8s pre-stop hook 1 Enforce strict mTLS in the mesh. Only calls using a valid client certificate are allowed. 2 Serverless has a graceful termination for Knative Services of 30 seconds. istio-proxy needs to have a longer termination duration to make sure no requests are dropped. 3 Define a specific selector for the ingress gateway to target only the Knative gateway. 4 These ports are called by Kubernetes and cluster monitoring, which are not part of the mesh and cannot be called using mTLS. Therefore, these ports are excluded from the mesh. Add the namespaces that you would like to integrate with Service Mesh to the ServiceMeshMemberRoll object as members: Example servicemesh-member-roll.yaml configuration file apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system spec: members: 1 - knative-serving - knative-eventing - your-OpenShift-projects 1 A list of namespaces to be integrated with Service Mesh. Important This list of namespaces must include the knative-serving and knative-eventing namespaces. Apply the ServiceMeshMemberRoll resource: USD oc apply -f servicemesh-member-roll.yaml Create the necessary gateways so that Service Mesh can accept traffic. The following example uses the knative-local-gateway object with the ISTIO_MUTUAL mode (mTLS): Example istio-knative-gateways.yaml configuration file apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: knative-ingress-gateway namespace: knative-serving spec: selector: knative: ingressgateway servers: - port: number: 443 name: https protocol: HTTPS hosts: - "*" tls: mode: SIMPLE credentialName: <wildcard_certs> 1 --- apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: knative-local-gateway namespace: knative-serving spec: selector: knative: ingressgateway servers: - port: number: 8081 name: https protocol: HTTPS 2 tls: mode: ISTIO_MUTUAL 3 hosts: - "*" --- apiVersion: v1 kind: Service metadata: name: knative-local-gateway namespace: istio-system labels: experimental.istio.io/disable-gateway-port-translation: "true" spec: type: ClusterIP selector: istio: ingressgateway ports: - name: http2 port: 80 targetPort: 8081 1 Name of the secret containing the wildcard certificate. 2 3 The knative-local-gateway object serves HTTPS traffic and expects all clients to send requests using mTLS. This means that only traffic coming from within Service Mesh is possible. Workloads from outside the Service Mesh must use the external domain via OpenShift Routing. 
Apply the Gateway resources: USD oc apply -f istio-knative-gateways.yaml 1.4.3. Installing and configuring Serverless After installing Service Mesh, you need to install Serverless with a specific configuration. Procedure Install Knative Serving with the following KnativeServing custom resource, which enables the Istio integration: Example knative-serving-config.yaml configuration file apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: ingress: istio: enabled: true 1 deployments: 2 - name: activator annotations: "sidecar.istio.io/inject": "true" "sidecar.istio.io/rewriteAppHTTPProbers": "true" - name: autoscaler annotations: "sidecar.istio.io/inject": "true" "sidecar.istio.io/rewriteAppHTTPProbers": "true" config: istio: 3 gateway.knative-serving.knative-ingress-gateway: istio-ingressgateway.<your-istio-namespace>.svc.cluster.local local-gateway.knative-serving.knative-local-gateway: knative-local-gateway.<your-istio-namespace>.svc.cluster.local 1 Enable Istio integration. 2 Enable sidecar injection for Knative Serving data plane pods. 3 If your istio is not running in the istio-system namespace, you need to set these two flags with the correct namespace. Apply the KnativeServing resource: USD oc apply -f knative-serving-config.yaml Install Knative Eventing with the following KnativeEventing object, which enables the Istio integration: Example knative-eventing-config.yaml configuration file apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: config: features: istio: enabled 1 workloads: 2 - name: pingsource-mt-adapter annotations: "sidecar.istio.io/inject": "true" "sidecar.istio.io/rewriteAppHTTPProbers": "true" - name: imc-dispatcher annotations: "sidecar.istio.io/inject": "true" "sidecar.istio.io/rewriteAppHTTPProbers": "true" - name: mt-broker-ingress annotations: "sidecar.istio.io/inject": "true" "sidecar.istio.io/rewriteAppHTTPProbers": "true" - name: mt-broker-filter annotations: "sidecar.istio.io/inject": "true" "sidecar.istio.io/rewriteAppHTTPProbers": "true" 1 Enable Eventing Istio controller to create a DestinationRule for each InMemoryChannel or KafkaChannel service. 2 Enable sidecar injection for Knative Eventing pods. 
Apply the KnativeEventing resource: USD oc apply -f knative-eventing-config.yaml Install Knative Kafka with the following KnativeKafka custom resource, which enables the Istio integration: Example knative-kafka-config.yaml configuration file apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: name: knative-kafka namespace: knative-eventing spec: channel: enabled: true bootstrapServers: <bootstrap_servers> 1 source: enabled: true broker: enabled: true defaultConfig: bootstrapServers: <bootstrap_servers> 2 numPartitions: <num_partitions> replicationFactor: <replication_factor> sink: enabled: true workloads: 3 - name: kafka-controller annotations: "sidecar.istio.io/inject": "true" "sidecar.istio.io/rewriteAppHTTPProbers": "true" - name: kafka-broker-receiver annotations: "sidecar.istio.io/inject": "true" "sidecar.istio.io/rewriteAppHTTPProbers": "true" - name: kafka-broker-dispatcher annotations: "sidecar.istio.io/inject": "true" "sidecar.istio.io/rewriteAppHTTPProbers": "true" - name: kafka-channel-receiver annotations: "sidecar.istio.io/inject": "true" "sidecar.istio.io/rewriteAppHTTPProbers": "true" - name: kafka-channel-dispatcher annotations: "sidecar.istio.io/inject": "true" "sidecar.istio.io/rewriteAppHTTPProbers": "true" - name: kafka-source-dispatcher annotations: "sidecar.istio.io/inject": "true" "sidecar.istio.io/rewriteAppHTTPProbers": "true" - name: kafka-sink-receiver annotations: "sidecar.istio.io/inject": "true" "sidecar.istio.io/rewriteAppHTTPProbers": "true" 1 2 The Apache Kafka cluster URL, for example my-cluster-kafka-bootstrap.kafka:9092 . 3 Enable sidecar injection for Knative Kafka pods. Apply the KnativeKafka resource: USD oc apply -f knative-kafka-config.yaml Create a ServiceEntry resource to inform Service Mesh of the communication between KnativeKafka components and an Apache Kafka cluster: Example kafka-cluster-serviceentry.yaml configuration file apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: kafka-cluster namespace: knative-eventing spec: hosts: 1 - <bootstrap_servers_without_port> exportTo: - "." ports: 2 - number: 9092 name: tcp-plain protocol: TCP - number: 9093 name: tcp-tls protocol: TCP - number: 9094 name: tcp-sasl-tls protocol: TCP - number: 9095 name: tcp-sasl-tls protocol: TCP - number: 9096 name: tcp-tls protocol: TCP location: MESH_EXTERNAL resolution: NONE 1 The list of Apache Kafka cluster hosts, for example my-cluster-kafka-bootstrap.kafka . 2 Apache Kafka cluster listener ports. Note The listed ports in spec.ports are example TCP ports. The actual values depend on how the Apache Kafka cluster is configured. Apply the ServiceEntry resource: USD oc apply -f kafka-cluster-serviceentry.yaml 1.4.4. Verifying the integration After installing Service Mesh and Serverless with Istio enabled, you can verify that the integration works. Procedure Create a Knative Service that has sidecar injection enabled and uses a pass-through route: Example knative-service.yaml configuration file apiVersion: serving.knative.dev/v1 kind: Service metadata: name: <service_name> namespace: <namespace> 1 annotations: serving.knative.openshift.io/enablePassthrough: "true" 2 spec: template: metadata: annotations: sidecar.istio.io/inject: "true" 3 sidecar.istio.io/rewriteAppHTTPProbers: "true" spec: containers: - image: <image_url> 1 A namespace that is part of the service mesh member roll. 
2 Instruct Knative Serving to generate a pass-through enabled route, so that the certificates you have generated are served through the ingress gateway directly. 3 Inject Service Mesh sidecars into the Knative service pods. Important Always add the annotations from this example to all of your Knative Services to make them work with Service Mesh. Apply the Service resource: USD oc apply -f knative-service.yaml Access your serverless application by using a secure connection that is now trusted by the CA: USD curl --cacert root.crt <service_url> For example, run: Example command USD curl --cacert root.crt https://hello-default.apps.openshift.example.com Example output Hello Openshift! 1.5. Enabling Knative Serving and Knative Eventing metrics when using Service Mesh with mTLS If Service Mesh is enabled with Mutual Transport Layer Security (mTLS), metrics for Knative Serving and Knative Eventing are disabled by default, because Service Mesh prevents Prometheus from scraping metrics. You can enable Knative Serving and Knative Eventing metrics when using Service Mesh and mTLS. Prerequisites You have one of the following permissions to access the cluster: Cluster administrator permissions on OpenShift Container Platform Cluster administrator permissions on Red Hat OpenShift Service on AWS Dedicated administrator permissions on OpenShift Dedicated You have installed the OpenShift CLI ( oc ). You have access to a project with the appropriate roles and permissions to create applications and other workloads. You have installed the OpenShift Serverless Operator, Knative Serving, and Knative Eventing on your cluster. You have installed Red Hat OpenShift Service Mesh with the mTLS functionality enabled. Procedure Specify prometheus as the metrics.backend-destination in the observability spec of the Knative Serving custom resource (CR): apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: config: observability: metrics.backend-destination: "prometheus" ... This step prevents metrics from being disabled by default. Note When you configure ServiceMeshControlPlane with manageNetworkPolicy: false , you must use the serverless.openshift.io/disable-istio-net-policies-generation annotation on KnativeEventing , described in the next section, to ensure proper event delivery. The same mechanism is used for Knative Eventing. To enable metrics for Knative Eventing, you need to specify prometheus as the metrics.backend-destination in the observability spec of the Knative Eventing custom resource (CR) as follows: apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: config: observability: metrics.backend-destination: "prometheus" ... Modify and reapply the default Service Mesh control plane in the istio-system namespace, so that it includes the following spec: ... spec: proxy: networking: trafficControl: inbound: excludedPorts: - 8444 ... 1.6. Disabling the default network policies The OpenShift Serverless Operator generates the network policies by default. To disable the default network policy generation, you can add the serverless.openshift.io/disable-istio-net-policies-generation annotation in the KnativeEventing and KnativeServing custom resources (CRs). 
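Before disabling the generation, you can optionally list the network policies that the OpenShift Serverless Operator has already created; the namespaces shown are the defaults used in this guide: USD oc get networkpolicy -n knative-serving USD oc get networkpolicy -n knative-eventing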
Prerequisites You have one of the following permissions to access the cluster: Cluster administrator permissions on OpenShift Container Platform Cluster administrator permissions on Red Hat OpenShift Service on AWS Dedicated administrator permissions on OpenShift Dedicated You have installed the OpenShift CLI ( oc ). You have access to a project with the appropriate roles and permissions to create applications and other workloads. You have installed the OpenShift Serverless Operator, Knative Serving, and Knative Eventing on your cluster. You have installed Red Hat OpenShift Service Mesh with the mTLS functionality enabled. Procedure Add the serverless.openshift.io/disable-istio-net-policies-generation: "true" annotation to your Knative custom resources. Note The OpenShift Serverless Operator generates the required network policies by default. When you configure ServiceMeshControlPlane with manageNetworkPolicy: false , you must disable the default network policy generation to ensure proper event delivery. To disable the default network policy generation, you can add the serverless.openshift.io/disable-istio-net-policies-generation annotation in the KnativeEventing and KnativeServing custom resources (CRs). Annotate the KnativeEventing CR by running the following command: USD oc edit KnativeEventing -n knative-eventing Example KnativeEventing CR apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing annotations: serverless.openshift.io/disable-istio-net-policies-generation: "true" Annotate the KnativeServing CR by running the following command: USD oc edit KnativeServing -n knative-serving Example KnativeServing CR apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving annotations: serverless.openshift.io/disable-istio-net-policies-generation: "true" 1.7. Improving net-istio memory usage by using secret filtering for Service Mesh By default, the informers implementation for the Kubernetes client-go library fetches all resources of a particular type. This can lead to a substantial overhead when many resources are available, which can cause the Knative net-istio ingress controller to fail on large clusters due to memory leaking. However, a filtering mechanism is available for the Knative net-istio ingress controller, which enables the controllers to only fetch Knative related secrets. The secret filtering is enabled by default on the OpenShift Serverless Operator side. An environment variable, ENABLE_SECRET_INFORMER_FILTERING_BY_CERT_UID=true , is added by default to the net-istio controller pods. Important If you enable secret filtering, you must label all of your secrets with networking.internal.knative.dev/certificate-uid: "<id>" . Otherwise, Knative Serving does not detect them, which leads to failures. You must label both new and existing secrets. Prerequisites You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads. Install Red Hat OpenShift Service Mesh. OpenShift Serverless with Service Mesh only is supported for use with Red Hat OpenShift Service Mesh version 2.0.5 or later. Install the OpenShift Serverless Operator and Knative Serving. Install the OpenShift CLI ( oc ). 
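If you keep secret filtering enabled, label your certificate secrets as described in the Important note above so that Knative Serving can detect them. A labeling sketch, where the secret name, namespace, and UID value are placeholders: USD oc label secret <secret_name> networking.internal.knative.dev/certificate-uid=<id> -n <namespace>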
You can disable the secret filtering by setting the ENABLE_SECRET_INFORMER_FILTERING_BY_CERT_UID variable to false by using the workloads field in the KnativeServing custom resource (CR). Example KnativeServing CR apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: ... workloads: - env: - container: controller envVars: - name: ENABLE_SECRET_INFORMER_FILTERING_BY_CERT_UID value: 'false' name: net-istio-controller
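After the updated KnativeServing resource is reconciled, you can optionally confirm that the variable was set on the net-istio controller workload. A verification sketch, assuming the controller runs as the net-istio-controller deployment in the knative-serving namespace: USD oc -n knative-serving set env deployment/net-istio-controller --list | grep ENABLE_SECRET_INFORMER_FILTERING_BY_CERT_UID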
[ "openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -subj '/O=Example Inc./CN=example.com' -keyout root.key -out root.crt", "openssl req -nodes -newkey rsa:2048 -subj \"/CN=*.apps.openshift.example.com/O=Example Inc.\" -keyout wildcard.key -out wildcard.csr", "openssl x509 -req -days 365 -set_serial 0 -CA root.crt -CAkey root.key -in wildcard.csr -out wildcard.crt", "oc create -n istio-system secret tls wildcard-certs --key=wildcard.key --cert=wildcard.crt", "oc get gateway -A -o jsonpath='{range .items[*]}{@.metadata.namespace}{\"/\"}{@.metadata.name}{\" \"}{@.spec.servers}{\"\\n\"}{end}' | column -t", "knative-serving/knative-ingress-gateway [{\"hosts\":[\"*\"],\"port\":{\"name\":\"https\",\"number\":443,\"protocol\":\"HTTPS\"},\"tls\":{\"credentialName\":\"wildcard-certs\",\"mode\":\"SIMPLE\"}}] knative-serving/knative-local-gateway [{\"hosts\":[\"*\"],\"port\":{\"name\":\"http\",\"number\":8081,\"protocol\":\"HTTP\"}}]", "oc get svc -A | grep istio-ingressgateway", "istio-system istio-ingressgateway ClusterIP 172.30.46.146 none> 15021/TCP,80/TCP,443/TCP 9m50s", "apiVersion: maistra.io/v2 kind: ServiceMeshControlPlane metadata: name: basic namespace: istio-system spec: profiles: - default security: dataPlane: mtls: true 1 techPreview: meshConfig: defaultConfig: terminationDrainDuration: 35s 2 gateways: ingress: service: metadata: labels: knative: ingressgateway 3 proxy: networking: trafficControl: inbound: excludedPorts: 4 - 8444 # metrics - 8022 # serving: wait-for-drain k8s pre-stop hook", "apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system spec: members: 1 - knative-serving - knative-eventing - your-OpenShift-projects", "oc apply -f servicemesh-member-roll.yaml", "apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: knative-ingress-gateway namespace: knative-serving spec: selector: knative: ingressgateway servers: - port: number: 443 name: https protocol: HTTPS hosts: - \"*\" tls: mode: SIMPLE credentialName: <wildcard_certs> 1 --- apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: knative-local-gateway namespace: knative-serving spec: selector: knative: ingressgateway servers: - port: number: 8081 name: https protocol: HTTPS 2 tls: mode: ISTIO_MUTUAL 3 hosts: - \"*\" --- apiVersion: v1 kind: Service metadata: name: knative-local-gateway namespace: istio-system labels: experimental.istio.io/disable-gateway-port-translation: \"true\" spec: type: ClusterIP selector: istio: ingressgateway ports: - name: http2 port: 80 targetPort: 8081", "oc apply -f istio-knative-gateways.yaml", "apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: ingress: istio: enabled: true 1 deployments: 2 - name: activator annotations: \"sidecar.istio.io/inject\": \"true\" \"sidecar.istio.io/rewriteAppHTTPProbers\": \"true\" - name: autoscaler annotations: \"sidecar.istio.io/inject\": \"true\" \"sidecar.istio.io/rewriteAppHTTPProbers\": \"true\" config: istio: 3 gateway.knative-serving.knative-ingress-gateway: istio-ingressgateway.<your-istio-namespace>.svc.cluster.local local-gateway.knative-serving.knative-local-gateway: knative-local-gateway.<your-istio-namespace>.svc.cluster.local", "oc apply -f knative-serving-config.yaml", "apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: config: features: istio: enabled 1 workloads: 2 - name: pingsource-mt-adapter 
annotations: \"sidecar.istio.io/inject\": \"true\" \"sidecar.istio.io/rewriteAppHTTPProbers\": \"true\" - name: imc-dispatcher annotations: \"sidecar.istio.io/inject\": \"true\" \"sidecar.istio.io/rewriteAppHTTPProbers\": \"true\" - name: mt-broker-ingress annotations: \"sidecar.istio.io/inject\": \"true\" \"sidecar.istio.io/rewriteAppHTTPProbers\": \"true\" - name: mt-broker-filter annotations: \"sidecar.istio.io/inject\": \"true\" \"sidecar.istio.io/rewriteAppHTTPProbers\": \"true\"", "oc apply -f knative-eventing-config.yaml", "apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: name: knative-kafka namespace: knative-eventing spec: channel: enabled: true bootstrapServers: <bootstrap_servers> 1 source: enabled: true broker: enabled: true defaultConfig: bootstrapServers: <bootstrap_servers> 2 numPartitions: <num_partitions> replicationFactor: <replication_factor> sink: enabled: true workloads: 3 - name: kafka-controller annotations: \"sidecar.istio.io/inject\": \"true\" \"sidecar.istio.io/rewriteAppHTTPProbers\": \"true\" - name: kafka-broker-receiver annotations: \"sidecar.istio.io/inject\": \"true\" \"sidecar.istio.io/rewriteAppHTTPProbers\": \"true\" - name: kafka-broker-dispatcher annotations: \"sidecar.istio.io/inject\": \"true\" \"sidecar.istio.io/rewriteAppHTTPProbers\": \"true\" - name: kafka-channel-receiver annotations: \"sidecar.istio.io/inject\": \"true\" \"sidecar.istio.io/rewriteAppHTTPProbers\": \"true\" - name: kafka-channel-dispatcher annotations: \"sidecar.istio.io/inject\": \"true\" \"sidecar.istio.io/rewriteAppHTTPProbers\": \"true\" - name: kafka-source-dispatcher annotations: \"sidecar.istio.io/inject\": \"true\" \"sidecar.istio.io/rewriteAppHTTPProbers\": \"true\" - name: kafka-sink-receiver annotations: \"sidecar.istio.io/inject\": \"true\" \"sidecar.istio.io/rewriteAppHTTPProbers\": \"true\"", "oc apply -f knative-kafka-config.yaml", "apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: kafka-cluster namespace: knative-eventing spec: hosts: 1 - <bootstrap_servers_without_port> exportTo: - \".\" ports: 2 - number: 9092 name: tcp-plain protocol: TCP - number: 9093 name: tcp-tls protocol: TCP - number: 9094 name: tcp-sasl-tls protocol: TCP - number: 9095 name: tcp-sasl-tls protocol: TCP - number: 9096 name: tcp-tls protocol: TCP location: MESH_EXTERNAL resolution: NONE", "oc apply -f kafka-cluster-serviceentry.yaml", "apiVersion: serving.knative.dev/v1 kind: Service metadata: name: <service_name> namespace: <namespace> 1 annotations: serving.knative.openshift.io/enablePassthrough: \"true\" 2 spec: template: metadata: annotations: sidecar.istio.io/inject: \"true\" 3 sidecar.istio.io/rewriteAppHTTPProbers: \"true\" spec: containers: - image: <image_url>", "oc apply -f knative-service.yaml", "curl --cacert root.crt <service_url>", "curl --cacert root.crt https://hello-default.apps.openshift.example.com", "Hello Openshift!", "apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: config: observability: metrics.backend-destination: \"prometheus\"", "apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: config: observability: metrics.backend-destination: \"prometheus\"", "spec: proxy: networking: trafficControl: inbound: excludedPorts: - 8444", "oc edit KnativeEventing -n knative-eventing", "apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: 
knative-eventing namespace: knative-eventing annotations: serverless.openshift.io/disable-istio-net-policies-generation: \"true\"", "oc edit KnativeServing -n knative-serving", "apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving annotations: serverless.openshift.io/disable-istio-net-policies-generation: \"true\"", "apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: workloads: - env: - container: controller envVars: - name: ENABLE_SECRET_INFORMER_FILTERING_BY_CERT_UID value: 'false' name: net-istio-controller" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.33/html/integrations/serverless-ossm-setup
Appendix A. Troubleshooting DNF modules
Appendix A. Troubleshooting DNF modules If a DNF module fails to enable, it can mean an incorrect module is enabled. In that case, you have to resolve dependencies manually as follows. List the enabled modules: A.1. Ruby If the Ruby module fails to enable, it can mean an incorrect module is enabled. In that case, you have to resolve dependencies manually as follows: List the enabled modules: If the Ruby 2.5 module has already been enabled, perform a module reset: A.2. PostgreSQL If the PostgreSQL module fails to enable, it can mean an incorrect module is enabled. In that case, you have to resolve dependencies manually as follows: List the enabled modules: If the PostgreSQL 10 module has already been enabled, perform a module reset: If you created a PostgreSQL 10 database, perform an upgrade: Enable the DNF modules: Install the PostgreSQL upgrade package: Perform the upgrade:
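After the upgrade completes, you can list the enabled module streams again to confirm the result, reusing the same command as above: USD dnf module list --enabled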
[ "dnf module list --enabled", "dnf module list --enabled", "dnf module reset ruby", "dnf module list --enabled", "dnf module reset postgresql", "dnf module enable satellite-capsule:el8", "dnf install postgresql-upgrade", "postgresql-setup --upgrade" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/configuring_capsules_with_a_load_balancer/troubleshooting-dnf-modules_load-balancing
Chapter 2. ClusterAutoscaler [autoscaling.openshift.io/v1]
Chapter 2. ClusterAutoscaler [autoscaling.openshift.io/v1] Description ClusterAutoscaler is the Schema for the clusterautoscalers API Type object 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Desired state of ClusterAutoscaler resource status object Most recently observed status of ClusterAutoscaler resource 2.1.1. .spec Description Desired state of ClusterAutoscaler resource Type object Property Type Description balanceSimilarNodeGroups boolean BalanceSimilarNodeGroups enables/disables the --balance-similar-node-groups cluster-autoscaler feature. This feature will automatically identify node groups with the same instance type and the same set of labels and try to keep the respective sizes of those node groups balanced. balancingIgnoredLabels array (string) BalancingIgnoredLabels sets "--balancing-ignore-label <label name>" flag on cluster-autoscaler for each listed label. This option specifies labels that cluster autoscaler should ignore when considering node group similarity. For example, if you have nodes with "topology.ebs.csi.aws.com/zone" label, you can add name of this label here to prevent cluster autoscaler from spliting nodes into different node groups based on its value. ignoreDaemonsetsUtilization boolean Enables/Disables --ignore-daemonsets-utilization CA feature flag. Should CA ignore DaemonSet pods when calculating resource utilization for scaling down. false by default logVerbosity integer Sets the autoscaler log level. Default value is 1, level 4 is recommended for DEBUGGING and level 6 will enable almost everything. This option has priority over log level set by the CLUSTER_AUTOSCALER_VERBOSITY environment variable. maxNodeProvisionTime string Maximum time CA waits for node to be provisioned maxPodGracePeriod integer Gives pods graceful termination time before scaling down podPriorityThreshold integer To allow users to schedule "best-effort" pods, which shouldn't trigger Cluster Autoscaler actions, but only run when there are spare resources available, More info: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-cluster-autoscaler-work-with-pod-priority-and-preemption resourceLimits object Constraints of autoscaling resources scaleDown object Configuration of scale down operation skipNodesWithLocalStorage boolean Enables/Disables --skip-nodes-with-local-storage CA feature flag. If true cluster autoscaler will never delete nodes with pods with local storage, e.g. EmptyDir or HostPath. true by default at autoscaler 2.1.2. .spec.resourceLimits Description Constraints of autoscaling resources Type object Property Type Description cores object Minimum and maximum number of cores in cluster, in the format <min>:<max>. 
Cluster autoscaler will not scale the cluster beyond these numbers. gpus array Minimum and maximum number of different GPUs in cluster, in the format <gpu_type>:<min>:<max>. Cluster autoscaler will not scale the cluster beyond these numbers. Can be passed multiple times. gpus[] object maxNodesTotal integer Maximum number of nodes in all node groups. Cluster autoscaler will not grow the cluster beyond this number. memory object Minimum and maximum number of gigabytes of memory in cluster, in the format <min>:<max>. Cluster autoscaler will not scale the cluster beyond these numbers. 2.1.3. .spec.resourceLimits.cores Description Minimum and maximum number of cores in cluster, in the format <min>:<max>. Cluster autoscaler will not scale the cluster beyond these numbers. Type object Required max min Property Type Description max integer min integer 2.1.4. .spec.resourceLimits.gpus Description Minimum and maximum number of different GPUs in cluster, in the format <gpu_type>:<min>:<max>. Cluster autoscaler will not scale the cluster beyond these numbers. Can be passed multiple times. Type array 2.1.5. .spec.resourceLimits.gpus[] Description Type object Required max min type Property Type Description max integer min integer type string 2.1.6. .spec.resourceLimits.memory Description Minimum and maximum number of gigabytes of memory in cluster, in the format <min>:<max>. Cluster autoscaler will not scale the cluster beyond these numbers. Type object Required max min Property Type Description max integer min integer 2.1.7. .spec.scaleDown Description Configuration of scale down operation Type object Required enabled Property Type Description delayAfterAdd string How long after scale up that scale down evaluation resumes delayAfterDelete string How long after node deletion that scale down evaluation resumes, defaults to scan-interval delayAfterFailure string How long after scale down failure that scale down evaluation resumes enabled boolean Should CA scale down the cluster unneededTime string How long a node should be unneeded before it is eligible for scale down utilizationThreshold string Node utilization level, defined as sum of requested resources divided by capacity, below which a node can be considered for scale down 2.1.8. .status Description Most recently observed status of ClusterAutoscaler resource Type object 2.2. API endpoints The following API endpoints are available: /apis/autoscaling.openshift.io/v1/clusterautoscalers DELETE : delete collection of ClusterAutoscaler GET : list objects of kind ClusterAutoscaler POST : create a ClusterAutoscaler /apis/autoscaling.openshift.io/v1/clusterautoscalers/{name} DELETE : delete a ClusterAutoscaler GET : read the specified ClusterAutoscaler PATCH : partially update the specified ClusterAutoscaler PUT : replace the specified ClusterAutoscaler /apis/autoscaling.openshift.io/v1/clusterautoscalers/{name}/status GET : read status of the specified ClusterAutoscaler PATCH : partially update status of the specified ClusterAutoscaler PUT : replace status of the specified ClusterAutoscaler 2.2.1. /apis/autoscaling.openshift.io/v1/clusterautoscalers Table 2.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ClusterAutoscaler Table 2.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". 
Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ClusterAutoscaler Table 2.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.5. HTTP responses HTTP code Reponse body 200 - OK ClusterAutoscalerList schema 401 - Unauthorized Empty HTTP method POST Description create a ClusterAutoscaler Table 2.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.7. Body parameters Parameter Type Description body ClusterAutoscaler schema Table 2.8. 
HTTP responses HTTP code Reponse body 200 - OK ClusterAutoscaler schema 201 - Created ClusterAutoscaler schema 202 - Accepted ClusterAutoscaler schema 401 - Unauthorized Empty 2.2.2. /apis/autoscaling.openshift.io/v1/clusterautoscalers/{name} Table 2.9. Global path parameters Parameter Type Description name string name of the ClusterAutoscaler Table 2.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a ClusterAutoscaler Table 2.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 2.12. Body parameters Parameter Type Description body DeleteOptions schema Table 2.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ClusterAutoscaler Table 2.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 2.15. HTTP responses HTTP code Reponse body 200 - OK ClusterAutoscaler schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ClusterAutoscaler Table 2.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.17. Body parameters Parameter Type Description body Patch schema Table 2.18. HTTP responses HTTP code Reponse body 200 - OK ClusterAutoscaler schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ClusterAutoscaler Table 2.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.20. Body parameters Parameter Type Description body ClusterAutoscaler schema Table 2.21. HTTP responses HTTP code Reponse body 200 - OK ClusterAutoscaler schema 201 - Created ClusterAutoscaler schema 401 - Unauthorized Empty 2.2.3. /apis/autoscaling.openshift.io/v1/clusterautoscalers/{name}/status Table 2.22. Global path parameters Parameter Type Description name string name of the ClusterAutoscaler Table 2.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. 
HTTP method GET Description read status of the specified ClusterAutoscaler Table 2.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 2.25. HTTP responses HTTP code Reponse body 200 - OK ClusterAutoscaler schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ClusterAutoscaler Table 2.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.27. Body parameters Parameter Type Description body Patch schema Table 2.28. HTTP responses HTTP code Reponse body 200 - OK ClusterAutoscaler schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ClusterAutoscaler Table 2.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.30. Body parameters Parameter Type Description body ClusterAutoscaler schema Table 2.31. HTTP responses HTTP code Reponse body 200 - OK ClusterAutoscaler schema 201 - Created ClusterAutoscaler schema 401 - Unauthorized Empty
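The following minimal manifest is a sketch of how the spec fields described above fit together; the resource limits and scale-down values are illustrative placeholders rather than recommendations, and the metadata name default is assumed here as a conventional choice:
apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default
spec:
  podPriorityThreshold: -10
  resourceLimits:
    # cores and memory follow the <min>:<max> semantics described in .spec.resourceLimits
    maxNodesTotal: 24
    cores:
      min: 8
      max: 128
    memory:
      min: 4
      max: 256
  scaleDown:
    # scale down is evaluated only when enabled is true
    enabled: true
    delayAfterAdd: 10m
    unneededTime: 5m
As with the other resources in this API group, such a manifest is created through a POST to /apis/autoscaling.openshift.io/v1/clusterautoscalers, for example by running oc apply -f <filename>.yaml.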
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/autoscale_apis/clusterautoscaler-autoscaling-openshift-io-v1
Chapter 13. Logging alerts
Chapter 13. Logging alerts 13.1. Default logging alerts Logging alerts are installed as part of the Red Hat OpenShift Logging Operator installation. Alerts depend on metrics exported by the log collection and log storage backends. These metrics are enabled if you selected the option to Enable Operator recommended cluster monitoring on this namespace when installing the Red Hat OpenShift Logging Operator. Default logging alerts are sent to the OpenShift Container Platform monitoring stack Alertmanager in the openshift-monitoring namespace, unless you have disabled the local Alertmanager instance. 13.1.1. Accessing the Alerting UI from the Administrator perspective The Alerting UI is accessible through the Administrator perspective of the OpenShift Container Platform web console. From the Administrator perspective, go to Observe Alerting . The three main pages in the Alerting UI in this perspective are the Alerts , Silences , and Alerting rules pages. 13.1.2. Accessing the Alerting UI from the Developer perspective The Alerting UI is also accessible through the Developer perspective of the OpenShift Container Platform web console. From the Developer perspective, go to Observe and go to the Alerts tab. Select the project that you want to manage alerts for from the Project: list. In this perspective, alerts, silences, and alerting rules are all managed from the Alerts tab. The results shown in the Alerts tab are specific to the selected project. Note In the Developer perspective, you can select from core OpenShift Container Platform and user-defined projects that you have access to in the Project: <project_name> list. However, alerts, silences, and alerting rules relating to core OpenShift Container Platform projects are not displayed if you are not logged in as a cluster administrator. 13.1.3. Logging collector alerts In logging 5.8 and later versions, the following alerts are generated by the Red Hat OpenShift Logging Operator. You can view these alerts in the OpenShift Container Platform web console. Alert Name Message Description Severity CollectorNodeDown Prometheus could not scrape namespace / pod collector component for more than 10m. Collector cannot be scraped. Critical CollectorHighErrorRate value % of records have resulted in an error by namespace / pod collector component. namespace / pod collector component errors are high. Critical CollectorVeryHighErrorRate value % of records have resulted in an error by namespace / pod collector component. namespace / pod collector component errors are very high. Critical 13.1.4. Vector collector alerts In logging 5.7 and later versions, the following alerts are generated by the Vector collector. You can view these alerts in the OpenShift Container Platform web console. Table 13.1. Vector collector alerts Alert Message Description Severity CollectorHighErrorRate <value> of records have resulted in an error by vector <instance>. The number of vector output errors is high, by default more than 10 in the 15 minutes. Warning CollectorNodeDown Prometheus could not scrape vector <instance> for more than 10m. Vector is reporting that Prometheus could not scrape a specific Vector instance. Critical CollectorVeryHighErrorRate <value> of records have resulted in an error by vector <instance>. The number of Vector component errors is very high, by default more than 25 in the 15 minutes. Critical FluentdQueueLengthIncreasing In the last 1h, fluentd <instance> buffer queue length constantly increased more than 1. Current value is <value>. Fluentd is reporting that the queue size is increasing. Warning 13.1.5. 
Fluentd collector alerts The following alerts are generated by the legacy Fluentd log collector. You can view these alerts in the OpenShift Container Platform web console. Table 13.2. Fluentd collector alerts Alert Message Description Severity FluentDHighErrorRate <value> of records have resulted in an error by fluentd <instance>. The number of FluentD output errors is high, by default more than 10 in the 15 minutes. Warning FluentdNodeDown Prometheus could not scrape fluentd <instance> for more than 10m. Fluentd is reporting that Prometheus could not scrape a specific Fluentd instance. Critical FluentdQueueLengthIncreasing In the last 1h, fluentd <instance> buffer queue length constantly increased more than 1. Current value is <value>. Fluentd is reporting that the queue size is increasing. Warning FluentDVeryHighErrorRate <value> of records have resulted in an error by fluentd <instance>. The number of FluentD output errors is very high, by default more than 25 in the 15 minutes. Critical 13.1.6. Elasticsearch alerting rules You can view these alerting rules in the OpenShift Container Platform web console. Table 13.3. Alerting rules Alert Description Severity ElasticsearchClusterNotHealthy The cluster health status has been RED for at least 2 minutes. The cluster does not accept writes, shards may be missing, or the master node hasn't been elected yet. Critical ElasticsearchClusterNotHealthy The cluster health status has been YELLOW for at least 20 minutes. Some shard replicas are not allocated. Warning ElasticsearchDiskSpaceRunningLow The cluster is expected to be out of disk space within the 6 hours. Critical ElasticsearchHighFileDescriptorUsage The cluster is predicted to be out of file descriptors within the hour. Warning ElasticsearchJVMHeapUseHigh The JVM Heap usage on the specified node is high. Alert ElasticsearchNodeDiskWatermarkReached The specified node has hit the low watermark due to low free disk space. Shards can not be allocated to this node anymore. You should consider adding more disk space to the node. Info ElasticsearchNodeDiskWatermarkReached The specified node has hit the high watermark due to low free disk space. Some shards will be re-allocated to different nodes if possible. Make sure more disk space is added to the node or drop old indices allocated to this node. Warning ElasticsearchNodeDiskWatermarkReached The specified node has hit the flood watermark due to low free disk space. Every index that has a shard allocated on this node is enforced a read-only block. The index block must be manually released when the disk use falls below the high watermark. Critical ElasticsearchJVMHeapUseHigh The JVM Heap usage on the specified node is too high. Alert ElasticsearchWriteRequestsRejectionJumps Elasticsearch is experiencing an increase in write rejections on the specified node. This node might not be keeping up with the indexing speed. Warning AggregatedLoggingSystemCPUHigh The CPU used by the system on the specified node is too high. Alert ElasticsearchProcessCPUHigh The CPU used by Elasticsearch on the specified node is too high. Alert 13.1.7. Additional resources Modifying core platform alerting rules 13.2. Custom logging alerts In logging 5.7 and later versions, users can configure the LokiStack deployment to produce customized alerts and recorded metrics. If you want to use customized alerting and recording rules , you must enable the LokiStack ruler component. 
LokiStack log-based alerts and recorded metrics are triggered by providing LogQL expressions to the ruler component. The Loki Operator manages a ruler that is optimized for the selected LokiStack size, which can be 1x.extra-small , 1x.small , or 1x.medium . To provide these expressions, you must create an AlertingRule custom resource (CR) containing Prometheus-compatible alerting rules , or a RecordingRule CR containing Prometheus-compatible recording rules . Administrators can configure log-based alerts or recorded metrics for application , audit , or infrastructure tenants. Users without administrator permissions can configure log-based alerts or recorded metrics for application tenants of the applications that they have access to. Application, audit, and infrastructure alerts are sent by default to the OpenShift Container Platform monitoring stack Alertmanager in the openshift-monitoring namespace, unless you have disabled the local Alertmanager instance. If the Alertmanager that is used to monitor user-defined projects in the openshift-user-workload-monitoring namespace is enabled, application alerts are sent to the Alertmanager in this namespace by default. 13.2.1. Configuring the ruler When the LokiStack ruler component is enabled, users can define a group of LogQL expressions that trigger logging alerts or recorded metrics. Administrators can enable the ruler by modifying the LokiStack custom resource (CR). Prerequisites You have installed the Red Hat OpenShift Logging Operator and the Loki Operator. You have created a LokiStack CR. You have administrator permissions. Procedure Enable the ruler by ensuring that the LokiStack CR contains the following spec configuration: apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: <name> namespace: <namespace> spec: # ... rules: enabled: true 1 selector: matchLabels: openshift.io/<label_name>: "true" 2 namespaceSelector: matchLabels: openshift.io/<label_name>: "true" 3 1 Enable Loki alerting and recording rules in your cluster. 2 Add a custom label that can be added to namespaces where you want to enable the use of logging alerts and metrics. 3 Add a custom label that can be added to namespaces where you want to enable the use of logging alerts and metrics. 13.2.2. Authorizing LokiStack rules RBAC permissions Administrators can allow users to create and manage their own alerting and recording rules by binding cluster roles to usernames. Cluster roles are defined as ClusterRole objects that contain necessary role-based access control (RBAC) permissions for users. In logging 5.8 and later, the following cluster roles for alerting and recording rules are available for LokiStack: Rule name Description alertingrules.loki.grafana.com-v1-admin Users with this role have administrative-level access to manage alerting rules. This cluster role grants permissions to create, read, update, delete, list, and watch AlertingRule resources within the loki.grafana.com/v1 API group. alertingrules.loki.grafana.com-v1-crdview Users with this role can view the definitions of Custom Resource Definitions (CRDs) related to AlertingRule resources within the loki.grafana.com/v1 API group, but do not have permissions for modifying or managing these resources. alertingrules.loki.grafana.com-v1-edit Users with this role have permission to create, update, and delete AlertingRule resources. alertingrules.loki.grafana.com-v1-view Users with this role can read AlertingRule resources within the loki.grafana.com/v1 API group. 
They can inspect configurations, labels, and annotations for existing alerting rules but cannot make any modifications to them. recordingrules.loki.grafana.com-v1-admin Users with this role have administrative-level access to manage recording rules. This cluster role grants permissions to create, read, update, delete, list, and watch RecordingRule resources within the loki.grafana.com/v1 API group. recordingrules.loki.grafana.com-v1-crdview Users with this role can view the definitions of Custom Resource Definitions (CRDs) related to RecordingRule resources within the loki.grafana.com/v1 API group, but do not have permissions for modifying or managing these resources. recordingrules.loki.grafana.com-v1-edit Users with this role have permission to create, update, and delete RecordingRule resources. recordingrules.loki.grafana.com-v1-view Users with this role can read RecordingRule resources within the loki.grafana.com/v1 API group. They can inspect configurations, labels, and annotations for existing recording rules but cannot make any modifications to them. 13.2.2.1. Examples To apply cluster roles for a user, you must bind an existing cluster role to a specific username. Cluster roles can be cluster or namespace scoped, depending on which type of role binding you use. When a RoleBinding object is used, as when using the oc adm policy add-role-to-user command, the cluster role only applies to the specified namespace. When a ClusterRoleBinding object is used, as when using the oc adm policy add-cluster-role-to-user command, the cluster role applies to all namespaces in the cluster. The following example command gives the specified user create, read, update, and delete (CRUD) permissions for alerting rules in a specific namespace in the cluster: Example cluster role binding command for alerting rule CRUD permissions in a specific namespace $ oc adm policy add-role-to-user alertingrules.loki.grafana.com-v1-admin -n <namespace> <username> The following command gives the specified user administrator permissions for alerting rules in all namespaces: Example cluster role binding command for administrator permissions $ oc adm policy add-cluster-role-to-user alertingrules.loki.grafana.com-v1-admin <username> Additional resources Using RBAC to define and apply permissions 13.2.3. Creating a log-based alerting rule with Loki The AlertingRule CR contains a set of specifications and webhook validation definitions to declare groups of alerting rules for a single LokiStack instance. In addition, the webhook validation definition provides support for rule validation conditions: If an AlertingRule CR includes an invalid interval period, it is an invalid alerting rule. If an AlertingRule CR includes an invalid for period, it is an invalid alerting rule. If an AlertingRule CR includes an invalid LogQL expr , it is an invalid alerting rule. If an AlertingRule CR includes two groups with the same name, it is an invalid alerting rule. If none of the above applies, an alerting rule is considered valid. 
Tenant type Valid namespaces for AlertingRule CRs application audit openshift-logging infrastructure openshift-/* , kube-/\* , default Prerequisites Red Hat OpenShift Logging Operator 5.7 and later OpenShift Container Platform 4.13 and later Procedure Create an AlertingRule custom resource (CR): Example infrastructure AlertingRule CR apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: loki-operator-alerts namespace: openshift-operators-redhat 1 labels: 2 openshift.io/<label_name>: "true" spec: tenantID: "infrastructure" 3 groups: - name: LokiOperatorHighReconciliationError rules: - alert: HighPercentageError expr: | 4 sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"} |= "error" [1m])) by (job) / sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"}[1m])) by (job) > 0.01 for: 10s labels: severity: critical 5 annotations: summary: High Loki Operator Reconciliation Errors 6 description: High Loki Operator Reconciliation Errors 7 1 The namespace where this AlertingRule CR is created must have a label matching the LokiStack spec.rules.namespaceSelector definition. 2 The labels block must match the LokiStack spec.rules.selector definition. 3 AlertingRule CRs for infrastructure tenants are only supported in the openshift-* , kube-\* , or default namespaces. 4 The value for kubernetes_namespace_name: must match the value for metadata.namespace . 5 The value of this mandatory field must be critical , warning , or info . 6 This field is mandatory. 7 This field is mandatory. Example application AlertingRule CR apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: app-user-workload namespace: app-ns 1 labels: 2 openshift.io/<label_name>: "true" spec: tenantID: "application" groups: - name: AppUserWorkloadHighError rules: - alert: expr: | 3 sum(rate({kubernetes_namespace_name="app-ns", kubernetes_pod_name=~"podName.*"} |= "error" [1m])) by (job) for: 10s labels: severity: critical 4 annotations: summary: 5 description: 6 1 The namespace where this AlertingRule CR is created must have a label matching the LokiStack spec.rules.namespaceSelector definition. 2 The labels block must match the LokiStack spec.rules.selector definition. 3 Value for kubernetes_namespace_name: must match the value for metadata.namespace . 4 The value of this mandatory field must be critical , warning , or info . 5 The value of this mandatory field is a summary of the rule. 6 The value of this mandatory field is a detailed description of the rule. Apply the AlertingRule CR: USD oc apply -f <filename>.yaml 13.2.4. Additional resources About OpenShift Container Platform monitoring Configuring alert notifications
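For comparison with the AlertingRule examples above, a RecordingRule CR for an application tenant follows the same overall structure. The following is a minimal sketch only: the app-ns namespace, the openshift.io/<label_name> label, the group name, and the record and expr values are illustrative assumptions, not values taken from this documentation.

Example application RecordingRule CR (sketch)

apiVersion: loki.grafana.com/v1
kind: RecordingRule
metadata:
  name: app-user-workload-recording          # hypothetical name
  namespace: app-ns                          # must have a label matching the LokiStack spec.rules.namespaceSelector definition
  labels:
    openshift.io/<label_name>: "true"        # must match the LokiStack spec.rules.selector definition
spec:
  tenantID: "application"
  groups:
    - name: AppUserWorkloadRecording         # hypothetical group name
      interval: 1m
      rules:
        - record: app_ns:error_lines:rate1m  # hypothetical recorded metric name
          expr: |
            sum(rate({kubernetes_namespace_name="app-ns"} |= "error" [1m])) by (job)

Apply the RecordingRule CR in the same way as an AlertingRule CR, for example with oc apply -f <filename>.yaml .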
[ "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: <name> namespace: <namespace> spec: rules: enabled: true 1 selector: matchLabels: openshift.io/<label_name>: \"true\" 2 namespaceSelector: matchLabels: openshift.io/<label_name>: \"true\" 3", "oc adm policy add-role-to-user alertingrules.loki.grafana.com-v1-admin -n <namespace> <username>", "oc adm policy add-cluster-role-to-user alertingrules.loki.grafana.com-v1-admin <username>", "apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: loki-operator-alerts namespace: openshift-operators-redhat 1 labels: 2 openshift.io/<label_name>: \"true\" spec: tenantID: \"infrastructure\" 3 groups: - name: LokiOperatorHighReconciliationError rules: - alert: HighPercentageError expr: | 4 sum(rate({kubernetes_namespace_name=\"openshift-operators-redhat\", kubernetes_pod_name=~\"loki-operator-controller-manager.*\"} |= \"error\" [1m])) by (job) / sum(rate({kubernetes_namespace_name=\"openshift-operators-redhat\", kubernetes_pod_name=~\"loki-operator-controller-manager.*\"}[1m])) by (job) > 0.01 for: 10s labels: severity: critical 5 annotations: summary: High Loki Operator Reconciliation Errors 6 description: High Loki Operator Reconciliation Errors 7", "apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: app-user-workload namespace: app-ns 1 labels: 2 openshift.io/<label_name>: \"true\" spec: tenantID: \"application\" groups: - name: AppUserWorkloadHighError rules: - alert: expr: | 3 sum(rate({kubernetes_namespace_name=\"app-ns\", kubernetes_pod_name=~\"podName.*\"} |= \"error\" [1m])) by (job) for: 10s labels: severity: critical 4 annotations: summary: 5 description: 6", "oc apply -f <filename>.yaml" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/logging/logging-alerts
probe::kprocess.exit
probe::kprocess.exit Name probe::kprocess.exit - Exit from process Synopsis kprocess.exit Values code The exit code of the process Context The process which is terminating. Description Fires when a process terminates. This will always be followed by a kprocess.release, though the latter may be delayed if the process waits in a zombie state.
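As a minimal usage sketch (assuming the systemtap package and matching kernel debuginfo are installed), the probe can be exercised directly from the shell; the output format below is illustrative, not part of the tapset:

stap -e 'probe kprocess.exit { printf("%s (pid %d) exited with code %d\n", execname(), pid(), code) }'

This prints one line per terminating process, combining the code value documented above with the standard execname() and pid() context functions.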
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-kprocess-exit
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/deduplicating_and_compressing_storage/proc_providing-feedback-on-red-hat-documentation_deduplicating-and-compressing-storage
Configuring cloud integrations for Red Hat services
Configuring cloud integrations for Red Hat services Red Hat Hybrid Cloud Console 1-latest How to link your Red Hat account to a public cloud Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_hybrid_cloud_console/1-latest/html-single/configuring_cloud_integrations_for_red_hat_services/index
Chapter 3. What is deployed with AMQ Streams
Chapter 3. What is deployed with AMQ Streams Apache Kafka components are provided for deployment to OpenShift with the AMQ Streams distribution. The Kafka components are generally run as clusters for availability. A typical deployment incorporating Kafka components might include: Kafka cluster of broker nodes ZooKeeper cluster of replicated ZooKeeper instances Kafka Connect cluster for external data connections Kafka MirrorMaker cluster to mirror the Kafka cluster in a secondary cluster Kafka Exporter to extract additional Kafka metrics data for monitoring Kafka Bridge to make HTTP-based requests to the Kafka cluster Not all of these components are mandatory, though you need Kafka and ZooKeeper as a minimum. Some components can be deployed without Kafka, such as MirrorMaker or Kafka Connect. 3.1. Order of deployment The required order of deployment to an OpenShift cluster is as follows: Deploy the Cluster Operator to manage your Kafka cluster Deploy the Kafka cluster with the ZooKeeper cluster, and include the Topic Operator and User Operator in the deployment Optionally deploy: The Topic Operator and User Operator standalone if you did not deploy them with the Kafka cluster Kafka Connect Kafka MirrorMaker Kafka Bridge Components for the monitoring of metrics The Cluster Operator creates OpenShift resources for the components, such as Deployment , Service , and Pod resources. The names of the OpenShift resources are appended with the name specified for a component when it's deployed. For example, a Kafka cluster named my-kafka-cluster has a service named my-kafka-cluster-kafka .
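For illustration only, a minimal Kafka custom resource that the Cluster Operator acts on might look like the following sketch. It reuses the my-kafka-cluster name from the example above; the replica counts, listener definition, and ephemeral storage are assumptions chosen to keep the sketch short, not recommended production settings:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-kafka-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain        # internal listener for clients running inside OpenShift
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral      # assumption for brevity; persistent storage is typical in production
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:          # includes the Topic Operator and User Operator in the deployment
    topicOperator: {}
    userOperator: {}

When the Cluster Operator reconciles this resource, the generated OpenShift resources are prefixed with the cluster name, such as the my-kafka-cluster-kafka service mentioned above.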
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.1/html/deploying_and_upgrading_amq_streams_on_openshift/deploy-options_str
10.5. Programming Languages
10.5. Programming Languages Ruby 2.0.0 Red Hat Enterprise Linux 7 provides the latest Ruby version, 2.0.0. The most notable changes between version 2.0.0 and version 1.8.7, which is included in Red Hat Enterprise Linux 6, are the following: New interpreter, YARV (yet another Ruby VM), which significantly reduces loading times, especially for applications with large trees or files; New and faster "Lazy Sweep" garbage collector; Ruby now supports string encoding; Ruby now supports native threads instead of green threads. For more information about Ruby 2.0.0, consult the upstream pages of the project: https://www.ruby-lang.org/en/ . Python 2.7.5 Red Hat Enterprise Linux 7 includes Python 2.7.5, which is the latest Python 2.7 series release. This version contains many improvements in performance and provides forward compatibility with Python 3. The most notable of the changes in Python 2.7.5 are the following: An ordered dictionary type; A faster I/O module; Dictionary comprehensions and set comprehensions; The sysconfig module. For the full list of changes, see http://docs.python.org/dev/whatsnew/2.7.html Java 7 and Multiple JDKs Red Hat Enterprise Linux 7 features OpenJDK7 as the default Java Development Kit (JDK) and Java 7 as the default Java version. All Java 7 packages ( java-1.7.0-openjdk , java-1.7.0-oracle , java-1.7.1-ibm ) allow installation of multiple versions in parallel, similarly to the kernel. The ability to install versions in parallel allows users to try out multiple versions of the same JDK simultaneously, to tune performance and debug problems if needed. The precise JDK is selectable through /etc/alternatives/ as before. Important The Optional channel must be enabled in order to successfully install the java-1.7.1-ibm-jdbc or java-1.7.1-ibm-plugin packages from the Supplementary channel. The Optional channel contains packages that satisfy dependencies of the desired Java packages. Before installing packages from the Optional and Supplementary channels, see Scope of Coverage Details . Information on subscribing to the Optional and Supplementary channels can be found in the Red Hat Knowledgebase solution How to access Optional and Supplementary channels .
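To make the Python 2.7 additions above concrete, the following short snippet (Python 2.7 syntax, sample data invented for illustration) exercises the ordered dictionary type, dictionary and set comprehensions, and the sysconfig module:

# Python 2.7: OrderedDict, dict/set comprehensions, sysconfig
from collections import OrderedDict   # ordered dictionary type new in 2.7
import sysconfig                      # sysconfig module new in 2.7

# An OrderedDict preserves insertion order, unlike a plain dict in Python 2
versions = OrderedDict([("ruby", "2.0.0"), ("python", "2.7.5"), ("java", "1.7.0")])

# Dictionary and set comprehensions are new syntax in 2.7
name_lengths = {name: len(ver) for name, ver in versions.items()}
major_versions = {ver.split(".")[0] for ver in versions.values()}

print versions.keys()                 # ['ruby', 'python', 'java'] - insertion order kept
print name_lengths
print major_versions
print sysconfig.get_python_version()  # '2.7'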
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.0_release_notes/sect-red_hat_enterprise_linux-7.0_release_notes-compiler_and_tools-programming_languages
Post-installation configuration
Post-installation configuration OpenShift Container Platform 4.10 Day 2 operations for OpenShift Container Platform Red Hat OpenShift Documentation Team
[ "oc get dnses.config.openshift.io/cluster -o yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: \"2019-10-25T18:27:09Z\" generation: 2 name: cluster resourceVersion: \"37966\" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>: owned publicZone: id: Z2XXXXXXXXXXA4 status: {}", "oc patch dnses.config.openshift.io/cluster --type=merge --patch='{\"spec\": {\"publicZone\": null}}' dns.config.openshift.io/cluster patched", "oc get dnses.config.openshift.io/cluster -o yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: \"2019-10-25T18:27:09Z\" generation: 2 name: cluster resourceVersion: \"37966\" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>-wfpg4: owned status: {}", "oc replace --force --wait --filename - <<EOF apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: default spec: endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal EOF", "ingresscontroller.operator.openshift.io \"default\" deleted ingresscontroller.operator.openshift.io/default replaced", "oc get machine -n openshift-machine-api", "NAME STATE TYPE REGION ZONE AGE lk4pj-master-0 running m4.xlarge us-east-1 us-east-1a 17m lk4pj-master-1 running m4.xlarge us-east-1 us-east-1b 17m lk4pj-master-2 running m4.xlarge us-east-1 us-east-1a 17m lk4pj-worker-us-east-1a-5fzfj running m4.xlarge us-east-1 us-east-1a 15m lk4pj-worker-us-east-1a-vbghs running m4.xlarge us-east-1 us-east-1a 15m lk4pj-worker-us-east-1b-zgpzg running m4.xlarge us-east-1 us-east-1b 15m", "oc edit machines -n openshift-machine-api <master_name> 1", "spec: providerSpec: value: loadBalancers: - name: lk4pj-ext 1 type: network 2 - name: lk4pj-int type: network", "oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{\"spec\":{\"endpointPublishingStrategy\":{\"type\":\"LoadBalancerService\",\"loadBalancer\":{\"scope\":\"Internal\"}}}}'", "oc -n openshift-ingress-operator get ingresscontrollers/default -o yaml", "oc -n openshift-ingress delete services/router-default", "bmc: address: credentialsName: disableCertificateVerification:", "image: url: checksum: checksumType: format:", "raid: hardwareRAIDVolumes: softwareRAIDVolumes:", "spec: raid: hardwareRAIDVolume: []", "rootDeviceHints: deviceName: hctl: model: vendor: serialNumber: minSizeGigabytes: wwn: wwnWithExtension: wwnVendorExtension: rotational:", "hardware: cpu arch: model: clockMegahertz: flags: count:", "hardware: firmware:", "hardware: nics: - ip: name: mac: speedGbps: vlans: vlanId: pxe:", "hardware: ramMebibytes:", "hardware: storage: - name: rotational: sizeBytes: serialNumber:", "hardware: systemVendor: manufacturer: productName: serialNumber:", "provisioning: state: id: image: raid: firmware: rootDeviceHints:", "oc get bmh -n openshift-machine-api -o yaml", "oc get bmh -n openshift-machine-api", "oc get bmh <host_name> -n openshift-machine-api -o yaml", "apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: creationTimestamp: \"2022-06-16T10:48:33Z\" finalizers: - baremetalhost.metal3.io generation: 2 name: openshift-worker-0 namespace: openshift-machine-api resourceVersion: 
\"30099\" uid: 1513ae9b-e092-409d-be1b-ad08edeb1271 spec: automatedCleaningMode: metadata bmc: address: redfish://10.46.61.19:443/redfish/v1/Systems/1 credentialsName: openshift-worker-0-bmc-secret disableCertificateVerification: true bootMACAddress: 48:df:37:c7:f7:b0 bootMode: UEFI consumerRef: apiVersion: machine.openshift.io/v1beta1 kind: Machine name: ocp-edge-958fk-worker-0-nrfcg namespace: openshift-machine-api customDeploy: method: install_coreos hardwareProfile: unknown online: true rootDeviceHints: deviceName: /dev/sda userData: name: worker-user-data-managed namespace: openshift-machine-api status: errorCount: 0 errorMessage: \"\" goodCredentials: credentials: name: openshift-worker-0-bmc-secret namespace: openshift-machine-api credentialsVersion: \"16120\" hardware: cpu: arch: x86_64 clockMegahertz: 2300 count: 64 flags: - 3dnowprefetch - abm - acpi - adx - aes model: Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz firmware: bios: date: 10/26/2020 vendor: HPE version: U30 hostname: openshift-worker-0 nics: - mac: 48:df:37:c7:f7:b3 model: 0x8086 0x1572 name: ens1f3 ramMebibytes: 262144 storage: - hctl: \"0:0:0:0\" model: VK000960GWTTB name: /dev/sda sizeBytes: 960197124096 type: SSD vendor: ATA systemVendor: manufacturer: HPE productName: ProLiant DL380 Gen10 (868703-B21) serialNumber: CZ200606M3 hardwareProfile: unknown lastUpdated: \"2022-06-16T11:41:42Z\" operationalStatus: OK poweredOn: true provisioning: ID: 217baa14-cfcf-4196-b764-744e184a3413 bootMode: UEFI customDeploy: method: install_coreos image: url: \"\" raid: hardwareRAIDVolumes: null softwareRAIDVolumes: [] rootDeviceHints: deviceName: /dev/sda state: provisioned triedCredentials: credentials: name: openshift-worker-0-bmc-secret namespace: openshift-machine-api credentialsVersion: \"16120\"", "spec: settings: ProcTurboMode: Disabled 1", "status: conditions: - lastTransitionTime: message: observedGeneration: reason: status: type:", "status: schema: name: namespace: lastUpdated:", "status: settings:", "oc get hfs -n openshift-machine-api -o yaml", "oc get hfs -n openshift-machine-api", "oc get hfs <host_name> -n openshift-machine-api -o yaml", "oc get hfs -n openshift-machine-api", "oc edit hfs <host_name> -n openshift-machine-api", "spec: settings: name: value 1", "oc get bmh <host_name> -n openshift-machine name", "oc annotate machine <machine_name> machine.openshift.io/cluster-api-delete-machine=yes -n openshift-machine-api", "oc get nodes", "oc get machinesets -n openshift-machine-api", "oc scale machineset <machineset_name> -n openshift-machine-api --replicas=<n-1>", "oc scale machineset <machineset_name> -n openshift-machine-api --replicas=<n>", "oc get hfs -n openshift-machine-api", "oc describe hfs <host_name> -n openshift-machine-api", "Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ValidationFailed 2m49s metal3-hostfirmwaresettings-controller Invalid BIOS setting: Setting ProcTurboMode is invalid, unknown enumeration value - Foo", "<BIOS_setting_name> attribute_type: allowable_values: lower_bound: upper_bound: min_length: max_length: read_only: unique:", "oc get firmwareschema -n openshift-machine-api", "oc get firmwareschema <instance_name> -n openshift-machine-api -o yaml", "oc get mcp worker", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-404caf3180818d8ac1f50c32f14b57c3 False True True 2 1 1 1 5h51m", "oc describe mcp worker", "Last Transition Time: 2021-12-20T18:54:00Z Message: Node 
ci-ln-j4h8nkb-72292-pxqxz-worker-a-fjks4 is reporting: \"content mismatch for file \\\"/etc/mco-test-file\\\"\" 1 Reason: 1 nodes are reporting degraded status on sync Status: True Type: NodeDegraded 2", "oc describe node/ci-ln-j4h8nkb-72292-pxqxz-worker-a-fjks4", "Annotations: cloud.network.openshift.io/egress-ipconfig: [{\"interface\":\"nic0\",\"ifaddr\":{\"ipv4\":\"10.0.128.0/17\"},\"capacity\":{\"ip\":10}}] csi.volume.kubernetes.io/nodeid: {\"pd.csi.storage.gke.io\":\"projects/openshift-gce-devel-ci/zones/us-central1-a/instances/ci-ln-j4h8nkb-72292-pxqxz-worker-a-fjks4\"} machine.openshift.io/machine: openshift-machine-api/ci-ln-j4h8nkb-72292-pxqxz-worker-a-fjks4 machineconfiguration.openshift.io/controlPlaneTopology: HighlyAvailable machineconfiguration.openshift.io/currentConfig: rendered-worker-67bd55d0b02b0f659aef33680693a9f9 machineconfiguration.openshift.io/desiredConfig: rendered-worker-67bd55d0b02b0f659aef33680693a9f9 machineconfiguration.openshift.io/reason: content mismatch for file \"/etc/mco-test-file\" 1 machineconfiguration.openshift.io/state: Degraded 2", "oc get machineconfigpool", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-06c9c4... True False False 3 3 3 0 4h42m worker rendered-worker-f4b64... False True False 3 2 2 0 4h42m", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-06c9c4... True False False 3 3 3 0 4h42m worker rendered-worker-c1b41a... False True False 3 2 3 0 4h42m", "oc describe mcp worker", "Degraded Machine Count: 0 Machine Count: 3 Observed Generation: 2 Ready Machine Count: 3 Unavailable Machine Count: 0 Updated Machine Count: 3 Events: <none>", "Degraded Machine Count: 0 Machine Count: 3 Observed Generation: 2 Ready Machine Count: 2 Unavailable Machine Count: 1 Updated Machine Count: 3", "oc get machineconfigs", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m 00-worker 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m 01-master-container-runtime 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m 01-master-kubelet 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m rendered-master-dde... 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m rendered-worker-fde... 2c9371fbb673b97a6fe8b1c52... 
3.2.0 5h18m", "oc describe machineconfigs 01-master-kubelet", "Name: 01-master-kubelet Spec: Config: Ignition: Version: 3.2.0 Storage: Files: Contents: Source: data:, Mode: 420 Overwrite: true Path: /etc/kubernetes/cloud.conf Contents: Source: data:,kind%3A%20KubeletConfiguration%0AapiVersion%3A%20kubelet.config.k8s.io%2Fv1beta1%0Aauthentication%3A%0A%20%20x509%3A%0A%20%20%20%20clientCAFile%3A%20%2Fetc%2Fkubernetes%2Fkubelet-ca.crt%0A%20%20anonymous Mode: 420 Overwrite: true Path: /etc/kubernetes/kubelet.conf Systemd: Units: Contents: [Unit] Description=Kubernetes Kubelet Wants=rpc-statd.service network-online.target crio.service After=network-online.target crio.service ExecStart=/usr/bin/hyperkube kubelet --config=/etc/kubernetes/kubelet.conf \\", "oc delete -f ./myconfig.yaml", "variant: openshift version: 4.10.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony", "butane 99-worker-chrony.bu -o 99-worker-chrony.yaml", "oc apply -f ./99-worker-chrony.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: <node_role> 1 name: disable-chronyd spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=NTP client/server Documentation=man:chronyd(8) man:chrony.conf(5) After=ntpdate.service sntp.service ntpd.service Conflicts=ntpd.service systemd-timesyncd.service ConditionCapability=CAP_SYS_TIME [Service] Type=forking PIDFile=/run/chrony/chronyd.pid EnvironmentFile=-/etc/sysconfig/chronyd ExecStart=/usr/sbin/chronyd USDOPTIONS ExecStartPost=/usr/libexec/chrony-helper update-daemon PrivateTmp=yes ProtectHome=yes ProtectSystem=full [Install] WantedBy=multi-user.target enabled: false name: \"chronyd.service\"", "oc create -f disable-chronyd.yaml", "oc get MachineConfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 05-worker-kernelarg-selinuxpermissive 2 spec: kernelArguments: - enforcing=0 3", "oc create -f 05-worker-kernelarg-selinuxpermissive.yaml", "oc get MachineConfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 
01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 05-worker-kernelarg-selinuxpermissive 3.2.0 105s 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.23.0 ip-10-0-136-243.ec2.internal Ready master 34m v1.23.0 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.23.0 ip-10-0-142-249.ec2.internal Ready master 34m v1.23.0 ip-10-0-153-11.ec2.internal Ready worker 28m v1.23.0 ip-10-0-153-150.ec2.internal Ready master 34m v1.23.0", "oc debug node/ip-10-0-141-105.ec2.internal", "Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline BOOT_IMAGE=/ostree/rhcos-... console=tty0 console=ttyS0,115200n8 rootflags=defaults,prjquota rw root=UUID=fd0... ostree=/ostree/boot.0/rhcos/16 coreos.oem.id=qemu coreos.oem.id=ec2 ignition.platform.id=ec2 enforcing=0 sh-4.2# exit", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"master\" name: 99-master-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"worker\" name: 99-worker-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'", "oc create -f ./99-worker-kargs-mpath.yaml", "oc get MachineConfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-kargs-mpath 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 105s 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.23.0 ip-10-0-136-243.ec2.internal Ready master 34m v1.23.0 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.23.0 ip-10-0-142-249.ec2.internal Ready master 34m v1.23.0 ip-10-0-153-11.ec2.internal Ready worker 28m v1.23.0 ip-10-0-153-150.ec2.internal Ready master 34m v1.23.0", "oc debug node/ip-10-0-141-105.ec2.internal", "Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline rd.multipath=default 
root=/dev/disk/by-label/dm-mpath-root sh-4.2# exit", "oc describe node <node-name>", "Name: ci-ln-v05w5m2-72292-5s9ht-worker-a-r6fpg Roles: worker Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/instance-type=n1-standard-4 beta.kubernetes.io/os=linux failure-domain.beta.kubernetes.io/region=us-central1 failure-domain.beta.kubernetes.io/zone=us-central1-a kubernetes.io/arch=amd64 kubernetes.io/hostname=ci-ln-v05w5m2-72292-5s9ht-worker-a-r6fpg kubernetes.io/os=linux node-role.kubernetes.io/worker= 1 #", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"worker\" 1 name: worker-enable-cgroups-v2 spec: kernelArguments: - systemd.unified_cgroup_hierarchy=1 2 - cgroup_no_v1=\"all\" 3", "oc create -f worker-enable-cgroups-v2.yaml", "oc get MachineConfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m worker-enable-cgroups-v2 3.2.0 10s", "oc get nodes", "NAME STATUS ROLES AGE VERSION ci-ln-fm1qnwt-72292-99kt6-master-0 Ready master 58m v1.23.0 ci-ln-fm1qnwt-72292-99kt6-master-1 Ready master 58m v1.23.0 ci-ln-fm1qnwt-72292-99kt6-master-2 Ready master 58m v1.23.0 ci-ln-fm1qnwt-72292-99kt6-worker-a-h5gt4 Ready,SchedulingDisabled worker 48m v1.23.0 ci-ln-fm1qnwt-72292-99kt6-worker-b-7vtmd Ready worker 48m v1.23.0 ci-ln-fm1qnwt-72292-99kt6-worker-c-rhzkv Ready worker 48m v1.23.0", "oc debug node/<node_name>", "cgroup.controllers cgroup.stat cpuset.cpus.effective io.stat pids cgroup.max.depth cgroup.subtree_control cpuset.mems.effective kubepods.slice system.slice cgroup.max.descendants cgroup.threads init.scope memory.pressure user.slice cgroup.procs cpu.pressure io.pressure memory.stat", "cat << EOF > 99-worker-realtime.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"worker\" name: 99-worker-realtime spec: kernelType: realtime EOF", "oc create -f 99-worker-realtime.yaml", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-143-147.us-east-2.compute.internal Ready worker 103m v1.23.0 ip-10-0-146-92.us-east-2.compute.internal Ready worker 101m v1.23.0 ip-10-0-169-2.us-east-2.compute.internal Ready worker 102m v1.23.0", "oc debug node/ip-10-0-143-147.us-east-2.compute.internal", "Starting pod/ip-10-0-143-147us-east-2computeinternal-debug To use host binaries, run `chroot /host` sh-4.4# uname -a Linux <worker_node> 4.18.0-147.3.1.rt24.96.el8_1.x86_64 #1 SMP PREEMPT RT Wed Nov 27 18:29:55 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux", "oc delete -f 99-worker-realtime.yaml", "variant: openshift version: 4.10.0 metadata: name: 40-worker-custom-journald labels: machineconfiguration.openshift.io/role: worker storage: files: - 
path: /etc/systemd/journald.conf mode: 0644 overwrite: true contents: inline: | # Disable rate limiting RateLimitInterval=1s RateLimitBurst=10000 Storage=volatile Compress=no MaxRetentionSec=30s", "butane 40-worker-custom-journald.bu -o 40-worker-custom-journald.yaml", "oc apply -f 40-worker-custom-journald.yaml", "oc get machineconfigpool NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-35 True False False 3 3 3 0 34m worker rendered-worker-d8 False True False 3 1 1 0 34m", "oc get node | grep worker ip-10-0-0-1.us-east-2.compute.internal Ready worker 39m v0.0.0-master+USDFormat:%hUSD oc debug node/ip-10-0-0-1.us-east-2.compute.internal Starting pod/ip-10-0-141-142us-east-2computeinternal-debug sh-4.2# chroot /host sh-4.4# cat /etc/systemd/journald.conf Disable rate limiting RateLimitInterval=1s RateLimitBurst=10000 Storage=volatile Compress=no MaxRetentionSec=30s sh-4.4# exit", "cat << EOF > 80-extensions.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 80-worker-extensions spec: config: ignition: version: 3.2.0 extensions: - usbguard EOF", "oc create -f 80-extensions.yaml", "oc get machineconfig 80-worker-extensions", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 80-worker-extensions 3.2.0 57s", "oc get machineconfigpool", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-35 True False False 3 3 3 0 34m worker rendered-worker-d8 False True False 3 1 1 0 34m", "oc get node | grep worker", "NAME STATUS ROLES AGE VERSION ip-10-0-169-2.us-east-2.compute.internal Ready worker 102m v1.23.0", "oc debug node/ip-10-0-169-2.us-east-2.compute.internal", "To use host binaries, run `chroot /host` sh-4.4# chroot /host sh-4.4# rpm -q usbguard usbguard-0.7.4-4.el8.x86_64.rpm", "variant: openshift version: 4.10.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-worker-firmware-blob storage: files: - path: /var/lib/firmware/<package_name> 1 contents: local: <package_name> 2 mode: 0644 3 openshift: kernel_arguments: - 'firmware_class.path=/var/lib/firmware' 4", "butane 98-worker-firmware-blob.bu -o 98-worker-firmware-blob.yaml --files-dir <directory_including_package_name>", "oc apply -f 98-worker-firmware-blob.yaml", "oc get kubeletconfig", "NAME AGE set-max-pods 15m", "oc get mc | grep kubelet", "99-worker-generated-kubelet-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m", "oc describe machineconfigpool <name>", "oc describe machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: set-max-pods 1", "oc label machineconfigpool worker custom-kubelet=set-max-pods", "oc get machineconfig", "oc describe node <node_name>", "oc describe node ci-ln-5grqprb-f76d1-ncnqq-worker-a-mdv94", "Allocatable: attachable-volumes-aws-ebs: 25 cpu: 3500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 15341844Ki pods: 250", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods 1 kubeletConfig: maxPods: 500 2", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods 
kubeletConfig: maxPods: <pod_count> kubeAPIBurst: <burst_rate> kubeAPIQPS: <QPS>", "oc label machineconfigpool worker custom-kubelet=large-pods", "oc create -f change-maxPods-cr.yaml", "oc get kubeletconfig", "NAME AGE set-max-pods 15m", "oc describe node <node_name>", "Allocatable: attachable-volumes-gce-pd: 127 cpu: 3500m ephemeral-storage: 123201474766 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 14225400Ki pods: 500 1", "oc get kubeletconfigs set-max-pods -o yaml", "spec: kubeletConfig: maxPods: 500 machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods status: conditions: - lastTransitionTime: \"2021-06-30T17:04:07Z\" message: Success status: \"True\" type: Success", "oc get ctrcfg", "NAME AGE ctr-pid 24m ctr-overlay 15m ctr-level 5m45s", "oc get mc | grep container", "01-master-container-runtime b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 57m 01-worker-container-runtime b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 57m 99-worker-generated-containerruntime b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m 99-worker-generated-containerruntime-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 17m 99-worker-generated-containerruntime-2 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 7m26s", "apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: overlay-size spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: '' 1 containerRuntimeConfig: pidsLimit: 2048 2 logLevel: debug 3 overlaySize: 8G 4 logSizeMax: \"-1\" 5", "apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: overlay-size spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: '' 1 containerRuntimeConfig: 2 pidsLimit: 2048 logLevel: debug overlaySize: 8G logSizeMax: \"-1\"", "oc create -f <file_name>.yaml", "oc get ContainerRuntimeConfig", "NAME AGE overlay-size 3m19s", "oc get machineconfigs | grep containerrun", "99-worker-generated-containerruntime 2c9371fbb673b97a6fe8b1c52691999ed3a1bfc2 3.2.0 31s", "oc get mcp worker", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-169 False True False 3 1 1 0 9h", "oc debug node/<node_name>", "sh-4.4# chroot /host", "sh-4.4# crio config | egrep 'log_level|pids_limit|log_size_max'", "pids_limit = 2048 log_size_max = -1 log_level = \"debug\"", "sh-4.4# head -n 7 /etc/containers/storage.conf", "[storage] driver = \"overlay\" runroot = \"/var/run/containers/storage\" graphroot = \"/var/lib/containers/storage\" [storage.options] additionalimagestores = [] size = \"8G\"", "apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: overlay-size spec: machineConfigPoolSelector: matchLabels: custom-crio: overlay-size containerRuntimeConfig: pidsLimit: 2048 logLevel: debug overlaySize: 8G", "oc apply -f overlaysize.yml", "oc edit machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2020-07-09T15:46:34Z\" generation: 3 labels: custom-crio: overlay-size machineconfiguration.openshift.io/mco-built-in: \"\"", "oc get machineconfigs", "99-worker-generated-containerruntime 4173030d89fbf4a7a0976d1665491a4d9a6e54f1 3.2.0 7m42s rendered-worker-xyz 4173030d89fbf4a7a0976d1665491a4d9a6e54f1 3.2.0 7m36s", "oc get mcp worker", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT 
DEGRADEDMACHINECOUNT AGE worker rendered-worker-xyz False True False 3 2 2 0 20h", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-xyz True False False 3 3 3 0 20h", "head -n 7 /etc/containers/storage.conf [storage] driver = \"overlay\" runroot = \"/var/run/containers/storage\" graphroot = \"/var/lib/containers/storage\" [storage.options] additionalimagestores = [] size = \"8G\"", "~ USD df -h Filesystem Size Used Available Use% Mounted on overlay 8.0G 8.0K 8.0G 0% /", "oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' ><pull_secret_location> 1", "oc registry login --registry=\"<registry>\" \\ 1 --auth-basic=\"<username>:<password>\" \\ 2 --to=<pull_secret_location> 3", "oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1", "oc get machinesets -n openshift-machine-api", "oc get machine -n openshift-machine-api", "oc annotate machine/<machine_name> -n openshift-machine-api machine.openshift.io/cluster-api-delete-machine=\"true\"", "oc scale --replicas=2 machineset <machineset> -n openshift-machine-api", "oc edit machineset <machineset> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2", "oc get machines", "spec: deletePolicy: <delete_policy> replicas: <desired_replica_count>", "oc edit scheduler cluster", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: type=user-node,region=east 1 mastersSchedulable: false", "oc patch MachineSet <name> --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\"=\"<value>\",\"<key>\"=\"<value>\"}}]' -n openshift-machine-api 1", "oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: template: spec: metadata: labels: region: \"east\" type: \"user-node\"", "oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: metadata: spec: metadata: labels: region: east type: user-node", "oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api", "oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api", "oc get nodes -l <key>=<value>", "oc get nodes -l type=user-node", "NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.23.0", "oc label nodes <name> <key>=<value>", "oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 type=user-node region=east", "kind: Node apiVersion: v1 metadata: name: <node_name> labels: type: \"user-node\" region: \"east\"", "oc get nodes -l <key>=<value>,<key>=<value>", "oc get nodes -l type=user-node,region=east", "NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.23.0", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m 
agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc label node <node-name> node-role.kubernetes.io/app=\"\"", "oc label node <node-name> node-role.kubernetes.io/infra=\"\"", "oc get nodes", "oc edit scheduler cluster", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: topology.kubernetes.io/region=us-east-1 1", "oc label node <node_name> <label>", "oc label node ci-ln-n8mqwr2-f76d1-xscn2-worker-c-6fmtx node-role.kubernetes.io/infra=", "cat infra.mcp.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: \"\" 2", "oc create -f infra.mcp.yaml", "oc get machineconfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED 00-master 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 00-worker 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-1ae2a1e0-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-ssh 3.2.0 31d 99-worker-1ae64748-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-worker-ssh 3.2.0 31d rendered-infra-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 23m rendered-master-072d4b2da7f88162636902b074e9e28e 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-master-3e88ec72aed3886dec061df60d16d1af 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-master-419bee7de96134963a15fdf9dd473b25 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-master-53f5c91c7661708adce18739cc0f40fb 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d rendered-master-a6a357ec18e5bce7f5ac426fc7c5ffcd 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-master-dc7f874ec77fc4b969674204332da037 5b6fb8349a29735e48446d435962dec4547d3090 
3.2.0 31d rendered-worker-1a75960c52ad18ff5dfa6674eb7e533d 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-2640531be11ba43c61d72e82dc634ce6 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-worker-4f110718fe88e5f349987854a1147755 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-worker-afc758e194d6188677eb837842d3b379 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-worker-daa08cc1e8f5fcdeba24de60cd955cc3 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d", "cat infra.mc.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 51-infra labels: machineconfiguration.openshift.io/role: infra 1 spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/infratest mode: 0644 contents: source: data:,infra", "oc create -f infra.mc.yaml", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE infra rendered-infra-60e35c2e99f42d976e084fa94da4d0fc True False False 1 1 1 0 4m20s master rendered-master-9360fdb895d4c131c7c4bebbae099c90 True False False 3 3 3 0 91m worker rendered-worker-60e35c2e99f42d976e084fa94da4d0fc True False False 2 2 2 0 91m", "oc describe nodes <node_name>", "describe node ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Name: ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Roles: worker Taints: node-role.kubernetes.io/infra:NoSchedule", "oc adm taint nodes <node_name> <key>=<value>:<effect>", "oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoExecute", "kind: Node apiVersion: v1 metadata: name: <node_name> labels: spec: taints: - key: node-role.kubernetes.io/infra effect: NoExecute value: reserved", "tolerations: - effect: NoExecute 1 key: node-role.kubernetes.io/infra 2 operator: Exists 3 value: reserved 4", "oc get ingresscontroller default -n openshift-ingress-operator -o yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: 2019-04-18T12:35:39Z finalizers: - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller generation: 1 name: default namespace: openshift-ingress-operator resourceVersion: \"11341\" selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default uid: 79509e05-61d6-11e9-bc55-02ce4781844a spec: {} status: availableReplicas: 2 conditions: - lastTransitionTime: 2019-04-18T12:36:15Z status: \"True\" type: Available domain: apps.<cluster>.example.com endpointPublishingStrategy: type: LoadBalancerService selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default", "oc edit ingresscontroller default -n openshift-ingress-operator", "spec: nodePlacement: nodeSelector: 1 matchLabels: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved", "oc get pod -n openshift-ingress -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none> router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none>", "oc get node <node_name> 1", "NAME STATUS ROLES AGE VERSION ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.23.0", "oc get configs.imageregistry.operator.openshift.io/cluster 
-o yaml", "apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: creationTimestamp: 2019-02-05T13:52:05Z finalizers: - imageregistry.operator.openshift.io/finalizer generation: 1 name: cluster resourceVersion: \"56174\" selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster uid: 36fd3724-294d-11e9-a524-12ffeee2931b spec: httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623 logging: 2 managementState: Managed proxy: {} replicas: 1 requests: read: {} write: {} storage: s3: bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c region: us-east-1 status:", "oc edit configs.imageregistry.operator.openshift.io/cluster", "spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: namespaces: - openshift-image-registry topologyKey: kubernetes.io/hostname weight: 100 logLevel: Normal managementState: Managed nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved", "oc get pods -o wide -n openshift-image-registry", "oc describe node <node_name>", "oc edit configmap cluster-monitoring-config -n openshift-monitoring", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |+ alertmanagerMain: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusK8s: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusOperator: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute grafana: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute k8sPrometheusAdapter: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute kubeStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute telemeterClient: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute openshiftStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute thanosQuerier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute", "watch 'oc get pod -n openshift-monitoring -o wide'", "oc delete pod -n openshift-monitoring <pod>", "oc edit ClusterLogging instance", 
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging spec: collection: logs: fluentd: resources: null type: fluentd logStore: elasticsearch: nodeCount: 3 nodeSelector: 1 node-role.kubernetes.io/infra: '' tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved redundancyPolicy: SingleRedundancy resources: limits: cpu: 500m memory: 16Gi requests: cpu: 500m memory: 16Gi storage: {} type: elasticsearch managementState: Managed visualization: kibana: nodeSelector: 2 node-role.kubernetes.io/infra: '' tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved proxy: resources: null replicas: 1 resources: null type: kibana", "oc get pod kibana-5b8bdf44f9-ccpq9 -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-5b8bdf44f9-ccpq9 2/2 Running 0 27s 10.129.2.18 ip-10-0-147-79.us-east-2.compute.internal <none> <none>", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-133-216.us-east-2.compute.internal Ready master 60m v1.23.0 ip-10-0-139-146.us-east-2.compute.internal Ready master 60m v1.23.0 ip-10-0-139-192.us-east-2.compute.internal Ready worker 51m v1.23.0 ip-10-0-139-241.us-east-2.compute.internal Ready worker 51m v1.23.0 ip-10-0-147-79.us-east-2.compute.internal Ready worker 51m v1.23.0 ip-10-0-152-241.us-east-2.compute.internal Ready master 60m v1.23.0 ip-10-0-139-48.us-east-2.compute.internal Ready infra 51m v1.23.0", "oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml", "kind: Node apiVersion: v1 metadata: name: ip-10-0-139-48.us-east-2.compute.internal selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751 resourceVersion: '39083' creationTimestamp: '2020-04-13T19:07:55Z' labels: node-role.kubernetes.io/infra: ''", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging spec: visualization: kibana: nodeSelector: 1 node-role.kubernetes.io/infra: '' proxy: resources: null replicas: 1 resources: null type: kibana", "oc get pods", "NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 29m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 28m fluentd-42dzz 1/1 Running 0 28m fluentd-d74rq 1/1 Running 0 28m fluentd-m5vr9 1/1 Running 0 28m fluentd-nkxl7 1/1 Running 0 28m fluentd-pdvqb 1/1 Running 0 28m fluentd-tflh6 1/1 Running 0 28m kibana-5b8bdf44f9-ccpq9 2/2 Terminating 0 4m11s kibana-7d85dcffc8-bfpfp 2/2 Running 0 33s", "oc get pod kibana-7d85dcffc8-bfpfp -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-7d85dcffc8-bfpfp 2/2 Running 0 43s 10.131.0.22 ip-10-0-139-48.us-east-2.compute.internal <none> <none>", "oc get pods", "NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 30m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 29m fluentd-42dzz 1/1 Running 0 29m fluentd-d74rq 1/1 Running 0 29m fluentd-m5vr9 1/1 Running 0 29m fluentd-nkxl7 1/1 Running 0 29m fluentd-pdvqb 1/1 Running 0 29m fluentd-tflh6 1/1 Running 0 29m kibana-7d85dcffc8-bfpfp 2/2 Running 0 62s", "apiVersion: \"autoscaling.openshift.io/v1\" kind: 
\"ClusterAutoscaler\" metadata: name: \"default\" spec: podPriorityThreshold: -10 1 resourceLimits: maxNodesTotal: 24 2 cores: min: 8 3 max: 128 4 memory: min: 4 5 max: 256 6 gpus: - type: nvidia.com/gpu 7 min: 0 8 max: 16 9 - type: amd.com/gpu min: 0 max: 4 scaleDown: 10 enabled: true 11 delayAfterAdd: 10m 12 delayAfterDelete: 5m 13 delayAfterFailure: 30s 14 unneededTime: 5m 15 utilizationThreshold: \"0.4\" 16", "oc create -f <filename>.yaml 1", "apiVersion: \"autoscaling.openshift.io/v1beta1\" kind: \"MachineAutoscaler\" metadata: name: \"worker-us-east-1a\" 1 namespace: \"openshift-machine-api\" spec: minReplicas: 1 2 maxReplicas: 12 3 scaleTargetRef: 4 apiVersion: machine.openshift.io/v1beta1 kind: MachineSet 5 name: worker-us-east-1a 6", "oc create -f <filename>.yaml 1", "apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 spec: featureSet: TechPreviewNoUpgrade 2", "sh-4.2# chroot /host", "sh-4.2# cat /etc/kubernetes/kubelet.conf", "featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false", "oc edit featuregate cluster", "apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 spec: featureSet: TechPreviewNoUpgrade 2", "sh-4.2# chroot /host", "sh-4.2# cat /etc/kubernetes/kubelet.conf", "featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false", "oc edit apiserver", "spec: encryption: type: aescbc 1", "oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "EncryptionCompleted All resources encrypted: routes.route.openshift.io", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "EncryptionCompleted All resources encrypted: secrets, configmaps", "oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "EncryptionCompleted All resources encrypted: oauthaccesstokens.oauth.openshift.io, oauthauthorizetokens.oauth.openshift.io", "oc edit apiserver", "spec: encryption: type: identity 1", "oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "DecryptionCompleted Encryption mode set to identity and everything is decrypted", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "DecryptionCompleted Encryption mode set to identity and everything is decrypted", "oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "DecryptionCompleted Encryption mode set to identity and everything is decrypted", "oc debug node/<node_name>", "sh-4.2# chroot /host", "sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup", "found latest kube-apiserver: /etc/kubernetes/static-pod-resources/kube-apiserver-pod-6 found latest kube-controller-manager: /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-7 found latest kube-scheduler: /etc/kubernetes/static-pod-resources/kube-scheduler-pod-6 found latest etcd: /etc/kubernetes/static-pod-resources/etcd-pod-3 ede95fe6b88b87ba86a03c15e669fb4aa5bf0991c180d3c6895ce72eaade54a1 etcdctl version: 3.4.14 API version: 3.4 {\"level\":\"info\",\"ts\":1624647639.0188997,\"caller\":\"snapshot/v3_snapshot.go:119\",\"msg\":\"created 
temporary db file\",\"path\":\"/home/core/assets/backup/snapshot_2021-06-25_190035.db.part\"} {\"level\":\"info\",\"ts\":\"2021-06-25T19:00:39.030Z\",\"caller\":\"clientv3/maintenance.go:200\",\"msg\":\"opened snapshot stream; downloading\"} {\"level\":\"info\",\"ts\":1624647639.0301006,\"caller\":\"snapshot/v3_snapshot.go:127\",\"msg\":\"fetching snapshot\",\"endpoint\":\"https://10.0.0.5:2379\"} {\"level\":\"info\",\"ts\":\"2021-06-25T19:00:40.215Z\",\"caller\":\"clientv3/maintenance.go:208\",\"msg\":\"completed snapshot read; closing\"} {\"level\":\"info\",\"ts\":1624647640.6032252,\"caller\":\"snapshot/v3_snapshot.go:142\",\"msg\":\"fetched snapshot\",\"endpoint\":\"https://10.0.0.5:2379\",\"size\":\"114 MB\",\"took\":1.584090459} {\"level\":\"info\",\"ts\":1624647640.6047094,\"caller\":\"snapshot/v3_snapshot.go:152\",\"msg\":\"saved\",\"path\":\"/home/core/assets/backup/snapshot_2021-06-25_190035.db\"} Snapshot saved at /home/core/assets/backup/snapshot_2021-06-25_190035.db {\"hash\":3866667823,\"revision\":31407,\"totalKey\":12828,\"totalSize\":114446336} snapshot db and kube resources are successfully saved to /home/core/assets/backup", "etcd member has been defragmented: <member_name> , memberID: <member_id>", "failed defrag on member: <member_name> , memberID: <member_id> : <error_message>", "oc -n openshift-etcd get pods -l k8s-app=etcd -o wide", "etcd-ip-10-0-159-225.example.redhat.com 3/3 Running 0 175m 10.0.159.225 ip-10-0-159-225.example.redhat.com <none> <none> etcd-ip-10-0-191-37.example.redhat.com 3/3 Running 0 173m 10.0.191.37 ip-10-0-191-37.example.redhat.com <none> <none> etcd-ip-10-0-199-170.example.redhat.com 3/3 Running 0 176m 10.0.199.170 ip-10-0-199-170.example.redhat.com <none> <none>", "oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com etcdctl endpoint status --cluster -w table", "Defaulting container name to etcdctl. Use 'oc describe pod/etcd-ip-10-0-159-225.example.redhat.com -n openshift-etcd' to see all of the containers in this pod. 
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.4.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+", "oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com", "sh-4.4# unset ETCDCTL_ENDPOINTS", "sh-4.4# etcdctl --command-timeout=30s --endpoints=https://localhost:2379 defrag", "Finished defragmenting etcd member[https://localhost:2379]", "sh-4.4# etcdctl endpoint status -w table --cluster", "+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.4.9 | 41 MB | false | false | 7 | 91624 | 91624 | | 1 | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.4.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+", "sh-4.4# etcdctl alarm list", "memberID:12345678912345678912 alarm:NOSPACE", "sh-4.4# etcdctl alarm disarm", "sudo mv /etc/kubernetes/manifests/etcd-pod.yaml /tmp", "sudo crictl ps | grep etcd | grep -v operator", "sudo mv /etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp", "sudo crictl ps | grep kube-apiserver | grep -v operator", "sudo mv /var/lib/etcd/ /tmp", "sudo -E /usr/local/bin/cluster-restore.sh /home/core/backup", "...stopping kube-scheduler-pod.yaml ...stopping kube-controller-manager-pod.yaml ...stopping etcd-pod.yaml ...stopping kube-apiserver-pod.yaml Waiting for container etcd to stop .complete Waiting for container etcdctl to stop .............................complete Waiting for container etcd-metrics to stop complete Waiting for container kube-controller-manager to stop complete Waiting for container kube-apiserver to stop ..........................................................................................complete Waiting for container kube-scheduler to stop complete Moving etcd data-dir /var/lib/etcd/member to /var/lib/etcd-backup starting restore-etcd static pod starting kube-apiserver-pod.yaml static-pod-resources/kube-apiserver-pod-7/kube-apiserver-pod.yaml starting kube-controller-manager-pod.yaml static-pod-resources/kube-controller-manager-pod-7/kube-controller-manager-pod.yaml starting kube-scheduler-pod.yaml static-pod-resources/kube-scheduler-pod-8/kube-scheduler-pod.yaml", "oc get nodes -w", "NAME STATUS ROLES AGE VERSION 
host-172-25-75-28 Ready master 3d20h v1.23.3+e419edf host-172-25-75-38 Ready infra,worker 3d20h v1.23.3+e419edf host-172-25-75-40 Ready master 3d20h v1.23.3+e419edf host-172-25-75-65 Ready master 3d20h v1.23.3+e419edf host-172-25-75-74 Ready infra,worker 3d20h v1.23.3+e419edf host-172-25-75-79 Ready worker 3d20h v1.23.3+e419edf host-172-25-75-86 Ready worker 3d20h v1.23.3+e419edf host-172-25-75-98 Ready infra,worker 3d20h v1.23.3+e419edf", "ssh -i <ssh-key-path> core@<master-hostname>", "sh-4.4# pwd /var/lib/kubelet/pki sh-4.4# ls kubelet-client-2022-04-28-11-24-09.pem kubelet-server-2022-04-28-11-24-15.pem kubelet-client-current.pem kubelet-server-current.pem", "sudo systemctl restart kubelet.service", "oc get csr", "NAME AGE SIGNERNAME REQUESTOR CONDITION csr-2s94x 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending 1 csr-4bd6t 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending 2 csr-4hl85 13m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 3 csr-zhhhp 3m8s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 4", "oc describe csr <csr_name> 1", "oc adm certificate approve <csr_name>", "oc adm certificate approve <csr_name>", "sudo crictl ps | grep etcd | egrep -v \"operator|etcd-guard\"", "3ad41b7908e32 36f86e2eeaaffe662df0d21041eb22b8198e0e58abeeae8c743c3e6e977e8009 About a minute ago Running etcd 0 7c05f8af362f0", "oc -n openshift-etcd get pods -l k8s-app=etcd", "NAME READY STATUS RESTARTS AGE etcd-ip-10-0-143-125.ec2.internal 1/1 Running 1 2m47s", "sudo rm -f /var/lib/ovn/etc/*.db", "oc delete pods -l app=ovnkube-master -n openshift-ovn-kubernetes", "oc get pods -l app=ovnkube-master -n openshift-ovn-kubernetes", "NAME READY STATUS RESTARTS AGE ovnkube-master-nb24h 4/4 Running 0 48s", "oc get pods -n openshift-ovn-kubernetes -o name | grep ovnkube-node | while read p ; do oc delete USDp -n openshift-ovn-kubernetes ; done", "oc get pods -n openshift-ovn-kubernetes | grep ovnkube-node", "oc get machines -n openshift-machine-api -o wide", "NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped 1 clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running", "oc get machine clustername-8qw5l-master-0 \\ 1 -n openshift-machine-api -o yaml > new-master-machine.yaml", "status: addresses: - address: 10.0.131.183 type: InternalIP - address: ip-10-0-131-183.ec2.internal type: InternalDNS - address: ip-10-0-131-183.ec2.internal type: Hostname lastUpdated: \"2020-04-20T17:44:29Z\" nodeRef: kind: Node name: ip-10-0-131-183.ec2.internal uid: 
acca4411-af0d-4387-b73e-52b2484295ad phase: Running providerStatus: apiVersion: awsproviderconfig.openshift.io/v1beta1 conditions: - lastProbeTime: \"2020-04-20T16:53:50Z\" lastTransitionTime: \"2020-04-20T16:53:50Z\" message: machine successfully created reason: MachineCreationSucceeded status: \"True\" type: MachineCreation instanceId: i-0fdb85790d76d0c3f instanceState: stopped kind: AWSMachineProviderStatus", "apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: name: clustername-8qw5l-master-3", "providerID: aws:///us-east-1a/i-0fdb85790d76d0c3f", "annotations: machine.openshift.io/instance-state: running generation: 2", "resourceVersion: \"13291\" uid: a282eb70-40a2-4e89-8009-d05dd420d31a", "oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 1", "oc get machines -n openshift-machine-api -o wide", "NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running", "oc apply -f new-master-machine.yaml", "oc get machines -n openshift-machine-api -o wide", "NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-173-171.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running 1 clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running", "oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": {\"useUnsupportedUnsafeNonHANonProductionUnstableEtcd\": true}}}'", "export KUBECONFIG=/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost-recovery.kubeconfig", "oc patch etcd cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge 1", "oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 7 1", "oc patch kubeapiserver cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge", "oc get kubeapiserver -o=jsonpath='{range 
.items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 7 1", "oc patch kubecontrollermanager cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge", "oc get kubecontrollermanager -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 7 1", "oc patch kubescheduler cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge", "oc get kubescheduler -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 7 1", "oc -n openshift-etcd get pods -l k8s-app=etcd", "etcd-ip-10-0-143-125.ec2.internal 2/2 Running 0 9h etcd-ip-10-0-154-194.ec2.internal 2/2 Running 0 9h etcd-ip-10-0-173-171.ec2.internal 2/2 Running 0 9h", "export KUBECONFIG=<installation_directory>/auth/kubeconfig", "oc whoami", "oc get poddisruptionbudget --all-namespaces", "NAMESPACE NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE openshift-apiserver openshift-apiserver-pdb N/A 1 1 121m openshift-cloud-controller-manager aws-cloud-controller-manager 1 N/A 1 125m openshift-cloud-credential-operator pod-identity-webhook 1 N/A 1 117m openshift-cluster-csi-drivers aws-ebs-csi-driver-controller-pdb N/A 1 1 121m openshift-cluster-storage-operator csi-snapshot-controller-pdb N/A 1 1 122m openshift-cluster-storage-operator csi-snapshot-webhook-pdb N/A 1 1 122m openshift-console console N/A 1 1 116m #", "apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 2 selector: 3 matchLabels: name: my-pod", "apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: maxUnavailable: 25% 2 selector: 3 matchLabels: name: my-pod", "oc create -f </path/to/file> -n <project_name>", "ccoctl ibmcloud refresh-keys --kubeconfig <openshift_kubeconfig_file> \\ 1 --credentials-requests-dir <path_to_credential_requests_directory> \\ 2 --name <name> 3", "oc patch kubecontrollermanager cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date )\"'\"}}' --type=merge", "oc get co kube-controller-manager", "oc -n openshift-cloud-credential-operator get CredentialsRequest -o json | jq -r '.items[] | select (.spec.providerSpec.kind==\"<provider_spec>\") | .spec.secretRef'", "{ \"name\": \"ebs-cloud-credentials\", \"namespace\": \"openshift-cluster-csi-drivers\" } { \"name\": \"cloud-credential-operator-iam-ro-creds\", \"namespace\": \"openshift-cloud-credential-operator\" }", "oc delete secret <secret_name> \\ 1 -n <secret_namespace> 2", "oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers", "oc get is <imagestream> -n openshift -o json | jq .spec.tags[].from.name | grep registry.redhat.io", "oc image mirror registry.redhat.io/rhscl/ruby-25-rhel7:latest USD{MIRROR_ADDR}/rhscl/ruby-25-rhel7:latest", "oc create configmap registry-config --from-file=USD{MIRROR_ADDR_HOSTNAME}..5000=USDpath/ca.crt -n openshift-config", "oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"registry-config\"}}}' --type=merge", "oc edit configs.samples.operator.openshift.io -n openshift-cluster-samples-operator", "oc create configmap registry-config 
--from-file=USD{MIRROR_ADDR_HOSTNAME}..5000=USDpath/ca.crt -n openshift-config", "oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"registry-config\"}}}' --type=merge", "oc import-image is/must-gather -n openshift", "oc adm must-gather --image=USD(oc adm release info --image-for must-gather)", "get imagestreams -nopenshift", "oc get is <image-stream-name> -o jsonpath=\"{range .spec.tags[*]}{.name}{'\\t'}{.from.name}{'\\n'}{end}\" -nopenshift", "oc get is ubi8-openjdk-17 -o jsonpath=\"{range .spec.tags[*]}{.name}{'\\t'}{.from.name}{'\\n'}{end}\" -nopenshift", "1.11 registry.access.redhat.com/ubi8/openjdk-17:1.11 1.12 registry.access.redhat.com/ubi8/openjdk-17:1.12", "oc tag <repository/image> <image-stream-name:tag> --scheduled -nopenshift", "oc tag registry.access.redhat.com/ubi8/openjdk-17:1.11 ubi8-openjdk-17:1.11 --scheduled -nopenshift oc tag registry.access.redhat.com/ubi8/openjdk-17:1.12 ubi8-openjdk-17:1.12 --scheduled -nopenshift", "get imagestream <image-stream-name> -o jsonpath=\"{range .spec.tags[*]}Tag: {.name}{'\\t'}Scheduled: {.importPolicy.scheduled}{'\\n'}{end}\" -nopenshift", "get imagestream ubi8-openjdk-17 -o jsonpath=\"{range .spec.tags[*]}Tag: {.name}{'\\t'}Scheduled: {.importPolicy.scheduled}{'\\n'}{end}\" -nopenshift", "Tag: 1.11 Scheduled: true Tag: 1.12 Scheduled: true", "subscription-manager register --username=<user_name> --password=<password>", "subscription-manager refresh", "subscription-manager list --available --matches '*OpenShift*'", "subscription-manager attach --pool=<pool_id>", "subscription-manager repos --enable=\"rhel-8-for-x86_64-baseos-rpms\" --enable=\"rhel-8-for-x86_64-appstream-rpms\" --enable=\"ansible-2.9-for-rhel-8-x86_64-rpms\" --enable=\"rhocp-4.10-for-rhel-8-x86_64-rpms\"", "subscription-manager repos --enable=\"rhel-7-server-rpms\" --enable=\"rhel-7-server-extras-rpms\" --enable=\"rhel-7-server-ansible-2.9-rpms\" --enable=\"rhel-7-server-ose-4.10-rpms\"", "yum install openshift-ansible openshift-clients jq", "subscription-manager register --username=<user_name> --password=<password>", "subscription-manager refresh", "subscription-manager list --available --matches '*OpenShift*'", "subscription-manager attach --pool=<pool_id>", "subscription-manager repos --disable=\"*\"", "yum repolist", "yum-config-manager --disable <repo_id>", "yum-config-manager --disable \\*", "subscription-manager repos --enable=\"rhel-8-for-x86_64-baseos-rpms\" --enable=\"rhel-8-for-x86_64-appstream-rpms\" --enable=\"rhocp-4.10-for-rhel-8-x86_64-rpms\" --enable=\"fast-datapath-for-rhel-8-x86_64-rpms\"", "systemctl disable --now firewalld.service", "[all:vars] ansible_user=root 1 #ansible_become=True 2 openshift_kubeconfig_path=\"~/.kube/config\" 3 [new_workers] 4 mycluster-rhel8-0.example.com mycluster-rhel8-1.example.com", "cd /usr/share/ansible/openshift-ansible", "ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1", "oc get nodes -o wide", "oc adm cordon <node_name> 1", "oc adm drain <node_name> --force --delete-emptydir-data --ignore-daemonsets 1", "oc delete nodes <node_name> 1", "oc get nodes -o wide", "sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2", "sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b", 
"DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 2", "kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 1 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 2", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.23.0 master-1 Ready master 63m v1.23.0 master-2 Ready master 64m v1.23.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.23.0 master-1 Ready master 73m v1.23.0 master-2 Ready master 74m v1.23.0 worker-0 Ready worker 11m v1.23.0 worker-1 Ready worker 11m v1.23.0", "oc project openshift-machine-api", "oc get secret worker-user-data --template='{{index .data.userData | base64decode}}' | jq > userData.txt", "{ \"ignition\": { \"config\": { \"merge\": [ { \"source\": \"https:....\" } ] }, \"security\": { \"tls\": { \"certificateAuthorities\": [ { \"source\": \"data:text/plain;charset=utf-8;base64,.....==\" } ] } }, \"version\": \"3.2.0\" }, \"storage\": { \"disks\": [ { \"device\": \"/dev/nvme1n1\", 1 \"partitions\": [ { \"label\": \"var\", \"sizeMiB\": 50000, 2 \"startMiB\": 0 3 } ] } ], \"filesystems\": [ { \"device\": \"/dev/disk/by-partlabel/var\", 4 \"format\": \"xfs\", 5 \"path\": \"/var\" 6 } ] }, \"systemd\": { \"units\": [ 7 { \"contents\": \"[Unit]\\nBefore=local-fs.target\\n[Mount]\\nWhere=/var\\nWhat=/dev/disk/by-partlabel/var\\nOptions=defaults,pquota\\n[Install]\\nWantedBy=local-fs.target\\n\", \"enabled\": true, \"name\": \"var.mount\" } ] } }", "oc get secret worker-user-data --template='{{index .data.disableTemplating | base64decode}}' | jq > disableTemplating.txt", "oc create secret generic worker-user-data-x5 --from-file=userData=userData.txt --from-file=disableTemplating=disableTemplating.txt", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: auto-52-92tf4 name: worker-us-east-2-nvme1n1 1 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: auto-52-92tf4 machine.openshift.io/cluster-api-machineset: auto-52-92tf4-worker-us-east-2b template: metadata: labels: 
machine.openshift.io/cluster-api-cluster: auto-52-92tf4 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: auto-52-92tf4-worker-us-east-2b spec: metadata: {} providerSpec: value: ami: id: ami-0c2dbd95931a apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - DeviceName: /dev/nvme1n1 2 ebs: encrypted: true iops: 0 volumeSize: 120 volumeType: gp2 - DeviceName: /dev/nvme1n2 3 ebs: encrypted: true iops: 0 volumeSize: 50 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: auto-52-92tf4-worker-profile instanceType: m6i.large kind: AWSMachineProviderConfig metadata: creationTimestamp: null placement: availabilityZone: us-east-2b region: us-east-2 securityGroups: - filters: - name: tag:Name values: - auto-52-92tf4-worker-sg subnet: id: subnet-07a90e5db1 tags: - name: kubernetes.io/cluster/auto-52-92tf4 value: owned userDataSecret: name: worker-user-data-x5 4", "oc create -f <file-name>.yaml", "oc get machineset", "NAME DESIRED CURRENT READY AVAILABLE AGE ci-ln-2675bt2-76ef8-bdgsc-worker-us-east-1a 1 1 1 1 124m ci-ln-2675bt2-76ef8-bdgsc-worker-us-east-1b 2 2 2 2 124m worker-us-east-2-nvme1n1 1 1 1 1 2m35s 1", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-128-78.ec2.internal Ready worker 117m v1.23.0+60f5a1c ip-10-0-146-113.ec2.internal Ready master 127m v1.23.0+60f5a1c ip-10-0-153-35.ec2.internal Ready worker 118m v1.23.0+60f5a1c ip-10-0-176-58.ec2.internal Ready master 126m v1.23.0+60f5a1c ip-10-0-217-135.ec2.internal Ready worker 2m57s v1.23.0+60f5a1c 1 ip-10-0-225-248.ec2.internal Ready master 127m v1.23.0+60f5a1c ip-10-0-245-59.ec2.internal Ready worker 116m v1.23.0+60f5a1c", "oc debug node/<node-name> -- chroot /host lsblk", "oc debug node/ip-10-0-217-135.ec2.internal -- chroot /host lsblk", "NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT nvme0n1 202:0 0 120G 0 disk |-nvme0n1p1 202:1 0 1M 0 part |-nvme0n1p2 202:2 0 127M 0 part |-nvme0n1p3 202:3 0 384M 0 part /boot `-nvme0n1p4 202:4 0 119.5G 0 part /sysroot nvme1n1 202:16 0 50G 0 disk `-nvme1n1p1 202:17 0 48.8G 0 part /var 1", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 4 unhealthyConditions: - type: \"Ready\" timeout: \"300s\" 5 status: \"False\" - type: \"Ready\" timeout: \"300s\" 6 status: \"Unknown\" maxUnhealthy: \"40%\" 7 nodeStartupTimeout: \"10m\" 8", "oc apply -f healthcheck.yml", "oc get machinesets -n openshift-machine-api", "oc get machine -n openshift-machine-api", "oc annotate machine/<machine_name> -n openshift-machine-api machine.openshift.io/cluster-api-delete-machine=\"true\"", "oc scale --replicas=2 machineset <machineset> -n openshift-machine-api", "oc edit machineset <machineset> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2", "oc get machines", "kubeletConfig: podsPerCore: 10", "kubeletConfig: maxPods: 250", "oc get kubeletconfig", "NAME AGE set-max-pods 15m", "oc get mc | grep kubelet", "99-worker-generated-kubelet-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m", "oc describe 
machineconfigpool <name>", "oc describe machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: set-max-pods 1", "oc label machineconfigpool worker custom-kubelet=set-max-pods", "oc get machineconfig", "oc describe node <node_name>", "oc describe node ci-ln-5grqprb-f76d1-ncnqq-worker-a-mdv94", "Allocatable: attachable-volumes-aws-ebs: 25 cpu: 3500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 15341844Ki pods: 250", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods 1 kubeletConfig: maxPods: 500 2", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods kubeletConfig: maxPods: <pod_count> kubeAPIBurst: <burst_rate> kubeAPIQPS: <QPS>", "oc label machineconfigpool worker custom-kubelet=large-pods", "oc create -f change-maxPods-cr.yaml", "oc get kubeletconfig", "NAME AGE set-max-pods 15m", "oc describe node <node_name>", "Allocatable: attachable-volumes-gce-pd: 127 cpu: 3500m ephemeral-storage: 123201474766 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 14225400Ki pods: 500 1", "oc get kubeletconfigs set-max-pods -o yaml", "spec: kubeletConfig: maxPods: 500 machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods status: conditions: - lastTransitionTime: \"2021-06-30T17:04:07Z\" message: Success status: \"True\" type: Success", "oc edit machineconfigpool worker", "spec: maxUnavailable: <node_count>", "oc label node perf-node.example.com cpumanager=true", "oc edit machineconfigpool worker", "metadata: creationTimestamp: 2020-xx-xxx generation: 3 labels: custom-kubelet: cpumanager-enabled", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2", "oc create -f cpumanager-kubeletconfig.yaml", "oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7", "\"ownerReferences\": [ { \"apiVersion\": \"machineconfiguration.openshift.io/v1\", \"kind\": \"KubeletConfig\", \"name\": \"cpumanager-enabled\", \"uid\": \"7ed5616d-6b72-11e9-aae1-021e1ce18878\" } ]", "oc debug node/perf-node.example.com sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep cpuManager", "cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2", "cat cpumanager-pod.yaml", "apiVersion: v1 kind: Pod metadata: generateName: cpumanager- spec: containers: - name: cpumanager image: gcr.io/google_containers/pause-amd64:3.0 resources: requests: cpu: 1 memory: \"1G\" limits: cpu: 1 memory: \"1G\" nodeSelector: cpumanager: \"true\"", "oc create -f cpumanager-pod.yaml", "oc describe pod cpumanager", "Name: cpumanager-6cqz7 Namespace: default Priority: 0 PriorityClassName: <none> Node: perf-node.example.com/xxx.xx.xx.xxx Limits: cpu: 1 memory: 1G Requests: cpu: 1 memory: 1G QoS Class: Guaranteed Node-Selectors: cpumanager=true", "β”œβ”€init.scope β”‚ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17 └─kubepods.slice β”œβ”€kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice β”‚ β”œβ”€crio-b5437308f1a574c542bdf08563b865c0345c8f8c0b0a655612c.scope β”‚ └─32706 /pause", "cd 
/sys/fs/cgroup/cpuset/kubepods.slice/kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice/crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope for i in `ls cpuset.cpus tasks` ; do echo -n \"USDi \"; cat USDi ; done", "cpuset.cpus 1 tasks 32706", "grep ^Cpus_allowed_list /proc/32706/status", "Cpus_allowed_list: 1", "cat /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc494a073_6b77_11e9_98c0_06bba5c387ea.slice/crio-c56982f57b75a2420947f0afc6cafe7534c5734efc34157525fa9abbf99e3849.scope/cpuset.cpus 0 oc describe node perf-node.example.com", "Capacity: attachable-volumes-aws-ebs: 39 cpu: 2 ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8162900Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7548500Ki pods: 250 ------- ---- ------------ ---------- --------------- ------------- --- default cpumanager-6cqz7 1 (66%) 1 (66%) 1G (12%) 1G (12%) 29m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 1440m (96%) 1 (66%)", "NAME READY STATUS RESTARTS AGE cpumanager-6cqz7 1/1 Running 0 33m cpumanager-7qc2t 0/1 Pending 0 11s", "apiVersion: v1 kind: Pod metadata: generateName: hugepages-volume- spec: containers: - securityContext: privileged: true image: rhel7:latest command: - sleep - inf name: example volumeMounts: - mountPath: /dev/hugepages name: hugepage resources: limits: hugepages-2Mi: 100Mi 1 memory: \"1Gi\" cpu: \"1\" volumes: - name: hugepage emptyDir: medium: HugePages", "oc label node <node_using_hugepages> node-role.kubernetes.io/worker-hp=", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: hugepages 1 namespace: openshift-cluster-node-tuning-operator spec: profile: 2 - data: | [main] summary=Boot time configuration for hugepages include=openshift-node [bootloader] cmdline_openshift_node_hugepages=hugepagesz=2M hugepages=50 3 name: openshift-node-hugepages recommend: - machineConfigLabels: 4 machineconfiguration.openshift.io/role: \"worker-hp\" priority: 30 profile: openshift-node-hugepages", "oc create -f hugepages-tuned-boottime.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-hp labels: worker-hp: \"\" spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-hp]} nodeSelector: matchLabels: node-role.kubernetes.io/worker-hp: \"\"", "oc create -f hugepages-mcp.yaml", "oc get node <node_using_hugepages> -o jsonpath=\"{.status.allocatable.hugepages-2Mi}\" 100Mi", "service DevicePlugin { // GetDevicePluginOptions returns options to be communicated with Device // Manager rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {} // ListAndWatch returns a stream of List of Devices // Whenever a Device state change or a Device disappears, ListAndWatch // returns the new list rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {} // Allocate is called during container creation so that the Device // Plug-in can run device specific operations and instruct Kubelet // of the steps to make the Device available in the container rpc Allocate(AllocateRequest) returns (AllocateResponse) {} // PreStartcontainer is called, if indicated by Device Plug-in during // registration phase, before each container start. 
Device plug-in // can run device specific operations such as reseting the device // before making devices available to the container rpc PreStartcontainer(PreStartcontainerRequest) returns (PreStartcontainerResponse) {} }", "oc describe machineconfig <name>", "oc describe machineconfig 00-worker", "Name: 00-worker Namespace: Labels: machineconfiguration.openshift.io/role=worker 1", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: devicemgr 1 spec: machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io: devicemgr 2 kubeletConfig: feature-gates: - DevicePlugins=true 3", "oc create -f devicemgr.yaml", "kubeletconfig.machineconfiguration.openshift.io/devicemgr created", "apiVersion: v1 kind: Node metadata: name: my-node # spec: taints: - effect: NoExecute key: key1 value: value1 #", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" tolerationSeconds: 3600 #", "apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node # spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" tolerationSeconds: 3600 #", "oc adm taint nodes node1 key1=value1:NoSchedule", "oc adm taint nodes node1 key1=value1:NoExecute", "oc adm taint nodes node1 key2=value2:NoSchedule", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\" - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" #", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: node.kubernetes.io/not-ready operator: Exists effect: NoExecute tolerationSeconds: 300 1 - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute tolerationSeconds: 300 #", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - operator: \"Exists\" #", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" 1 value: \"value1\" operator: \"Equal\" effect: \"NoExecute\" tolerationSeconds: 3600 2 #", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Exists\" 1 effect: \"NoExecute\" tolerationSeconds: 3600 #", "oc adm taint nodes <node_name> <key>=<value>:<effect>", "oc adm taint nodes node1 key1=value1:NoExecute", "apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node # spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" 1 value: \"value1\" operator: \"Equal\" effect: \"NoExecute\" tolerationSeconds: 3600 2 #", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 3600 #", "oc edit machineset <machineset>", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: my-machineset # spec: # template: # spec: taints: - effect: NoExecute key: 
key1 value: value1 #", "oc scale --replicas=0 machineset <machineset> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 0", "oc scale --replicas=2 machineset <machineset> -n openshift-machine-api", "oc edit machineset <machineset> -n openshift-machine-api", "oc adm taint nodes node1 dedicated=groupName:NoSchedule", "kind: Node apiVersion: v1 metadata: name: my-node # spec: taints: - key: dedicated value: groupName effect: NoSchedule #", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"disktype\" value: \"ssd\" operator: \"Equal\" effect: \"NoSchedule\" tolerationSeconds: 3600 #", "oc adm taint nodes <node-name> disktype=ssd:NoSchedule", "oc adm taint nodes <node-name> disktype=ssd:PreferNoSchedule", "kind: Node apiVersion: v1 metadata: name: my_node # spec: taints: - key: disktype value: ssd effect: PreferNoSchedule #", "oc adm taint nodes <node-name> <key>-", "oc adm taint nodes ip-10-0-132-248.ec2.internal key1-", "node/ip-10-0-132-248.ec2.internal untainted", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key2\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 3600 #", "oc edit KubeletConfig cpumanager-enabled", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s topologyManagerPolicy: single-numa-node 2", "spec: containers: - name: nginx image: nginx", "spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" requests: memory: \"100Mi\"", "spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\" requests: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\"", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4", "apiVersion: v1 kind: Namespace metadata: labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: \"true\"", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.autoscaling.openshift.io/v1\",\"kind\":\"ClusterResourceOverride\",\"metadata\":{\"annotations\":{},\"name\":\"cluster\"},\"spec\":{\"podResourceOverride\":{\"spec\":{\"cpuRequestToLimitPercent\":25,\"limitCPUToMemoryPercent\":200,\"memoryRequestToLimitPercent\":50}}}} creationTimestamp: \"2019-12-18T22:35:02Z\" generation: 1 name: cluster resourceVersion: \"127622\" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: 
clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: \"127621\" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3", "apiVersion: v1 kind: Namespace metadata: name: clusterresourceoverride-operator", "oc create -f <file-name>.yaml", "oc create -f cro-namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: clusterresourceoverride-operator namespace: clusterresourceoverride-operator spec: targetNamespaces: - clusterresourceoverride-operator", "oc create -f <file-name>.yaml", "oc create -f cro-og.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: clusterresourceoverride namespace: clusterresourceoverride-operator spec: channel: \"4.10\" name: clusterresourceoverride source: redhat-operators sourceNamespace: openshift-marketplace", "oc create -f <file-name>.yaml", "oc create -f cro-sub.yaml", "oc project clusterresourceoverride-operator", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4", "oc create -f <file-name>.yaml", "oc create -f cro-cr.yaml", "oc get clusterresourceoverride cluster -n clusterresourceoverride-operator -o yaml", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.autoscaling.openshift.io/v1\",\"kind\":\"ClusterResourceOverride\",\"metadata\":{\"annotations\":{},\"name\":\"cluster\"},\"spec\":{\"podResourceOverride\":{\"spec\":{\"cpuRequestToLimitPercent\":25,\"limitCPUToMemoryPercent\":200,\"memoryRequestToLimitPercent\":50}}}} creationTimestamp: \"2019-12-18T22:35:02Z\" generation: 1 name: cluster resourceVersion: \"127622\" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: \"127621\" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 1 cpuRequestToLimitPercent: 25 2 limitCPUToMemoryPercent: 200 3", "apiVersion: v1 kind: Namespace metadata: labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: \"true\" 1", "sysctl -a |grep commit", "# vm.overcommit_memory = 0 #", "sysctl -a |grep panic", "# vm.panic_on_oom = 0 #", "oc edit machineconfigpool <name>", "oc edit machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker", "oc label machineconfigpool worker custom-kubelet=small-pods", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: disable-cpu-units 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: cpuCfsQuota: false 3", "oc create -f <file_name>.yaml", "sysctl -w vm.overcommit_memory=0", "apiVersion: 
v1 kind: Namespace metadata: annotations: quota.openshift.io/cluster-resource-override-enabled: \"false\" 1", "oc edit machineconfigpool <name>", "oc edit machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #", "oc label machineconfigpool worker custom-kubelet=small-pods", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: worker-kubeconfig 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: evictionSoft: 3 memory.available: \"500Mi\" 4 nodefs.available: \"10%\" nodefs.inodesFree: \"5%\" imagefs.available: \"15%\" imagefs.inodesFree: \"10%\" evictionSoftGracePeriod: 5 memory.available: \"1m30s\" nodefs.available: \"1m30s\" nodefs.inodesFree: \"1m30s\" imagefs.available: \"1m30s\" imagefs.inodesFree: \"1m30s\" evictionHard: 6 memory.available: \"200Mi\" nodefs.available: \"5%\" nodefs.inodesFree: \"4%\" imagefs.available: \"10%\" imagefs.inodesFree: \"5%\" evictionPressureTransitionPeriod: 0s 7 imageMinimumGCAge: 5m 8 imageGCHighThresholdPercent: 80 9 imageGCLowThresholdPercent: 75 10 #", "oc create -f <file_name>.yaml", "oc create -f gc-container.yaml", "kubeletconfig.machineconfiguration.openshift.io/gc-container created", "oc get machineconfigpool", "NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True", "oc get Tuned/default -o yaml -n openshift-cluster-node-tuning-operator", "profile: - name: tuned_profile_1 data: | # TuneD profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... 
other sysctl's or other TuneD daemon plugins supported by the containerized TuneD - name: tuned_profile_n data: | # TuneD profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings", "recommend: <recommend-item-1> <recommend-item-n>", "- machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8", "- label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4", "- match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: \"worker-custom\" priority: 20 profile: openshift-node-custom", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: recommend: - profile: \"openshift-control-plane\" priority: 30 match: - label: \"node-role.kubernetes.io/master\" - label: \"node-role.kubernetes.io/infra\" - profile: \"openshift-node\" priority: 40", "oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/openshift{,-control-plane,-node} -name tuned.conf -exec grep -H ^ {} \\;", "oc edit machineconfigpool <name>", "oc edit machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #", "oc label machineconfigpool worker custom-kubelet=small-pods", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: podsPerCore: 10 3 maxPods: 250 4 #", "oc create -f <file_name>.yaml", "oc get machineconfigpools", "NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False False False worker worker-8cecd1236b33ee3f8a5e False True False", "oc get machineconfigpools", "NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False True False worker worker-8cecd1236b33ee3f8a5e True False False", "apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: trustedCA: name: \"\" status:", "apiVersion: v1 data: ca-bundle.crt: | 1 <MY_PEM_ENCODED_CERTS> 2 kind: ConfigMap metadata: name: user-ca-bundle 3 namespace: openshift-config 4", "oc create -f user-ca-bundle.yaml", "oc edit proxy/cluster", "apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 readinessEndpoints: - http://www.google.com 4 - https://www.google.com trustedCA: name: user-ca-bundle 5", "oc get dnses.config.openshift.io/cluster -o yaml", 
"apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: \"2019-10-25T18:27:09Z\" generation: 2 name: cluster resourceVersion: \"37966\" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>: owned publicZone: id: Z2XXXXXXXXXXA4 status: {}", "oc patch dnses.config.openshift.io/cluster --type=merge --patch='{\"spec\": {\"publicZone\": null}}' dns.config.openshift.io/cluster patched", "oc get dnses.config.openshift.io/cluster -o yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: \"2019-10-25T18:27:09Z\" generation: 2 name: cluster resourceVersion: \"37966\" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>-wfpg4: owned status: {}", "oc patch network.config.openshift.io cluster --type=merge -p '{ \"spec\": { \"serviceNodePortRange\": \"30000-<port>\" } }'", "apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: serviceNodePortRange: \"30000-<port>\"", "network.config.openshift.io/cluster patched", "oc get configmaps -n openshift-kube-apiserver config -o jsonpath=\"{.data['config\\.yaml']}\" | grep -Eo '\"service-node-port-range\":[\"[[:digit:]]+-[[:digit:]]+\"]'", "\"service-node-port-range\":[\"30000-33000\"]", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: {} ingress: []", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {}", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-http-and-https spec: podSelector: matchLabels: role: frontend ingress: - ports: - protocol: TCP port: 80 - protocol: TCP port: 443", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-pod-and-namespace-both spec: podSelector: matchLabels: name: test-pods ingress: - from: - namespaceSelector: matchLabels: project: project_name podSelector: matchLabels: name: test-pods", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017", "touch <policy_name>.yaml", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: ingress: []", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {}", "oc apply -f <policy_name>.yaml -n <namespace>", "networkpolicy.networking.k8s.io/default-deny created", "cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/ingress: \"\" podSelector: {} policyTypes: - Ingress EOF", "cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-monitoring spec: 
ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: monitoring podSelector: {} policyTypes: - Ingress EOF", "cat << EOF| oc create -f - kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {} EOF", "cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-kube-apiserver-operator spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-kube-apiserver-operator podSelector: matchLabels: app: kube-apiserver-operator policyTypes: - Ingress EOF", "oc describe networkpolicy", "Name: allow-from-openshift-ingress Namespace: example1 Created on: 2020-06-09 00:28:17 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: ingress Not affecting egress traffic Policy Types: Ingress Name: allow-from-openshift-monitoring Namespace: example1 Created on: 2020-06-09 00:29:57 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: monitoring Not affecting egress traffic Policy Types: Ingress", "oc adm create-bootstrap-project-template -o yaml > template.yaml", "oc create -f template.yaml -n openshift-config", "oc edit project.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: <template_name>", "oc edit template <project_template> -n openshift-config", "objects: - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {} - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-kube-apiserver-operator spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-kube-apiserver-operator podSelector: matchLabels: app: kube-apiserver-operator policyTypes: - Ingress", "oc new-project <project> 1", "oc get networkpolicy NAME POD-SELECTOR AGE allow-from-openshift-ingress <none> 7s allow-from-same-namespace <none> 7s", "openstack port show <cluster_name>-<cluster_ID>-ingress-port", "openstack floating ip set --port <ingress_port_ID> <apps_FIP>", "*.apps.<cluster_name>.<base_domain> IN A <apps_FIP>", "<apps_FIP> console-openshift-console.apps.<cluster name>.<base domain> <apps_FIP> integrated-oauth-server-openshift-authentication.apps.<cluster name>.<base domain> <apps_FIP> oauth-openshift.apps.<cluster name>.<base domain> <apps_FIP> prometheus-k8s-openshift-monitoring.apps.<cluster name>.<base domain> <apps_FIP> grafana-openshift-monitoring.apps.<cluster name>.<base domain> <apps_FIP> <app name>.apps.<cluster name>.<base domain>", "oc edit networks.operator.openshift.io cluster", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 
defaultNetwork: type: Kuryr kuryrConfig: enablePortPoolsPrepopulation: false 1 poolMinPorts: 1 2 poolBatchPorts: 3 3 poolMaxPorts: 5 4", "oc patch sriovoperatorconfig default --type=merge -n openshift-sriov-network-operator --patch '{ \"spec\": { \"enableOperatorWebhook\": false } }'", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy 1 metadata: name: \"hwoffload9\" namespace: openshift-sriov-network-operator spec: deviceType: netdevice isRdma: true nicSelector: pfNames: 2 - ens6 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: 'true' numVfs: 1 priority: 99 resourceName: \"hwoffload9\"", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy 1 metadata: name: \"hwoffload10\" namespace: openshift-sriov-network-operator spec: deviceType: netdevice isRdma: true nicSelector: pfNames: 2 - ens5 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: 'true' numVfs: 1 priority: 99 resourceName: \"hwoffload10\"", "apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: annotations: k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload9 name: hwoffload9 namespace: default spec: config: '{ \"cniVersion\":\"0.3.1\", \"name\":\"hwoffload9\",\"type\":\"host-device\",\"device\":\"ens6\" }'", "apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: annotations: k8s.v1.cni.cncf.io/resourceName: openshift.io/hwoffload10 name: hwoffload10 namespace: default spec: config: '{ \"cniVersion\":\"0.3.1\", \"name\":\"hwoffload10\",\"type\":\"host-device\",\"device\":\"ens5\" }'", "apiVersion: v1 kind: Pod metadata: name: dpdk-testpmd namespace: default annotations: irq-load-balancing.crio.io: disable cpu-quota.crio.io: disable k8s.v1.cni.cncf.io/networks: '[ { \"name\": \"hwoffload9\", \"namespace\": \"default\" }, { \"name\": \"hwoffload10\", \"namespace\": \"default\" } ]' spec: restartPolicy: Never containers: - name: dpdk-testpmd image: quay.io/krister/centos8_nfv-container-dpdk-testpmd:latest", "kind: StorageClass 1 apiVersion: storage.k8s.io/v1 2 metadata: name: <storage-class-name> 3 annotations: 4 storageclass.kubernetes.io/is-default-class: 'true' provisioner: kubernetes.io/aws-ebs 5 parameters: 6 type: gp2", "storageclass.kubernetes.io/is-default-class: \"true\"", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: storageclass.kubernetes.io/is-default-class: \"true\"", "kubernetes.io/description: My Storage Class Description", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: kubernetes.io/description: My Storage Class Description", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/cinder parameters: type: fast 2 availability: nova 3 fsType: ext4 4", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/aws-ebs parameters: type: io1 2 iopsPerGB: \"10\" 3 encrypted: \"true\" 4 kmsKeyId: keyvalue 5 fsType: ext4 6", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/azure-disk volumeBindingMode: WaitForFirstConsumer 2 allowVolumeExpansion: true parameters: kind: Managed 3 storageaccounttype: Premium_LRS 4 reclaimPolicy: Delete", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: system:azure-cloud-provider name: <persistent-volume-binder-role> 1 rules: - apiGroups: [''] resources: ['secrets'] verbs: ['get','create']", "oc adm policy 
add-cluster-role-to-user <persistent-volume-binder-role> system:serviceaccount:kube-system:persistent-volume-binder", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <azure-file> 1 provisioner: kubernetes.io/azure-file parameters: location: eastus 2 skuName: Standard_LRS 3 storageAccount: <storage-account> 4 reclaimPolicy: Delete volumeBindingMode: Immediate", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: azure-file mountOptions: - uid=1500 1 - gid=1500 2 - mfsymlinks 3 provisioner: kubernetes.io/azure-file parameters: location: eastus skuName: Standard_LRS reclaimPolicy: Delete volumeBindingMode: Immediate", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/gce-pd parameters: type: pd-standard 2 replication-type: none volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true reclaimPolicy: Delete", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/vsphere-volume 2 parameters: diskformat: thin 3", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage_class_name> 1 annotations: storageclass.kubernetes.io/is-default-class: \"<boolean>\" 2 provisioner: csi.ovirt.org allowVolumeExpansion: <boolean> 3 reclaimPolicy: Delete 4 volumeBindingMode: Immediate 5 parameters: storageDomainName: <rhv-storage-domain-name> 6 thinProvisioning: \"<boolean>\" 7 csi.storage.k8s.io/fstype: <file_system_type> 8", "oc get storageclass", "NAME TYPE gp2 (default) kubernetes.io/aws-ebs 1 standard kubernetes.io/aws-ebs", "oc patch storageclass gp2 -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"false\"}}}'", "oc patch storageclass standard -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"true\"}}}'", "oc get storageclass", "NAME TYPE gp2 kubernetes.io/aws-ebs standard (default) kubernetes.io/aws-ebs", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: my_identity_provider 1 mappingMethod: claim 2 type: HTPasswd htpasswd: fileData: name: htpass-secret 3", "oc describe clusterrole.rbac", "Name: admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- .packages.apps.redhat.com [] [] [* create update patch delete get list watch] imagestreams [] [] [create delete deletecollection get list patch update watch create get list watch] imagestreams.image.openshift.io [] [] [create delete deletecollection get list patch update watch create get list watch] secrets [] [] [create delete deletecollection get list patch update watch get list watch create delete deletecollection patch update] buildconfigs/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings [] [] [create delete deletecollection 
get list patch update watch get list watch] imagestreamtags [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates [] [] [create delete deletecollection get list patch update watch get list watch] routes [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances [] [] [create delete deletecollection get list patch update watch get list watch] templates [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] routes.route.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] serviceaccounts [] [] [create delete deletecollection get list patch update watch impersonate create delete deletecollection patch update get list watch] imagestreams/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings [] [] [create delete deletecollection get list patch update watch] roles [] [] [create delete deletecollection get list patch update watch] rolebindings.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] roles.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] imagestreams.image.openshift.io/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] roles.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] networkpolicies.extensions [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] networkpolicies.networking.k8s.io [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] configmaps [] [] [create delete deletecollection patch update get list watch] endpoints [] [] [create delete deletecollection patch update get list watch] 
persistentvolumeclaims [] [] [create delete deletecollection patch update get list watch] pods [] [] [create delete deletecollection patch update get list watch] replicationcontrollers/scale [] [] [create delete deletecollection patch update get list watch] replicationcontrollers [] [] [create delete deletecollection patch update get list watch] services [] [] [create delete deletecollection patch update get list watch] daemonsets.apps [] [] [create delete deletecollection patch update get list watch] deployments.apps/scale [] [] [create delete deletecollection patch update get list watch] deployments.apps [] [] [create delete deletecollection patch update get list watch] replicasets.apps/scale [] [] [create delete deletecollection patch update get list watch] replicasets.apps [] [] [create delete deletecollection patch update get list watch] statefulsets.apps/scale [] [] [create delete deletecollection patch update get list watch] statefulsets.apps [] [] [create delete deletecollection patch update get list watch] horizontalpodautoscalers.autoscaling [] [] [create delete deletecollection patch update get list watch] cronjobs.batch [] [] [create delete deletecollection patch update get list watch] jobs.batch [] [] [create delete deletecollection patch update get list watch] daemonsets.extensions [] [] [create delete deletecollection patch update get list watch] deployments.extensions/scale [] [] [create delete deletecollection patch update get list watch] deployments.extensions [] [] [create delete deletecollection patch update get list watch] ingresses.extensions [] [] [create delete deletecollection patch update get list watch] replicasets.extensions/scale [] [] [create delete deletecollection patch update get list watch] replicasets.extensions [] [] [create delete deletecollection patch update get list watch] replicationcontrollers.extensions/scale [] [] [create delete deletecollection patch update get list watch] poddisruptionbudgets.policy [] [] [create delete deletecollection patch update get list watch] deployments.apps/rollback [] [] [create delete deletecollection patch update] deployments.extensions/rollback [] [] [create delete deletecollection patch update] catalogsources.operators.coreos.com [] [] [create update patch delete get list watch] clusterserviceversions.operators.coreos.com [] [] [create update patch delete get list watch] installplans.operators.coreos.com [] [] [create update patch delete get list watch] packagemanifests.operators.coreos.com [] [] [create update patch delete get list watch] subscriptions.operators.coreos.com [] [] [create update patch delete get list watch] buildconfigs/instantiate [] [] [create] buildconfigs/instantiatebinary [] [] [create] builds/clone [] [] [create] deploymentconfigrollbacks [] [] [create] deploymentconfigs/instantiate [] [] [create] deploymentconfigs/rollback [] [] [create] imagestreamimports [] [] [create] localresourceaccessreviews [] [] [create] localsubjectaccessreviews [] [] [create] podsecuritypolicyreviews [] [] [create] podsecuritypolicyselfsubjectreviews [] [] [create] podsecuritypolicysubjectreviews [] [] [create] resourceaccessreviews [] [] [create] routes/custom-host [] [] [create] subjectaccessreviews [] [] [create] subjectrulesreviews [] [] [create] deploymentconfigrollbacks.apps.openshift.io [] [] [create] deploymentconfigs.apps.openshift.io/instantiate [] [] [create] deploymentconfigs.apps.openshift.io/rollback [] [] [create] localsubjectaccessreviews.authorization.k8s.io [] [] [create] 
localresourceaccessreviews.authorization.openshift.io [] [] [create] localsubjectaccessreviews.authorization.openshift.io [] [] [create] resourceaccessreviews.authorization.openshift.io [] [] [create] subjectaccessreviews.authorization.openshift.io [] [] [create] subjectrulesreviews.authorization.openshift.io [] [] [create] buildconfigs.build.openshift.io/instantiate [] [] [create] buildconfigs.build.openshift.io/instantiatebinary [] [] [create] builds.build.openshift.io/clone [] [] [create] imagestreamimports.image.openshift.io [] [] [create] routes.route.openshift.io/custom-host [] [] [create] podsecuritypolicyreviews.security.openshift.io [] [] [create] podsecuritypolicyselfsubjectreviews.security.openshift.io [] [] [create] podsecuritypolicysubjectreviews.security.openshift.io [] [] [create] jenkins.build.openshift.io [] [] [edit view view admin edit view] builds [] [] [get create delete deletecollection get list patch update watch get list watch] builds.build.openshift.io [] [] [get create delete deletecollection get list patch update watch get list watch] projects [] [] [get delete get delete get patch update] projects.project.openshift.io [] [] [get delete get delete get patch update] namespaces [] [] [get get list watch] pods/attach [] [] [get list watch create delete deletecollection patch update] pods/exec [] [] [get list watch create delete deletecollection patch update] pods/portforward [] [] [get list watch create delete deletecollection patch update] pods/proxy [] [] [get list watch create delete deletecollection patch update] services/proxy [] [] [get list watch create delete deletecollection patch update] routes/status [] [] [get list watch update] routes.route.openshift.io/status [] [] [get list watch update] appliedclusterresourcequotas [] [] [get list watch] bindings [] [] [get list watch] builds/log [] [] [get list watch] deploymentconfigs/log [] [] [get list watch] deploymentconfigs/status [] [] [get list watch] events [] [] [get list watch] imagestreams/status [] [] [get list watch] limitranges [] [] [get list watch] namespaces/status [] [] [get list watch] pods/log [] [] [get list watch] pods/status [] [] [get list watch] replicationcontrollers/status [] [] [get list watch] resourcequotas/status [] [] [get list watch] resourcequotas [] [] [get list watch] resourcequotausages [] [] [get list watch] rolebindingrestrictions [] [] [get list watch] deploymentconfigs.apps.openshift.io/log [] [] [get list watch] deploymentconfigs.apps.openshift.io/status [] [] [get list watch] controllerrevisions.apps [] [] [get list watch] rolebindingrestrictions.authorization.openshift.io [] [] [get list watch] builds.build.openshift.io/log [] [] [get list watch] imagestreams.image.openshift.io/status [] [] [get list watch] appliedclusterresourcequotas.quota.openshift.io [] [] [get list watch] imagestreams/layers [] [] [get update get] imagestreams.image.openshift.io/layers [] [] [get update get] builds/details [] [] [update] builds.build.openshift.io/details [] [] [update] Name: basic-user Labels: <none> Annotations: openshift.io/description: A user that can get basic information about projects. 
rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- selfsubjectrulesreviews [] [] [create] selfsubjectaccessreviews.authorization.k8s.io [] [] [create] selfsubjectrulesreviews.authorization.openshift.io [] [] [create] clusterroles.rbac.authorization.k8s.io [] [] [get list watch] clusterroles [] [] [get list] clusterroles.authorization.openshift.io [] [] [get list] storageclasses.storage.k8s.io [] [] [get list] users [] [~] [get] users.user.openshift.io [] [~] [get] projects [] [] [list watch] projects.project.openshift.io [] [] [list watch] projectrequests [] [] [list] projectrequests.project.openshift.io [] [] [list] Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- *.* [] [] [*] [*] [] [*]", "oc describe clusterrolebinding.rbac", "Name: alertmanager-main Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: alertmanager-main Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount alertmanager-main openshift-monitoring Name: basic-users Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: basic-user Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated Name: cloud-credential-operator-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cloud-credential-operator-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-cloud-credential-operator Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:masters Name: cluster-admins Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:cluster-admins User system:admin Name: cluster-api-manager-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cluster-api-manager-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-machine-api", "oc describe rolebinding.rbac", "oc describe rolebinding.rbac -n joe-project", "Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe-project Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe-project Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. 
It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe-project", "oc adm policy add-role-to-user <role> <user> -n <project>", "oc adm policy add-role-to-user admin alice -n joe", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: admin-0 namespace: joe roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: admin subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: alice", "oc describe rolebinding.rbac -n <project>", "oc describe rolebinding.rbac -n joe", "Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: admin-0 Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User alice 1 Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe", "oc create role <name> --verb=<verb> --resource=<resource> -n <project>", "oc create role podview --verb=get --resource=pod -n blue", "oc adm policy add-role-to-user podview user2 --role-namespace=blue -n blue", "oc create clusterrole <name> --verb=<verb> --resource=<resource>", "oc create clusterrole podviewonly --verb=get --resource=pod", "oc adm policy add-cluster-role-to-user cluster-admin <user>", "INFO Install complete! INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI. INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes). 
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com INFO Login to the console with user: kubeadmin, password: <provided>", "oc delete secrets kubeadmin -n kube-system", "oc edit image.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Image 1 metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: allowedRegistriesForImport: 2 - domainName: quay.io insecure: false additionalTrustedCA: 3 name: myconfigmap registrySources: 4 allowedRegistries: - example.com - quay.io - registry.redhat.io - image-registry.openshift-image-registry.svc:5000 - reg1.io/myrepo/myapp:latest insecureRegistries: - insecure.com status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-137-182.us-east-2.compute.internal Ready,SchedulingDisabled worker 65m v1.25.4+77bec7a ip-10-0-139-120.us-east-2.compute.internal Ready,SchedulingDisabled control-plane 74m v1.25.4+77bec7a ip-10-0-176-102.us-east-2.compute.internal Ready control-plane 75m v1.25.4+77bec7a ip-10-0-188-96.us-east-2.compute.internal Ready worker 65m v1.25.4+77bec7a ip-10-0-200-59.us-east-2.compute.internal Ready worker 63m v1.25.4+77bec7a ip-10-0-223-123.us-east-2.compute.internal Ready control-plane 73m v1.25.4+77bec7a", "apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: registry.example.com: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- registry-with-port.example.com..5000: | 1 -----BEGIN CERTIFICATE----- -----END CERTIFICATE-----", "oc create configmap registry-config --from-file=<external_registry_address>=ca.crt -n openshift-config", "oc edit image.config.openshift.io cluster", "spec: additionalTrustedCA: name: registry-config", "skopeo copy docker://registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6 docker://example.io/example/ubi-minimal", "apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: ubi8repo spec: repositoryDigestMirrors: - mirrors: - example.io/example/ubi-minimal 1 - example.com/example/ubi-minimal 2 source: registry.access.redhat.com/ubi8/ubi-minimal 3 - mirrors: - mirror.example.com/redhat source: registry.redhat.io/openshift4 4 - mirrors: - mirror.example.com source: registry.redhat.io 5 - mirrors: - mirror.example.net/image source: registry.example.com/example/myimage 6 - mirrors: - mirror.example.net source: registry.example.com/example 7 - mirrors: - mirror.example.net/registry-example-com source: registry.example.com 8", "oc create -f registryrepomirror.yaml", "oc get node", "NAME STATUS ROLES AGE VERSION ip-10-0-137-44.ec2.internal Ready worker 7m v1.24.0 ip-10-0-138-148.ec2.internal Ready master 11m v1.24.0 ip-10-0-139-122.ec2.internal Ready master 11m v1.24.0 ip-10-0-147-35.ec2.internal Ready worker 7m v1.24.0 ip-10-0-153-12.ec2.internal Ready worker 7m v1.24.0 ip-10-0-154-10.ec2.internal Ready master 11m v1.24.0", "oc debug node/ip-10-0-147-35.ec2.internal", "Starting pod/ip-10-0-147-35ec2internal-debug To use host binaries, run `chroot /host`", "sh-4.2# chroot /host", "sh-4.2# cat /etc/containers/registries.conf", "unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] short-name-mode = \"\" [[registry]] 
prefix = \"\" location = \"registry.access.redhat.com/ubi8/ubi-minimal\" mirror-by-digest-only = true [[registry.mirror]] location = \"example.io/example/ubi-minimal\" [[registry.mirror]] location = \"example.com/example/ubi-minimal\" [[registry]] prefix = \"\" location = \"registry.example.com\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.net/registry-example-com\" [[registry]] prefix = \"\" location = \"registry.example.com/example\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.net\" [[registry]] prefix = \"\" location = \"registry.example.com/example/myimage\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.net/image\" [[registry]] prefix = \"\" location = \"registry.redhat.io\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.com\" [[registry]] prefix = \"\" location = \"registry.redhat.io/openshift4\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.com/redhat\"", "sh-4.2# podman pull --log-level=debug registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6", "oc create -f <path/to/manifests/dir>/imageContentSourcePolicy.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog 1 namespace: openshift-marketplace 2 spec: sourceType: grpc image: <registry>/<namespace>/redhat-operator-index:v4.10 3 displayName: My Operator Catalog publisher: <publisher_name> 4 updateStrategy: registryPoll: 5 interval: 30m", "oc apply -f catalogSource.yaml", "oc get pods -n openshift-marketplace", "NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h", "oc get catalogsource -n openshift-marketplace", "NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s", "oc get packagemanifest -n openshift-marketplace", "NAME CATALOG AGE jaeger-product My Operator Catalog 93s", "oc get packagemanifests -n openshift-marketplace", "NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m", "oc describe packagemanifests <operator_name> -n openshift-marketplace", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> spec: targetNamespaces: - <namespace>", "oc apply -f operatorgroup.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: openshift-operators 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace 5 config: env: 6 - name: ARGS value: \"-v=10\" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: \"Exists\" resources: 11 requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\" nodeSelector: 12 foo: bar", "oc apply -f sub.yaml", "cp </path/to/cert.crt> /usr/share/pki/ca-trust-source/anchors/", "update-ca-trust", "oc extract secret/pull-secret -n openshift-config --confirm 
--to=.", ".dockerconfigjson", "{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}},\"<registry>:<port>/<namespace>/\":{\"auth\":\"<token>\"}}}", "{\"auths\":{\"cloud.openshift.com\":{\"auth\":\"b3BlbnNoaWZ0Y3UjhGOVZPT0lOMEFaUjdPUzRGTA==\",\"email\":\"[email protected]\"}, \"quay.io\":{\"auth\":\"b3BlbnNoaWZ0LXJlbGVhc2UtZGOVZPT0lOMEFaUGSTd4VGVGVUjdPUzRGTA==\",\"email\":\"[email protected]\"}, \"registry.connect.redhat.com\"{\"auth\":\"NTE3MTMwNDB8dWhjLTFEZlN3VHkxOSTd4VGVGVU1MdTpleUpoYkdjaUailA==\",\"email\":\"[email protected]\"}, \"registry.redhat.io\":{\"auth\":\"NTE3MTMwNDB8dWhjLTFEZlN3VH3BGSTd4VGVGVU1MdTpleUpoYkdjaU9fZw==\",\"email\":\"[email protected]\"}, \"registry.svc.ci.openshift.org\":{\"auth\":\"dXNlcjpyWjAwWVFjSEJiT2RKVW1pSmg4dW92dGp1SXRxQ3RGN1pwajJhN1ZXeTRV\"},\"my-registry:5000/my-namespace/\":{\"auth\":\"dXNlcm5hbWU6cGFzc3dvcmQ=\"}}}", "oc adm catalog mirror registry.redhat.io/redhat/redhat-operator-index:v{product-version} <mirror_registry>:<port>/olm -a <reg_creds>", "oc adm catalog mirror registry.redhat.io/redhat/redhat-operator-index:v4.8 mirror.registry.com:443/olm -a ./.dockerconfigjson --index-filter-by-os='.*'", "oc adm catalog mirror <index_image> <mirror_registry>:<port>/<namespace> -a <reg_creds>", "oc adm catalog mirror registry.redhat.io/redhat/community-operator-index:v4.8 mirror.registry.com:443/olm -a ./.dockerconfigjson --index-filter-by-os='.*'", "oc adm release mirror -a .dockerconfigjson --from=quay.io/openshift-release-dev/ocp-release:v<product-version>-<architecture> --to=<local_registry>/<local_repository> --to-release-image=<local_registry>/<local_repository>:v<product-version>-<architecture>", "oc adm release mirror -a .dockerconfigjson --from=quay.io/openshift-release-dev/ocp-release:4.8.15-x86_64 --to=mirror.registry.com:443/ocp/release --to-release-image=mirror.registry.com:443/ocp/release:4.8.15-x86_64", "info: Mirroring 109 images to mirror.registry.com/ocp/release mirror.registry.com:443/ ocp/release manifests: sha256:086224cadce475029065a0efc5244923f43fb9bb3bb47637e0aaf1f32b9cad47 -> 4.8.15-x86_64-thanos sha256:0a214f12737cb1cfbec473cc301aa2c289d4837224c9603e99d1e90fc00328db -> 4.8.15-x86_64-kuryr-controller sha256:0cf5fd36ac4b95f9de506623b902118a90ff17a07b663aad5d57c425ca44038c -> 4.8.15-x86_64-pod sha256:0d1c356c26d6e5945a488ab2b050b75a8b838fc948a75c0fa13a9084974680cb -> 4.8.15-x86_64-kube-client-agent ..... 
sha256:66e37d2532607e6c91eedf23b9600b4db904ce68e92b43c43d5b417ca6c8e63c mirror.registry.com:443/ocp/release:4.5.41-multus-admission-controller sha256:d36efdbf8d5b2cbc4dcdbd64297107d88a31ef6b0ec4a39695915c10db4973f1 mirror.registry.com:443/ocp/release:4.5.41-cluster-kube-scheduler-operator sha256:bd1baa5c8239b23ecdf76819ddb63cd1cd6091119fecdbf1a0db1fb3760321a2 mirror.registry.com:443/ocp/release:4.5.41-aws-machine-controllers info: Mirroring completed in 2.02s (0B/s) Success Update image: mirror.registry.com:443/ocp/release:4.5.41-x86_64 Mirror prefix: mirror.registry.com:443/ocp/release", "oc image mirror <online_registry>/my/image:latest <mirror_registry>", "oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1", "oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=.mirrorsecretconfigjson", "oc create configmap <config_map_name> --from-file=<mirror_address_host>..<port>=USDpath/ca.crt -n openshift-config", "S oc create configmap registry-config --from-file=mirror.registry.com..443=/root/certs/ca-chain.cert.pem -n openshift-config", "oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"<config_map_name>\"}}}' --type=merge", "oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"registry-config\"}}}' --type=merge", "apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: mirror-ocp spec: repositoryDigestMirrors: - mirrors: - mirror.registry.com:443/ocp/release 1 source: quay.io/openshift-release-dev/ocp-release 2 - mirrors: - mirror.registry.com:443/ocp/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "oc create -f registryrepomirror.yaml", "imagecontentsourcepolicy.operator.openshift.io/mirror-ocp created", "oc debug node/<node_name>", "sh-4.4# chroot /host", "sh-4.4# cat /var/lib/kubelet/config.json", "{\"auths\":{\"brew.registry.redhat.io\":{\"xx==\"},\"brewregistry.stage.redhat.io\":{\"auth\":\"xxx==\"},\"mirror.registry.com:443\":{\"auth\":\"xx=\"}}} 1", "sh-4.4# cd /etc/docker/certs.d/", "sh-4.4# ls", "image-registry.openshift-image-registry.svc.cluster.local:5000 image-registry.openshift-image-registry.svc:5000 mirror.registry.com:443 1", "sh-4.4# cat /etc/containers/registries.conf", "unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] prefix = \"\" location = \"quay.io/openshift-release-dev/ocp-release\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.registry.com:443/ocp/release\" [[registry]] prefix = \"\" location = \"quay.io/openshift-release-dev/ocp-v4.0-art-dev\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.registry.com:443/ocp/release\"", "sh-4.4# exit", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE kube-system apiserver-watcher-ci-ln-47ltxtb-f76d1-mrffg-master-0 1/1 Running 0 39m kube-system apiserver-watcher-ci-ln-47ltxtb-f76d1-mrffg-master-1 1/1 Running 0 39m kube-system apiserver-watcher-ci-ln-47ltxtb-f76d1-mrffg-master-2 1/1 Running 0 39m openshift-apiserver-operator openshift-apiserver-operator-79c7c646fd-5rvr5 1/1 Running 3 45m openshift-apiserver apiserver-b944c4645-q694g 2/2 Running 0 29m openshift-apiserver apiserver-b944c4645-shdxb 2/2 Running 0 31m openshift-apiserver apiserver-b944c4645-x7rf2 2/2 Running 0 33m", "oc get nodes", "NAME STATUS ROLES AGE VERSION ci-ln-47ltxtb-f76d1-mrffg-master-0 Ready master 42m v1.23.0 
ci-ln-47ltxtb-f76d1-mrffg-master-1 Ready master 42m v1.23.0 ci-ln-47ltxtb-f76d1-mrffg-master-2 Ready master 42m v1.23.0 ci-ln-47ltxtb-f76d1-mrffg-worker-a-gsxbz Ready worker 35m v1.23.0 ci-ln-47ltxtb-f76d1-mrffg-worker-b-5qqdx Ready worker 35m v1.23.0 ci-ln-47ltxtb-f76d1-mrffg-worker-c-rjkpq Ready worker 34m v1.23.0", "\"cloud.openshift.com\":{\"auth\":\"<hash>\",\"email\":\"[email protected]\"}", "oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=./.dockerconfigjson", "oc get co insights", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE insights 4.5.41 True False False 3d", "oc get imagecontentsourcepolicy", "NAME AGE mirror-ocp 6d20h ocp4-index-0 6d18h qe45-index-0 6d15h", "oc delete imagecontentsourcepolicy <icsp_name> <icsp_name> <icsp_name>", "oc delete imagecontentsourcepolicy mirror-ocp ocp4-index-0 qe45-index-0", "imagecontentsourcepolicy.operator.openshift.io \"mirror-ocp\" deleted imagecontentsourcepolicy.operator.openshift.io \"ocp4-index-0\" deleted imagecontentsourcepolicy.operator.openshift.io \"qe45-index-0\" deleted", "oc debug node/<node_name>", "sh-4.4# chroot /host", "sh-4.4# cat /etc/containers/registries.conf", "unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] 1", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker0 spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker0]} nodeSelector: matchLabels: node-role.kubernetes.io/worker0: \"\"", "ACTION==\"add\", SUBSYSTEM==\"ccw\", KERNEL==\"0.0.8000\", DRIVER==\"zfcp\", GOTO=\"cfg_zfcp_host_0.0.8000\" ACTION==\"add\", SUBSYSTEM==\"drivers\", KERNEL==\"zfcp\", TEST==\"[ccw/0.0.8000]\", GOTO=\"cfg_zfcp_host_0.0.8000\" GOTO=\"end_zfcp_host_0.0.8000\" LABEL=\"cfg_zfcp_host_0.0.8000\" ATTR{[ccw/0.0.8000]online}=\"1\" LABEL=\"end_zfcp_host_0.0.8000\"", "base64 /path/to/file/", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker0 1 name: 99-worker0-devices spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;base64,<encoded_base64_string> 2 filesystem: root mode: 420 path: /etc/udev/rules.d/41-zfcp-host-0.0.8000.rules 3", "ACTION==\"add\", SUBSYSTEMS==\"ccw\", KERNELS==\"0.0.8000\", GOTO=\"start_zfcp_lun_0.0.8207\" GOTO=\"end_zfcp_lun_0.0.8000\" LABEL=\"start_zfcp_lun_0.0.8000\" SUBSYSTEM==\"fc_remote_ports\", ATTR{port_name}==\"0x500507680d760026\", GOTO=\"cfg_fc_0.0.8000_0x500507680d760026\" GOTO=\"end_zfcp_lun_0.0.8000\" LABEL=\"cfg_fc_0.0.8000_0x500507680d760026\" ATTR{[ccw/0.0.8000]0x500507680d760026/unit_add}=\"0x00bc000000000000\" GOTO=\"end_zfcp_lun_0.0.8000\" LABEL=\"end_zfcp_lun_0.0.8000\"", "base64 /path/to/file/", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker0 1 name: 99-worker0-devices spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;base64,<encoded_base64_string> 2 filesystem: root mode: 420 path: /etc/udev/rules.d/41-zfcp-lun-0.0.8000:0x500507680d760026:0x00bc000000000000.rules 3", "ACTION==\"add\", SUBSYSTEM==\"ccw\", KERNEL==\"0.0.4444\", DRIVER==\"dasd-eckd\", GOTO=\"cfg_dasd_eckd_0.0.4444\" ACTION==\"add\", SUBSYSTEM==\"drivers\", KERNEL==\"dasd-eckd\", TEST==\"[ccw/0.0.4444]\", GOTO=\"cfg_dasd_eckd_0.0.4444\" GOTO=\"end_dasd_eckd_0.0.4444\" LABEL=\"cfg_dasd_eckd_0.0.4444\" 
ATTR{[ccw/0.0.4444]online}=\"1\" LABEL=\"end_dasd_eckd_0.0.4444\"", "base64 /path/to/file/", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker0 1 name: 99-worker0-devices spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;base64,<encoded_base64_string> 2 filesystem: root mode: 420 path: /etc/udev/rules.d/41-dasd-eckd-0.0.4444.rules 3", "ACTION==\"add\", SUBSYSTEM==\"drivers\", KERNEL==\"qeth\", GOTO=\"group_qeth_0.0.1000\" ACTION==\"add\", SUBSYSTEM==\"ccw\", KERNEL==\"0.0.1000\", DRIVER==\"qeth\", GOTO=\"group_qeth_0.0.1000\" ACTION==\"add\", SUBSYSTEM==\"ccw\", KERNEL==\"0.0.1001\", DRIVER==\"qeth\", GOTO=\"group_qeth_0.0.1000\" ACTION==\"add\", SUBSYSTEM==\"ccw\", KERNEL==\"0.0.1002\", DRIVER==\"qeth\", GOTO=\"group_qeth_0.0.1000\" ACTION==\"add\", SUBSYSTEM==\"ccwgroup\", KERNEL==\"0.0.1000\", DRIVER==\"qeth\", GOTO=\"cfg_qeth_0.0.1000\" GOTO=\"end_qeth_0.0.1000\" LABEL=\"group_qeth_0.0.1000\" TEST==\"[ccwgroup/0.0.1000]\", GOTO=\"end_qeth_0.0.1000\" TEST!=\"[ccw/0.0.1000]\", GOTO=\"end_qeth_0.0.1000\" TEST!=\"[ccw/0.0.1001]\", GOTO=\"end_qeth_0.0.1000\" TEST!=\"[ccw/0.0.1002]\", GOTO=\"end_qeth_0.0.1000\" ATTR{[drivers/ccwgroup:qeth]group}=\"0.0.1000,0.0.1001,0.0.1002\" GOTO=\"end_qeth_0.0.1000\" LABEL=\"cfg_qeth_0.0.1000\" ATTR{[ccwgroup/0.0.1000]online}=\"1\" LABEL=\"end_qeth_0.0.1000\"", "base64 /path/to/file/", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker0 1 name: 99-worker0-devices spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;base64,<encoded_base64_string> 2 filesystem: root mode: 420 path: /etc/udev/rules.d/41-dasd-eckd-0.0.4444.rules 3", "ssh <user>@<node_ip_address>", "oc debug node/<node_name>", "sudo chzdev -e 0.0.8000 sudo chzdev -e 1000-1002 sude chzdev -e 4444 sudo chzdev -e 0.0.8000:0x500507680d760026:0x00bc000000000000", "ssh <user>@<node_ip_address>", "oc debug node/<node_name>", "sudo /sbin/mpathconf --enable", "sudo multipath", "sudo fdisk /dev/mapper/mpatha", "sudo multipath -II", "mpatha (20017380030290197) dm-1 IBM,2810XIV size=512G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw -+- policy='service-time 0' prio=50 status=enabled |- 1:0:0:6 sde 68:16 active ready running |- 1:0:1:6 sdf 69:24 active ready running |- 0:0:0:6 sdg 8:80 active ready running `- 0:0:1:6 sdh 66:48 active ready running" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html-single/post-installation_configuration/index
Chapter 1. About Amazon EC2
Chapter 1. About Amazon EC2 Amazon Elastic Compute Cloud (Amazon EC2), a service operated by amazon.com, provides customers with a customizable virtual computing environment. With this service, an Amazon Machine Image (AMI) can be booted to create a virtual machine or instance. Users can install the software they require on an instance and are charged according to the capacity used. Amazon EC2 is designed to be flexible and allows users to quickly scale their deployed applications. See the Amazon Web Services website for more information. About Amazon Machine Images An Amazon Machine Image (AMI) is a template for an EC2 virtual machine instance. Users create EC2 instances by selecting an appropriate AMI to create the instance from. The primary component of an AMI is a read-only filesystem that contains an installed operating system as well as other software. Each AMI has different software installed for different use cases. Amazon EC2 includes many AMIs that both Amazon Web Services and third parties provide. Users can also create their own custom AMIs. Types of JBoss EAP Amazon Machine Images Use JBoss EAP on Amazon Elastic Compute Cloud (Amazon EC2) by deploying a public or private Amazon Machine Image (AMI). Important Red Hat does not currently provide support for the full-ha profile, in either standalone instances or a managed domain. JBoss EAP public AMI Access JBoss EAP public AMIs through the AWS marketplace https://aws.amazon.com/marketplace . The public AMIs are offered with the pay-as-you-go (PAYG) model. With a PAYG model, you only pay based on the number of computing resources you used. JBoss EAP private AMI You can use your existing subscription to access JBoss EAP private AMIs through Red Hat Cloud Access. For information about Red Hat Cloud Access, see About Red Hat Cloud Access . About Red Hat Cloud Access If you have an existing Red Hat subscription, Red Hat Cloud Access provides support for JBoss EAP on Red Hat certified cloud infrastructure providers, such as Amazon EC2 and Microsoft Azure. Red Hat Cloud Access allows you to cost-effectively move your subscriptions between traditional servers and public cloud-based resources. You can find more information about Red Hat Cloud Access on the Customer Portal . Red Hat Cloud Access Features Membership in the Red Hat Cloud Access program provides access to supported private Amazon Machine Images (AMIs) created by Red Hat. The Red Hat AMIs have the following software pre-installed and fully supported by Red Hat: Red Hat Enterprise Linux JBoss EAP Product updates with RPMs using Red Hat Update Infrastructure Each of the Red Hat AMIs is only a starting point, requiring further configuration to the requirements of your application. Supported Amazon EC2 Instance Types Red Hat Cloud Access supports the following Amazon EC2 instance types. See Amazon Elastic Compute Cloud User Guide for Linux Instances for more information about each instance. The minimum virtual hardware requirements for an AMI to deploy JBoss EAP are the following: Virtual CPU: 2 Memory: 4 GB However, depending on the applications you deploy on JBoss EAP you might require additional processors and memory. Supported Red Hat AMIs The supported Red Hat AMIs can be identified by their names, as shown in the following examples: Private image example Public image example RHEL- x is the version number of Red Hat Enterprise Linux installed in the AMI. Example 7 . JBEAP- x . y . z is the version number of JBoss EAP installed in the AMI. Example 7.4.0 . 
20220804 is the date that the AMI was created in the format of YYYYMMDD. x86_64 is the architecture of the AMI. This can be x86_64 or i386 . Access2 or Marketplace denote whether the AMI is private or public as follows: Private image contains Access2 . Public image contains Marketplace .
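The naming convention above also makes it straightforward to look up these images from the AWS CLI. The following is a minimal sketch rather than part of the documented procedure; it assumes a configured AWS CLI, and the region and name filter are illustrative values not taken from this guide:
# List AMIs whose names match the documented JBoss EAP 7.4 pattern
aws ec2 describe-images \
    --region us-east-1 \
    --filters "Name=name,Values=RHEL-7-JBEAP-7.4.0*" \
    --query 'Images[].[Name,ImageId,CreationDate]' \
    --output table
Private (Cloud Access) images are typically visible to an account only after Red Hat Cloud Access has been enabled for it, so an empty result does not necessarily mean the image is unavailable.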
[ "RHEL-7-JBEAP-7.4.0_HVM_GA-20210909-x86_64-0-Access2-GP2", "RHEL-7-JBEAP-7.4.0_HVM_GA-20220804-x86_64-0-Marketplace-GP2" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/deploying_jboss_eap_on_amazon_web_services/about_amazon_ec2
Chapter 13. Expanding single-node OpenShift clusters with GitOps ZTP
Chapter 13. Expanding single-node OpenShift clusters with GitOps ZTP You can expand single-node OpenShift clusters with GitOps Zero Touch Provisioning (ZTP). When you add worker nodes to single-node OpenShift clusters, the original single-node OpenShift cluster retains the control plane node role. Adding worker nodes does not require any downtime for the existing single-node OpenShift cluster. Note Although there is no specified limit on the number of worker nodes that you can add to a single-node OpenShift cluster, you must reevaluate the reserved CPU allocation on the control plane node for the additional worker nodes. If you require workload partitioning on the worker node, you must deploy and remediate the managed cluster policies on the hub cluster before installing the node. This way, the workload partitioning MachineConfig objects are rendered and associated with the worker machine config pool before the GitOps ZTP workflow applies the MachineConfig ignition file to the worker node. It is recommended that you first remediate the policies, and then install the worker node. If you create the workload partitioning manifests after installing the worker node, you must drain the node manually and delete all the pods managed by daemon sets. When the managing daemon sets create the new pods, the new pods undergo the workload partitioning process. Important Adding worker nodes to single-node OpenShift clusters with GitOps ZTP is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Additional resources For more information about single-node OpenShift clusters tuned for vDU application deployments, see Reference configuration for deploying vDUs on single-node OpenShift . For more information about worker nodes, see Adding worker nodes to single-node OpenShift clusters . For information about removing a worker node from an expanded single-node OpenShift cluster, see Removing managed cluster nodes by using the command line interface . 13.1. Applying profiles to the worker node You can configure the additional worker node with a DU profile. You can apply a RAN distributed unit (DU) profile to the worker node cluster using the GitOps Zero Touch Provisioning (ZTP) common, group, and site-specific PolicyGenTemplate resources. The GitOps ZTP pipeline that is linked to the ArgoCD policies application includes the following CRs that you can find in the out/argocd/example/policygentemplates folder when you extract the ztp-site-generate container: common-ranGen.yaml group-du-sno-ranGen.yaml example-sno-site.yaml ns.yaml kustomization.yaml Configuring the DU profile on the worker node is considered an upgrade. To initiate the upgrade flow, you must update the existing policies or create additional ones. Then, you must create a ClusterGroupUpgrade CR to reconcile the policies in the group of clusters. 13.2.
(Optional) Ensuring PTP and SR-IOV daemon selector compatibility If the DU profile was deployed using the GitOps Zero Touch Provisioning (ZTP) plugin version 4.11 or earlier, the PTP and SR-IOV Operators might be configured to place the daemons only on nodes labelled as master . This configuration prevents the PTP and SR-IOV daemons from operating on the worker node. If the PTP and SR-IOV daemon node selectors are incorrectly configured on your system, you must change the daemons before proceeding with the worker DU profile configuration. Procedure Check the daemon node selector settings of the PTP Operator on one of the spoke clusters: USD oc get ptpoperatorconfig/default -n openshift-ptp -ojsonpath='{.spec}' | jq Example output for PTP Operator {"daemonNodeSelector":{"node-role.kubernetes.io/master":""}} 1 1 If the node selector is set to master , the spoke was deployed with the version of the GitOps ZTP plugin that requires changes. Check the daemon node selector settings of the SR-IOV Operator on one of the spoke clusters: USD oc get sriovoperatorconfig/default -n \ openshift-sriov-network-operator -ojsonpath='{.spec}' | jq Example output for SR-IOV Operator {"configDaemonNodeSelector":{"node-role.kubernetes.io/worker":""},"disableDrain":false,"enableInjector":true,"enableOperatorWebhook":true} 1 1 If the node selector is set to master , the spoke was deployed with the version of the GitOps ZTP plugin that requires changes. In the group policy, add the following complianceType and spec entries: spec: - fileName: PtpOperatorConfig.yaml policyName: "config-policy" complianceType: mustonlyhave spec: daemonNodeSelector: node-role.kubernetes.io/worker: "" - fileName: SriovOperatorConfig.yaml policyName: "config-policy" complianceType: mustonlyhave spec: configDaemonNodeSelector: node-role.kubernetes.io/worker: "" Important Changing the daemonNodeSelector field causes temporary PTP synchronization loss and SR-IOV connectivity loss. Commit the changes in Git, and then push to the Git repository being monitored by the GitOps ZTP ArgoCD application. 13.3. PTP and SR-IOV node selector compatibility The PTP configuration resources and SR-IOV network node policies use node-role.kubernetes.io/master: "" as the node selector. If the additional worker nodes have the same NIC configuration as the control plane node, the policies used to configure the control plane node can be reused for the worker nodes. However, the node selector must be changed to select both node types, for example with the "node-role.kubernetes.io/worker" label. 13.4. Using PolicyGenTemplate CRs to apply worker node policies to worker nodes You can create policies for worker nodes. 
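Before creating the worker-node policies in the procedure that follows, it can be worth confirming that the daemon selector change was rolled out to the generated daemon sets on the spoke cluster. A minimal check, assuming the default daemon set names used by the PTP and SR-IOV Operators (these names are not taken from this document):
# Both daemon sets should now select worker nodes rather than only control plane nodes
oc get daemonset linuxptp-daemon -n openshift-ptp -o jsonpath='{.spec.template.spec.nodeSelector}{"\n"}'
oc get daemonset sriov-network-config-daemon -n openshift-sriov-network-operator -o jsonpath='{.spec.template.spec.nodeSelector}{"\n"}'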
Procedure Create the following policy template: apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: "example-sno-workers" namespace: "example-sno" spec: bindingRules: sites: "example-sno" 1 mcp: "worker" 2 sourceFiles: - fileName: MachineConfigGeneric.yaml 3 policyName: "config-policy" metadata: labels: machineconfiguration.openshift.io/role: worker name: enable-workload-partitioning spec: config: storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW8ucnVudGltZS53b3JrbG9hZHMubWFuYWdlbWVudF0KYWN0aXZhdGlvbl9hbm5vdGF0aW9uID0gInRhcmdldC53b3JrbG9hZC5vcGVuc2hpZnQuaW8vbWFuYWdlbWVudCIKYW5ub3RhdGlvbl9wcmVmaXggPSAicmVzb3VyY2VzLndvcmtsb2FkLm9wZW5zaGlmdC5pbyIKcmVzb3VyY2VzID0geyAiY3B1c2hhcmVzIiA9IDAsICJjcHVzZXQiID0gIjAtMyIgfQo= mode: 420 overwrite: true path: /etc/crio/crio.conf.d/01-workload-partitioning user: name: root - contents: source: data:text/plain;charset=utf-8;base64,ewogICJtYW5hZ2VtZW50IjogewogICAgImNwdXNldCI6ICIwLTMiCiAgfQp9Cg== mode: 420 overwrite: true path: /etc/kubernetes/openshift-workload-pinning user: name: root - fileName: PerformanceProfile.yaml policyName: "config-policy" metadata: name: openshift-worker-node-performance-profile spec: cpu: 4 isolated: "4-47" reserved: "0-3" hugepages: defaultHugepagesSize: 1G pages: - size: 1G count: 32 realTimeKernel: enabled: true - fileName: TunedPerformancePatch.yaml policyName: "config-policy" metadata: name: performance-patch-worker spec: profile: - name: performance-patch-worker data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-worker-node-performance-profile [bootloader] cmdline_crash=nohz_full=4-47 5 [sysctl] kernel.timer_migration=1 [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* [service] service.stalld=start,enable service.chronyd=stop,disable recommend: - profile: performance-patch-worker 1 The policies are applied to all clusters with this label. 2 The MCP field must be set to worker . 3 This generic MachineConfig CR is used to configure workload partitioning on the worker node. 4 The cpu.isolated and cpu.reserved fields must be configured for each particular hardware platform. 5 The cmdline_crash CPU set must match the cpu.isolated set in the PerformanceProfile section. A generic MachineConfig CR is used to configure workload partitioning on the worker node. You can generate the content of crio and kubelet configuration files. Add the created policy template to the Git repository monitored by the ArgoCD policies application. Add the policy in the kustomization.yaml file. Commit the changes in Git, and then push to the Git repository being monitored by the GitOps ZTP ArgoCD application. To remediate the new policies to your spoke cluster, create a TALM custom resource: USD cat <<EOF | oc apply -f - apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: example-sno-worker-policies namespace: default spec: backup: false clusters: - example-sno enable: true managedPolicies: - group-du-sno-config-policy - example-sno-workers-config-policy - example-sno-config-policy preCaching: false remediationStrategy: maxConcurrency: 1 EOF 13.5. Adding worker nodes to single-node OpenShift clusters with GitOps ZTP You can add one or more worker nodes to existing single-node OpenShift clusters to increase available CPU resources in the cluster. 
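Before adding the worker node, you can confirm that the ClusterGroupUpgrade created above finishes remediating the worker policies. A minimal check on the hub cluster, reusing the example resource names from the CR above:
# Watch the ClusterGroupUpgrade until remediation completes
oc get clustergroupupgrades -n default example-sno-worker-policies --watch
# The policies bound to the cluster namespace should report Compliant
oc get policies -n example-sno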
Prerequisites Install and configure RHACM 2.6 or later in an OpenShift Container Platform 4.11 or later bare-metal hub cluster Install Topology Aware Lifecycle Manager in the hub cluster Install Red Hat OpenShift GitOps in the hub cluster Use the GitOps ZTP ztp-site-generate container image version 4.12 or later Deploy a managed single-node OpenShift cluster with GitOps ZTP Configure the Central Infrastructure Management as described in the RHACM documentation Configure the DNS serving the cluster to resolve the internal API endpoint api-int.<cluster_name>.<base_domain> Procedure If you deployed your cluster by using the example-sno.yaml SiteConfig manifest, add your new worker node to the spec.clusters['example-sno'].nodes list: nodes: - hostName: "example-node2.example.com" role: "worker" bmcAddress: "idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1" bmcCredentialsName: name: "example-node2-bmh-secret" bootMACAddress: "AA:BB:CC:DD:EE:11" bootMode: "UEFI" nodeNetwork: interfaces: - name: eno1 macAddress: "AA:BB:CC:DD:EE:11" config: interfaces: - name: eno1 type: ethernet state: up macAddress: "AA:BB:CC:DD:EE:11" ipv4: enabled: false ipv6: enabled: true address: - ip: 1111:2222:3333:4444::1 prefix-length: 64 dns-resolver: config: search: - example.com server: - 1111:2222:3333:4444::2 routes: config: - destination: ::/0 next-hop-interface: eno1 next-hop-address: 1111:2222:3333:4444::1 table-id: 254 Create a BMC authentication secret for the new host, as referenced by the bmcCredentialsName field in the spec.nodes section of your SiteConfig file: apiVersion: v1 data: password: "password" username: "username" kind: Secret metadata: name: "example-node2-bmh-secret" namespace: example-sno type: Opaque Commit the changes in Git, and then push to the Git repository that is being monitored by the GitOps ZTP ArgoCD application. When the ArgoCD cluster application synchronizes, two new manifests appear on the hub cluster generated by the GitOps ZTP plugin: BareMetalHost NMStateConfig Important The cpuset field should not be configured for the worker node. Workload partitioning for worker nodes is added through management policies after the node installation is complete. Verification You can monitor the installation process in several ways. Check if the preprovisioning images are created by running the following command: USD oc get ppimg -n example-sno Example output NAMESPACE NAME READY REASON example-sno example-sno True ImageCreated example-sno example-node2 True ImageCreated Check the state of the bare-metal hosts: USD oc get bmh -n example-sno Example output NAME STATE CONSUMER ONLINE ERROR AGE example-sno provisioned true 69m example-node2 provisioning true 4m50s 1 1 The provisioning state indicates that node booting from the installation media is in progress. Continuously monitor the installation process: Watch the agent install process by running the following command: USD oc get agent -n example-sno --watch Example output NAME CLUSTER APPROVED ROLE STAGE 671bc05d-5358-8940-ec12-d9ad22804faa example-sno true master Done [...] 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Starting installation 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Installing 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Writing image to disk [...] 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Waiting for control plane [...]
14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Rebooting 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Done When the worker node installation is finished, the worker node certificates are approved automatically. At this point, the worker appears in the ManagedClusterInfo status. Run the following command to see the status: USD oc get managedclusterinfo/example-sno -n example-sno -o \ jsonpath='{range .status.nodeList[*]}{.name}{"\t"}{.conditions}{"\t"}{.labels}{"\n"}{end}' Example output example-sno [{"status":"True","type":"Ready"}] {"node-role.kubernetes.io/master":"","node-role.kubernetes.io/worker":""} example-node2 [{"status":"True","type":"Ready"}] {"node-role.kubernetes.io/worker":""}
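As an additional check, you can verify the node roles directly on the spoke cluster. This is a minimal sketch that assumes a kubeconfig targeting the example-sno cluster; the node names follow the example above:

oc get nodes --show-labels

The new worker node should carry only the node-role.kubernetes.io/worker label, while the original single-node OpenShift node keeps both the master and worker roles, matching the ManagedClusterInfo output shown above.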
[ "oc get ptpoperatorconfig/default -n openshift-ptp -ojsonpath='{.spec}' | jq", "{\"daemonNodeSelector\":{\"node-role.kubernetes.io/master\":\"\"}} 1", "oc get sriovoperatorconfig/default -n openshift-sriov-network-operator -ojsonpath='{.spec}' | jq", "{\"configDaemonNodeSelector\":{\"node-role.kubernetes.io/worker\":\"\"},\"disableDrain\":false,\"enableInjector\":true,\"enableOperatorWebhook\":true} 1", "spec: - fileName: PtpOperatorConfig.yaml policyName: \"config-policy\" complianceType: mustonlyhave spec: daemonNodeSelector: node-role.kubernetes.io/worker: \"\" - fileName: SriovOperatorConfig.yaml policyName: \"config-policy\" complianceType: mustonlyhave spec: configDaemonNodeSelector: node-role.kubernetes.io/worker: \"\"", "apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"example-sno-workers\" namespace: \"example-sno\" spec: bindingRules: sites: \"example-sno\" 1 mcp: \"worker\" 2 sourceFiles: - fileName: MachineConfigGeneric.yaml 3 policyName: \"config-policy\" metadata: labels: machineconfiguration.openshift.io/role: worker name: enable-workload-partitioning spec: config: storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW8ucnVudGltZS53b3JrbG9hZHMubWFuYWdlbWVudF0KYWN0aXZhdGlvbl9hbm5vdGF0aW9uID0gInRhcmdldC53b3JrbG9hZC5vcGVuc2hpZnQuaW8vbWFuYWdlbWVudCIKYW5ub3RhdGlvbl9wcmVmaXggPSAicmVzb3VyY2VzLndvcmtsb2FkLm9wZW5zaGlmdC5pbyIKcmVzb3VyY2VzID0geyAiY3B1c2hhcmVzIiA9IDAsICJjcHVzZXQiID0gIjAtMyIgfQo= mode: 420 overwrite: true path: /etc/crio/crio.conf.d/01-workload-partitioning user: name: root - contents: source: data:text/plain;charset=utf-8;base64,ewogICJtYW5hZ2VtZW50IjogewogICAgImNwdXNldCI6ICIwLTMiCiAgfQp9Cg== mode: 420 overwrite: true path: /etc/kubernetes/openshift-workload-pinning user: name: root - fileName: PerformanceProfile.yaml policyName: \"config-policy\" metadata: name: openshift-worker-node-performance-profile spec: cpu: 4 isolated: \"4-47\" reserved: \"0-3\" hugepages: defaultHugepagesSize: 1G pages: - size: 1G count: 32 realTimeKernel: enabled: true - fileName: TunedPerformancePatch.yaml policyName: \"config-policy\" metadata: name: performance-patch-worker spec: profile: - name: performance-patch-worker data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-worker-node-performance-profile [bootloader] cmdline_crash=nohz_full=4-47 5 [sysctl] kernel.timer_migration=1 [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* [service] service.stalld=start,enable service.chronyd=stop,disable recommend: - profile: performance-patch-worker", "cat <<EOF | oc apply -f - apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: example-sno-worker-policies namespace: default spec: backup: false clusters: - example-sno enable: true managedPolicies: - group-du-sno-config-policy - example-sno-workers-config-policy - example-sno-config-policy preCaching: false remediationStrategy: maxConcurrency: 1 EOF", "nodes: - hostName: \"example-node2.example.com\" role: \"worker\" bmcAddress: \"idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1\" bmcCredentialsName: name: \"example-node2-bmh-secret\" bootMACAddress: \"AA:BB:CC:DD:EE:11\" bootMode: \"UEFI\" nodeNetwork: interfaces: - name: eno1 macAddress: \"AA:BB:CC:DD:EE:11\" config: interfaces: - name: eno1 type: ethernet state: up macAddress: \"AA:BB:CC:DD:EE:11\" ipv4: enabled: false ipv6: enabled: true address: - ip: 1111:2222:3333:4444::1 prefix-length: 64 
dns-resolver: config: search: - example.com server: - 1111:2222:3333:4444::2 routes: config: - destination: ::/0 next-hop-interface: eno1 next-hop-address: 1111:2222:3333:4444::1 table-id: 254", "apiVersion: v1 data: password: \"password\" username: \"username\" kind: Secret metadata: name: \"example-node2-bmh-secret\" namespace: example-sno type: Opaque", "oc get ppimg -n example-sno", "NAMESPACE NAME READY REASON example-sno example-sno True ImageCreated example-sno example-node2 True ImageCreated", "oc get bmh -n example-sno", "NAME STATE CONSUMER ONLINE ERROR AGE example-sno provisioned true 69m example-node2 provisioning true 4m50s 1", "oc get agent -n example-sno --watch", "NAME CLUSTER APPROVED ROLE STAGE 671bc05d-5358-8940-ec12-d9ad22804faa example-sno true master Done [...] 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Starting installation 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Installing 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Writing image to disk [...] 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Waiting for control plane [...] 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Rebooting 14fd821b-a35d-9cba-7978-00ddf535ff37 example-sno true worker Done", "oc get managedclusterinfo/example-sno -n example-sno -o jsonpath='{range .status.nodeList[*]}{.name}{\"\\t\"}{.conditions}{\"\\t\"}{.labels}{\"\\n\"}{end}'", "example-sno [{\"status\":\"True\",\"type\":\"Ready\"}] {\"node-role.kubernetes.io/master\":\"\",\"node-role.kubernetes.io/worker\":\"\"} example-node2 [{\"status\":\"True\",\"type\":\"Ready\"}] {\"node-role.kubernetes.io/worker\":\"\"}" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/edge_computing/ztp-sno-additional-worker-node
15.8. Updating a Self-Hosted Engine
15.8. Updating a Self-Hosted Engine To update a self-hosted engine from your current version of 4.3 to the latest version of 4.3, you must place the environment in global maintenance mode and then follow the standard procedure for updating between minor versions. Enabling Global Maintenance Mode You must place the self-hosted engine environment in global maintenance mode before performing any setup or upgrade tasks on the Manager virtual machine. Procedure Log in to one of the self-hosted engine nodes and enable global maintenance mode: Confirm that the environment is in maintenance mode before proceeding: You should see a message indicating that the cluster is in maintenance mode. Updating the Red Hat Virtualization Manager Updates to the Red Hat Virtualization Manager are released through the Content Delivery Network. Procedure Log in to the Manager virtual machine. Check if updated packages are available: Update the setup packages: Update the Red Hat Virtualization Manager with the engine-setup script. The engine-setup script prompts you with some configuration questions, then stops the ovirt-engine service, downloads and installs the updated packages, backs up and updates the database, performs post-installation configuration, and starts the ovirt-engine service. When the script completes successfully, the following message appears: Note The engine-setup script is also used during the Red Hat Virtualization Manager installation process, and it stores the configuration values supplied. During an update, the stored values are displayed when previewing the configuration, and might not be up to date if engine-config was used to update configuration after installation. For example, if engine-config was used to update SANWipeAfterDelete to true after installation, engine-setup will output "Default SAN wipe after delete: False" in the configuration preview. However, the updated values will not be overwritten by engine-setup . Important The update process might take some time. Do not stop the process before it completes. Update the base operating system and any optional packages installed on the Manager: Important If any kernel packages were updated: Disable global maintenance mode Reboot the machine to complete the update. Related Information Disabling Global Maintenance Mode Disabling Global Maintenance Mode Procedure Log in to the Manager virtual machine and shut it down. Log in to one of the self-hosted engine nodes and disable global maintenance mode: When you exit global maintenance mode, ovirt-ha-agent starts the Manager virtual machine, and then the Manager automatically starts. It can take up to ten minutes for the Manager to start. Confirm that the environment is running: The listed information includes Engine Status . The value for Engine status should be: Note When the virtual machine is still booting and the Manager hasn't started yet, the Engine status is: If this happens, wait a few minutes and try again.
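For reference, the commands used in this procedure are collected below in the order in which they are run. This is only a condensed sketch of the steps above; each command must still be run on the machine indicated in the procedure, and the Manager virtual machine must be shut down before you disable global maintenance mode, as described in Disabling Global Maintenance Mode.

# On one of the self-hosted engine nodes: enable global maintenance and confirm it
hosted-engine --set-maintenance --mode=global
hosted-engine --vm-status

# On the Manager virtual machine: check for, then apply, the update
engine-upgrade-check
yum update ovirt\*setup\* rh\*vm-setup-plugins
engine-setup
yum update

# Shut down the Manager virtual machine, then on one of the self-hosted engine nodes:
hosted-engine --set-maintenance --mode=none
hosted-engine --vm-status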
[ "hosted-engine --set-maintenance --mode=global", "hosted-engine --vm-status", "engine-upgrade-check", "yum update ovirt\\*setup\\* rh\\*vm-setup-plugins", "engine-setup", "Execution of setup completed successfully", "yum update", "hosted-engine --set-maintenance --mode=none", "hosted-engine --vm-status", "{\"health\": \"good\", \"vm\": \"up\", \"detail\": \"Up\"}", "{\"reason\": \"bad vm status\", \"health\": \"bad\", \"vm\": \"up\", \"detail\": \"Powering up\"}" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/Updating_a_self-hosted_engine_SHE_admin
Chapter 67. Kubernetes Persistent Volume Claim
Chapter 67. Kubernetes Persistent Volume Claim Since Camel 2.17 Only producer is supported The Kubernetes Persistent Volume Claim component is one of the Kubernetes Components which provides a producer to execute Kubernetes Persistent Volume Claim operations. 67.1. Dependencies When using kubernetes-persistent-volumes-claims with Red Hat build of Apache Camel for Spring Boot, use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency> 67.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 67.2.1. Configuring Component Options The component level is the highest level, which holds the general and common configurations that are inherited by the endpoints. For example, a component may have security settings, credentials for authentication, URLs for network connection, and so forth. Some components only have a few options, and others may have many. Because components typically have preconfigured defaults that are commonly used, you may often only need to configure a few options on a component, or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 67.2.2. Configuring Endpoint Options Most of the configuration is done on endpoints, because endpoints often have many options that let you configure exactly what you need the endpoint to do. The options are also categorized according to whether the endpoint is used as a consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type-safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allow you to avoid hardcoding URLs, port numbers, sensitive information, and other settings. In other words, placeholders allow you to externalize the configuration from your code, giving you more flexibility and reuse. The following two sections list all the options, first for the component and then for the endpoint. 67.3. Component Options The Kubernetes Persistent Volume Claim component supports 3 options, which are listed below. Name Description Default Type kubernetesClient (producer) Autowired To use an existing kubernetes client. KubernetesClient lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 67.4.
Endpoint Options The Kubernetes Persistent Volume Claim endpoint is configured using URI syntax: with the following path and query parameters: 67.4.1. Path Parameters (1 parameters) Name Description Default Type masterUrl (producer) Required Kubernetes Master url. String 67.4.2. Query Parameters (21 parameters) Name Description Default Type apiVersion (producer) The Kubernetes API Version to use. String dnsDomain (producer) The dns domain, used for ServiceCall EIP. String kubernetesClient (producer) Default KubernetesClient to use if provided. KubernetesClient namespace (producer) The namespace. String operation (producer) Producer operation to do on Kubernetes. String portName (producer) The port name, used for ServiceCall EIP. String portProtocol (producer) The port protocol, used for ServiceCall EIP. tcp String lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean connectionTimeout (advanced) Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer caCertData (security) The CA Cert Data. String caCertFile (security) The CA Cert File. String clientCertData (security) The Client Cert Data. String clientCertFile (security) The Client Cert File. String clientKeyAlgo (security) The Key Algorithm used by the client. String clientKeyData (security) The Client Key data. String clientKeyFile (security) The Client Key file. String clientKeyPassphrase (security) The Client Key Passphrase. String oauthToken (security) The Auth Token. String password (security) Password to connect to Kubernetes. String trustCerts (security) Define if the certs we used are trusted anyway or not. Boolean username (security) Username to connect to Kubernetes. String 67.5. Message Headers The Kubernetes Persistent Volume Claim component supports 5 message header(s), which is/are listed below: Name Description Default Type CamelKubernetesOperation (producer) Constant: KUBERNETES_OPERATION The Producer operation. String CamelKubernetesNamespaceName (producer) Constant: KUBERNETES_NAMESPACE_NAME The namespace name. String CamelKubernetesPersistentVolumesClaimsLabels (producer) Constant: KUBERNETES_PERSISTENT_VOLUMES_CLAIMS_LABELS The persistent volume claim labels. Map CamelKubernetesPersistentVolumeClaimName (producer) Constant: KUBERNETES_PERSISTENT_VOLUME_CLAIM_NAME The persistent volume claim name. String CamelKubernetesPersistentVolumeClaimSpec (producer) Constant: KUBERNETES_PERSISTENT_VOLUME_CLAIM_SPEC The spec for a persistent volume claim. PersistentVolumeClaimSpec 67.6. Supported producer operation listPersistentVolumesClaims listPersistentVolumesClaimsByLabels getPersistentVolumeClaim createPersistentVolumeClaim updatePersistentVolumeClaim deletePersistentVolumeClaim 67.7. Kubernetes Persistent Volume Claims Producer Examples listPersistentVolumesClaims: this operation lists the pvc on a kubernetes cluster. from("direct:list"). 
toF("kubernetes-persistent-volumes-claims:///?kubernetesClient=#kubernetesClient&operation=listPersistentVolumesClaims"). to("mock:result"); This operation returns a List of pvc from your cluster. listPersistentVolumesClaimsByLabels: this operation lists the pvc by labels on a kubernetes cluster. from("direct:listByLabels").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put("key1", "value1"); labels.put("key2", "value2"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_PERSISTENT_VOLUMES_CLAIMS_LABELS, labels); } }); toF("kubernetes-persistent-volumes-claims:///?kubernetesClient=#kubernetesClient&operation=listPersistentVolumesClaimsByLabels"). to("mock:result"); This operation returns a List of pvc from your cluster, using a label selector (with key1 and key2, with value value1 and value2). 67.8. Spring Boot Auto-Configuration The component supports 102 options, which are listed below. Name Description Default Type camel.cluster.kubernetes.attributes Custom service attributes. Map camel.cluster.kubernetes.cluster-labels Set the labels used to identify the pods composing the cluster. Map camel.cluster.kubernetes.config-map-name Set the name of the ConfigMap used to do optimistic locking (defaults to 'leaders'). String camel.cluster.kubernetes.connection-timeout-millis Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer camel.cluster.kubernetes.enabled Sets if the Kubernetes cluster service should be enabled or not, default is false. false Boolean camel.cluster.kubernetes.id Cluster Service ID. String camel.cluster.kubernetes.jitter-factor A jitter factor to apply in order to prevent all pods to call Kubernetes APIs in the same instant. Double camel.cluster.kubernetes.kubernetes-namespace Set the name of the Kubernetes namespace containing the pods and the configmap (autodetected by default). String camel.cluster.kubernetes.lease-duration-millis The default duration of the lease for the current leader. Long camel.cluster.kubernetes.master-url Set the URL of the Kubernetes master (read from Kubernetes client properties by default). String camel.cluster.kubernetes.order Service lookup order/priority. Integer camel.cluster.kubernetes.pod-name Set the name of the current pod (autodetected from container host name by default). String camel.cluster.kubernetes.renew-deadline-millis The deadline after which the leader must stop its services because it may have lost the leadership. Long camel.cluster.kubernetes.retry-period-millis The time between two subsequent attempts to check and acquire the leadership. It is randomized using the jitter factor. Long camel.component.kubernetes-config-maps.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-config-maps.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-config-maps.enabled Whether to enable auto configuration of the kubernetes-config-maps component. This is enabled by default. Boolean camel.component.kubernetes-config-maps.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-config-maps.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-custom-resources.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-custom-resources.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-custom-resources.enabled Whether to enable auto configuration of the kubernetes-custom-resources component. This is enabled by default. Boolean camel.component.kubernetes-custom-resources.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-custom-resources.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-deployments.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true Boolean camel.component.kubernetes-deployments.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-deployments.enabled Whether to enable auto configuration of the kubernetes-deployments component. This is enabled by default. Boolean camel.component.kubernetes-deployments.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-deployments.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-events.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-events.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-events.enabled Whether to enable auto configuration of the kubernetes-events component. This is enabled by default. Boolean camel.component.kubernetes-events.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-events.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-hpa.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-hpa.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-hpa.enabled Whether to enable auto configuration of the kubernetes-hpa component. This is enabled by default. Boolean camel.component.kubernetes-hpa.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-hpa.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-job.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-job.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-job.enabled Whether to enable auto configuration of the kubernetes-job component. This is enabled by default. Boolean camel.component.kubernetes-job.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-job.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-namespaces.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-namespaces.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-namespaces.enabled Whether to enable auto configuration of the kubernetes-namespaces component. This is enabled by default. Boolean camel.component.kubernetes-namespaces.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-namespaces.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-nodes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-nodes.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-nodes.enabled Whether to enable auto configuration of the kubernetes-nodes component. This is enabled by default. Boolean camel.component.kubernetes-nodes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-nodes.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes-claims.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes-claims.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes-claims component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes-claims.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes-claims.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-pods.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-pods.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-pods.enabled Whether to enable auto configuration of the kubernetes-pods component. This is enabled by default. Boolean camel.component.kubernetes-pods.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-pods.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-replication-controllers.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-replication-controllers.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-replication-controllers.enabled Whether to enable auto configuration of the kubernetes-replication-controllers component. This is enabled by default. Boolean camel.component.kubernetes-replication-controllers.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-replication-controllers.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-resources-quota.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-resources-quota.enabled Whether to enable auto configuration of the kubernetes-resources-quota component. This is enabled by default. Boolean camel.component.kubernetes-resources-quota.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-resources-quota.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-secrets.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-secrets.enabled Whether to enable auto configuration of the kubernetes-secrets component. This is enabled by default. Boolean camel.component.kubernetes-secrets.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-secrets.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-service-accounts.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-service-accounts.enabled Whether to enable auto configuration of the kubernetes-service-accounts component. 
This is enabled by default. Boolean camel.component.kubernetes-service-accounts.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-service-accounts.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-services.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-services.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-services.enabled Whether to enable auto configuration of the kubernetes-services component. This is enabled by default. Boolean camel.component.kubernetes-services.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-services.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-build-configs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-build-configs.enabled Whether to enable auto configuration of the openshift-build-configs component. This is enabled by default. Boolean camel.component.openshift-build-configs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. 
KubernetesClient camel.component.openshift-build-configs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-builds.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-builds.enabled Whether to enable auto configuration of the openshift-builds component. This is enabled by default. Boolean camel.component.openshift-builds.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-builds.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-deploymentconfigs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-deploymentconfigs.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.openshift-deploymentconfigs.enabled Whether to enable auto configuration of the openshift-deploymentconfigs component. This is enabled by default. Boolean camel.component.openshift-deploymentconfigs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-deploymentconfigs.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean
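Building on the producer examples in Section 67.7, the following sketch shows the createPersistentVolumeClaim operation. This example is not taken from the sections above: it assumes the KubernetesConstants header names listed in Section 67.5 and the fabric8 PersistentVolumeClaimSpecBuilder model API, and the namespace, claim name, and storage size are illustrative values. Treat it as a starting point rather than a verified recipe.

// Assumed imports (not shown in the examples above):
// import org.apache.camel.component.kubernetes.KubernetesConstants;
// import io.fabric8.kubernetes.api.model.PersistentVolumeClaimSpec;
// import io.fabric8.kubernetes.api.model.PersistentVolumeClaimSpecBuilder;
// import io.fabric8.kubernetes.api.model.Quantity;
from("direct:create").process(new Processor() {
    @Override
    public void process(Exchange exchange) throws Exception {
        // Target namespace and claim name (illustrative values)
        exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_NAMESPACE_NAME, "default");
        exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_PERSISTENT_VOLUME_CLAIM_NAME, "example-pvc");
        // Build a minimal claim spec requesting 1Gi of storage
        PersistentVolumeClaimSpec spec = new PersistentVolumeClaimSpecBuilder()
                .withAccessModes("ReadWriteOnce")
                .withNewResources()
                    .addToRequests("storage", new Quantity("1Gi"))
                .endResources()
                .build();
        exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_PERSISTENT_VOLUME_CLAIM_SPEC, spec);
    }
}).
toF("kubernetes-persistent-volumes-claims:///?kubernetesClient=#kubernetesClient&operation=createPersistentVolumeClaim").
to("mock:result");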
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency>", "kubernetes-persistent-volumes-claims:masterUrl", "from(\"direct:list\"). toF(\"kubernetes-persistent-volumes-claims:///?kubernetesClient=#kubernetesClient&operation=listPersistentVolumesClaims\"). to(\"mock:result\");", "from(\"direct:listByLabels\").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put(\"key1\", \"value1\"); labels.put(\"key2\", \"value2\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_PERSISTENT_VOLUMES_CLAIMS_LABELS, labels); } }); toF(\"kubernetes-persistent-volumes-claims:///?kubernetesClient=#kubernetesClient&operation=listPersistentVolumesClaimsByLabels\"). to(\"mock:result\");" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-kubernetes-persistent-volume-claim-component-starter
Chapter 2. API reference
Chapter 2. API reference The full API reference is available on your Satellite Server at https:// satellite.example.com /apidoc/v2.html . Be aware that even though versions 1 and 2 of the Satellite 6 API are available, Red Hat only supports version 2. 2.1. Understanding the API syntax The built-in API reference shows the API route, or path, preceded by an HTTP verb: To work with the API, construct a command using the curl command syntax and the API route from the reference document: 1 To use curl for the API call, specify an HTTP verb with the --request option. For example, --request POST . 2 Add the --insecure option to skip the SSL peer certificate verification check. 3 Provide user credentials with the --user option. 4 For POST and PUT requests, use the --data option to pass JSON-formatted data. For more information, see Section 4.1.1, "Passing JSON data to the API request" . 5 6 When passing the JSON data with the --data option, you must specify the following headers with the --header option. For more information, see Section 4.1.1, "Passing JSON data to the API request" . 7 When downloading content from Satellite Server, specify the output file with the --output option. 8 Use the API route in the following format: https:// satellite.example.com /katello/api/activation_keys . In Satellite 6, version 2 of the API is the default. Therefore, it is not necessary to use v2 in the URL for API calls. 9 Redirect the output to the Python json.tool module to make the output easier to read. 2.1.1. Using the GET HTTP verb Use the GET HTTP verb to get data from the API about an existing entry or resource. Example This example requests the number of Satellite hosts: Example request: Example response: The response from the API indicates that there are two results in total, that this is the first page of the results, and that the maximum number of results per page is set to 20. For more information, see Section 2.2, "Understanding the JSON response format" . 2.1.2. Using the POST HTTP verb Use the POST HTTP verb to submit data to the API to create an entry or resource. You must submit the data in JSON format. For more information, see Section 4.1.1, "Passing JSON data to the API request" . Example This example creates an activation key. Create a test file, for example, activation-key.json , with the following content: Create an activation key by applying the data in the activation-key.json file: Example request: Example response: Verify that the new activation key is present. In the Satellite web UI, navigate to Content > Activation keys to view your activation keys. 2.1.3. Using the PUT HTTP verb Use the PUT HTTP verb to change an existing value or append to an existing resource. You must submit the data in JSON format. For more information, see Section 4.1.1, "Passing JSON data to the API request" . Example This example updates the TestKey activation key created in the previous example. Edit the activation-key.json file created previously as follows: Apply the changes in the JSON file: Example request: Example response: In the Satellite web UI, verify the changes by navigating to Content > Activation keys . 2.1.4. Using the DELETE HTTP verb To delete a resource, use the DELETE verb with an API route that includes the ID of the resource you want to delete. Example This example deletes the TestKey activation key with ID 2: Example request: Example response: 2.1.5.
Relating API error messages to the API reference The API uses a Rails format to indicate an error: This translates to the following format used in the API reference: 2.2. Understanding the JSON response format Calls to the API return results in JSON format. The API call returns the result either as a single-object response or as a collection of objects. JSON response format for single objects You can use single-object JSON responses to work with a single object. API requests to a single object require the object's unique identifier :id . This is an example of the format for a single-object request for the Satellite domain whose ID is 23: Example request: Example response: JSON response format for collections Collections are a list of objects such as hosts and domains. The format for a collection JSON response consists of a metadata fields section and a results section. This is an example of the format for a collection request for a list of Satellite domains: Example request: Example response: The response metadata fields The API response uses the following metadata fields: total - The total number of objects without any search parameters. subtotal - The number of objects returned with the given search parameters. If there is no search, then subtotal is equal to total. page - The page number. per_page - The maximum number of objects returned per page. limit - The specified number of objects to return in a collection response. offset - The number of objects skipped before returning a collection. search - The search string, based on scoped_search syntax. sort by - Specifies the field by which the API sorts the collection. order - The sort order, either ASC for ascending or DESC for descending. results - The collection of objects.
[ "HTTP_VERB API_ROUTE", "curl --request HTTP_VERB \\ 1 --insecure \\ 2 --user sat_username:sat_password \\ 3 --data @ file .json \\ 4 --header \"Accept:application/json\" \\ 5 --header \"Content-Type:application/json\" \\ 6 --output file 7 API_ROUTE \\ 8 | python -m json.tool 9", "curl --request GET --insecure --user sat_username:sat_password https:// satellite.example.com /api/hosts | python -m json.tool", "{ \"total\": 2, \"subtotal\": 2, \"page\": 1, \"per_page\": 20, \"search\": null, \"sort\": { \"by\": null, \"order\": null }, \"results\": output truncated", "{\"organization_id\":1, \"name\":\"TestKey\", \"description\":\"Just for testing\"}", "curl --header \"Accept:application/json\" --header \"Content-Type:application/json\" --request POST --user sat_username:sat_password --insecure --data @activation-key.json https:// satellite.example.com /katello/api/activation_keys | python -m json.tool", "{ \"id\": 2, \"name\": \"TestKey\", \"description\": \"Just for testing\", \"unlimited_hosts\": true, \"auto_attach\": true, \"content_view_id\": null, \"environment_id\": null, \"usage_count\": 0, \"user_id\": 3, \"max_hosts\": null, \"release_version\": null, \"service_level\": null, \"content_overrides\": [ ], \"organization\": { \"name\": \"Default Organization\", \"label\": \"Default_Organization\", \"id\": 1 }, \"created_at\": \"2017-02-16 12:37:47 UTC\", \"updated_at\": \"2017-02-16 12:37:48 UTC\", \"content_view\": null, \"environment\": null, \"products\": null, \"host_collections\": [ ], \"permissions\": { \"view_activation_keys\": true, \"edit_activation_keys\": true, \"destroy_activation_keys\": true } }", "{\"organization_id\":1, \"name\":\"TestKey\", \"description\":\"Just for testing\",\"max_hosts\":\"10\" }", "curl --header \"Accept:application/json\" --header \"Content-Type:application/json\" --request PUT --user sat_username:sat_password --insecure --data @activation-key.json https:// satellite.example.com /katello/api/activation_keys/2 | python -m json.tool", "{ \"id\": 2, \"name\": \"TestKey\", \"description\": \"Just for testing\", \"unlimited_hosts\": false, \"auto_attach\": true, \"content_view_id\": null, \"environment_id\": null, \"usage_count\": 0, \"user_id\": 3, \"max_hosts\": 10, \"release_version\": null, \"service_level\": null, \"content_overrides\": [ ], \"organization\": { \"name\": \"Default Organization\", \"label\": \"Default_Organization\", \"id\": 1 }, \"created_at\": \"2017-02-16 12:37:47 UTC\", \"updated_at\": \"2017-02-16 12:46:17 UTC\", \"content_view\": null, \"environment\": null, \"products\": null, \"host_collections\": [ ], \"permissions\": { \"view_activation_keys\": true, \"edit_activation_keys\": true, \"destroy_activation_keys\": true } }", "curl --header \"Accept:application/json\" --header \"Content-Type:application/json\" --request DELETE --user sat_username:sat_password --insecure https:// satellite.example.com /katello/api/activation_keys/2 | python -m json.tool", "output omitted \"started_at\": \"2017-02-16 12:58:17 UTC\", \"ended_at\": \"2017-02-16 12:58:18 UTC\", \"state\": \"stopped\", \"result\": \"success\", \"progress\": 1.0, \"input\": { \"activation_key\": { \"id\": 2, \"name\": \"TestKey\" output truncated", "Nested_Resource . 
Attribute_Name", "Resource [ Nested_Resource_attributes ][ Attribute_Name_id ]", "curl --request GET --insecure --user sat_username:sat_password https:// satellite.example.com /api/domains/23 | python -m json.tool", "{ \"id\": 23, \"name\": \"qa.lab.example.com\", \"fullname\": \"QA\", \"dns_id\": 10, \"created_at\": \"2013-08-13T09:02:31Z\", \"updated_at\": \"2013-08-13T09:02:31Z\" }", "curl --request GET --insecure --user sat_username:sat_password https:// satellite.example.com /api/domains | python -m json.tool", "{ \"total\": 3, \"subtotal\": 3, \"page\": 1, \"per_page\": 20, \"search\": null, \"sort\": { \"by\": null, \"order\": null }, \"results\": [ { \"id\": 23, \"name\": \"qa.lab.example.com\", \"fullname\": \"QA\", \"dns_id\": 10, \"created_at\": \"2013-08-13T09:02:31Z\", \"updated_at\": \"2013-08-13T09:02:31Z\" }, { \"id\": 25, \"name\": \"sat.lab.example.com\", \"fullname\": \"SATLAB\", \"dns_id\": 8, \"created_at\": \"2013-08-13T08:32:48Z\", \"updated_at\": \"2013-08-14T07:04:03Z\" }, { \"id\": 32, \"name\": \"hr.lab.example.com\", \"fullname\": \"HR\", \"dns_id\": 8, \"created_at\": \"2013-08-16T08:32:48Z\", \"updated_at\": \"2013-08-16T07:04:03Z\" } ] }" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/api_guide/chap-Red_Hat_Satellite-API_Guide-API_Reference
2.2. Using authconfig
2.2. Using authconfig The authconfig tool can help configure what kind of data store to use for user credentials, such as LDAP. On Red Hat Enterprise Linux, authconfig has both GUI and command-line options to configure any user data stores. The authconfig tool can configure the system to use specific services - SSSD, LDAP, NIS, or Winbind - for its user database, along with using different forms of authentication mechanisms. Important To configure Identity Management systems, Red Hat recommends using the ipa-client-install utility or the realmd system instead of authconfig . The authconfig utilities are limited and substantially less flexible. For more information, see Section 2.1, "Identity Management Tools for System Authentication" . The following three authconfig utilities are available for configuring authentication settings: authconfig-gtk provides a full graphical interface. authconfig provides a command-line interface for manual configuration. authconfig-tui provides a text-based UI. Note that this utility has been deprecated. All of these configuration utilities must be run as root . 2.2.1. Tips for Using the authconfig CLI The authconfig command-line tool updates all of the configuration files and services required for system authentication, according to the settings passed to the script. Along with providing even more identity and authentication configuration options than can be set through the UI, the authconfig tool can also be used to create backup and kickstart files. For a complete list of authconfig options, check the help output and the man page. There are some things to remember when running authconfig : With every command, use either the --update or --test option. One of those options is required for the command to run successfully. Using --update writes the configuration changes. The --test option displays the changes but does not apply the changes to the configuration. If the --update option is not used, then the changes are not written to the system configuration files. The command line can be used to update existing configuration as well as to set new configuration. Because of this, the command line does not enforce that required attributes are used with a given invocation (because the command may be updating otherwise complete settings). When editing the authentication configuration, be very careful that the configuration is complete and accurate. Changing the authentication settings to incomplete or wrong values can lock users out of the system. Use the --test option to confirm that the settings are proper before using the --update option to write them. Each enable option has a corresponding disable option. 2.2.2. Installing the authconfig UI The authconfig UI is not installed by default, but it can be useful for administrators to make quick changes to the authentication configuration. To install the UI, install the authconfig-gtk package. This has dependencies on some common system packages, such as the authconfig command-line tool, Bash, and Python. Most of those are installed by default. 2.2.3. Launching the authconfig UI Open the terminal and log in as root. Run the system-config-authentication command. Important Any changes take effect immediately when the authconfig UI is closed. There are three configuration tabs in the Authentication dialog box: Identity & Authentication , which configures the resource used as the identity store (the data repository where the user IDs and corresponding credentials are stored). 
Advanced Options , which configures authentication methods other than passwords or certificates, like smart cards and fingerprint. Password Options , which configures password authentication methods. Figure 2.1. authconfig Window 2.2.4. Testing Authentication Settings It is critical that authentication is fully and properly configured. Otherwise all users (even root) could be locked out of the system, or some users blocked. The --test option prints all of the authentication configuration for the system, for every possible identity and authentication mechanism. This shows both the settings for what is enabled and what areas are disabled. The test option can be run by itself to show the full, current configuration or it can be used with an authconfig command to show how the configuration will be changed (without actually changing it). This can be very useful in verifying that the proposed authentication settings are complete and correct. 2.2.5. Saving and Restoring Configuration Using authconfig Changing authentication settings can be problematic. Improperly changing the configuration can wrongly exclude users who should have access, can cause connections to the identity store to fail, or can even lock all access to a system. Before editing the authentication configuration, it is strongly recommended that administrators take a backup of all configuration files. This is done with the --savebackup option. The authentication configuration can be restored to any saved version using the --restorebackup option, with the name of the backup to use. The authconfig command saves an automatic backup every time the configuration is altered. It is possible to restore the last backup using the --restorelastbackup option.
[ "yum install authconfig-gtk Loaded plugins: langpacks, product-id, subscription-manager Resolving Dependencies --> Running transaction check ---> Package authconfig-gtk.x86_64 0:6.2.8-8.el7 will be installed --> Finished Dependency Resolution Dependencies Resolved ================================================================================ Package Arch Version Repository Size ================================================================================ Installing: authconfig-gtk x86_64 6.2.8-8.el7 RHEL-Server 105 k Transaction Summary ================================================================================ Install 1 Package ... 8<", "authconfig --test caching is disabled nss_files is always enabled nss_compat is disabled nss_db is disabled nss_hesiod is disabled hesiod LHS = \"\" hesiod RHS = \"\" nss_ldap is disabled LDAP+TLS is disabled LDAP server = \"\" LDAP base DN = \"\" nss_nis is disabled NIS server = \"\" NIS domain = \"\" nss_nisplus is disabled nss_winbind is disabled SMB workgroup = \"MYGROUP\" SMB servers = \"\" SMB security = \"user\" SMB realm = \"\" Winbind template shell = \"/bin/false\" SMB idmap range = \"16777216-33554431\" nss_sss is enabled by default nss_wins is disabled nss_mdns4_minimal is disabled DNS preference over NSS or WINS is disabled pam_unix is always enabled shadow passwords are enabled password hashing algorithm is sha512 pam_krb5 is disabled krb5 realm = \"#\" krb5 realm via dns is disabled krb5 kdc = \"\" krb5 kdc via dns is disabled krb5 admin server = \"\" pam_ldap is disabled LDAP+TLS is disabled LDAP server = \"\" LDAP base DN = \"\" LDAP schema = \"rfc2307\" pam_pkcs11 is disabled use only smartcard for login is disabled smartcard module = \"\" smartcard removal action = \"\" pam_fprintd is disabled pam_ecryptfs is disabled pam_winbind is disabled SMB workgroup = \"MYGROUP\" SMB servers = \"\" SMB security = \"user\" SMB realm = \"\" pam_sss is disabled by default credential caching in SSSD is enabled SSSD use instead of legacy services if possible is enabled IPAv2 is disabled IPAv2 domain was not joined IPAv2 server = \"\" IPAv2 realm = \"\" IPAv2 domain = \"\" pam_pwquality is enabled (try_first_pass local_users_only retry=3 authtok_type=) pam_passwdqc is disabled () pam_access is disabled () pam_mkhomedir or pam_oddjob_mkhomedir is disabled (umask=0077) Always authorize local users is enabled () Authenticate system accounts against network services is disabled", "authconfig --savebackup=/backups/authconfigbackup20200701", "authconfig --restorebackup=/backups/authconfigbackup20200701", "authconfig --restorelastbackup" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system-level_authentication_guide/authconfig-install
Chapter 8. Backing up and restoring IdM
Chapter 8. Backing up and restoring IdM Identity Management lets you manually back up and restore the IdM system after a data loss event. During a backup, the system creates a directory that stores information about your IdM setup. You can use this backup directory to restore your original IdM setup. Note The IdM backup and restore features are designed to help prevent data loss. To mitigate the impact of server loss and ensure continued operation, provide alternative servers to clients. For information on establishing a replication topology, see Preparing for server loss with replication . 8.1. IdM backup types With the ipa-backup utility, you can create two types of backups: Full-server backup Contains all server configuration files related to IdM, and LDAP data in LDAP Data Interchange Format (LDIF) files. IdM services must be offline . Suitable for rebuilding an IdM deployment from scratch. Data-only backup Contains LDAP data in LDIF files and the replication changelog. IdM services can be online or offline . Suitable for restoring IdM data to a state in the past. 8.2. Naming conventions for IdM backup files By default, IdM stores backups as .tar archives in subdirectories of the /var/lib/ipa/backup/ directory. The archives and subdirectories follow these naming conventions: Full-server backup An archive named ipa-full.tar in a directory named ipa-full- <YEAR-MM-DD-HH-MM-SS> , with the time specified in GMT. Data-only backup An archive named ipa-data.tar in a directory named ipa-data- <YEAR-MM-DD-HH-MM-SS> , with the time specified in GMT. Note Uninstalling an IdM server does not automatically remove any backup files. 8.3. Considerations when creating a backup The important behaviors and limitations of the ipa-backup command include the following: By default, the ipa-backup utility runs in offline mode, which stops all IdM services. The utility automatically restarts IdM services after the backup is finished. A full-server backup must always run with IdM services offline, but a data-only backup can be performed with services online. By default, the ipa-backup utility creates backups on the file system containing the /var/lib/ipa/backup/ directory. Red Hat recommends creating backups regularly on a file system separate from the production file system used by IdM, and archiving the backups to a fixed medium, such as tape or optical storage. Consider performing backups on hidden replicas . IdM services can be shut down on hidden replicas without affecting IdM clients. The ipa-backup utility checks if all of the services used in your IdM cluster, such as a Certificate Authority (CA), Domain Name System (DNS), and Key Recovery Agent (KRA), are installed on the server where you are running the backup. If the server does not have all these services installed, the ipa-backup utility exits with a warning, because backups taken on that host would not be sufficient for a full cluster restoration. For example, if your IdM deployment uses an integrated Certificate Authority (CA), a backup run on a non-CA replica will not capture CA data. Red Hat recommends verifying that the replica where you perform an ipa-backup has all of the IdM services used in the cluster installed. You can bypass the IdM server role check with the ipa-backup --disable-role-check command, but the resulting backup will not contain all the data necessary to restore IdM fully. 8.4. Creating an IdM backup Create a full-server and data-only backup in offline and online modes using the ipa-backup command.
Prerequisites You must have root privileges to run the ipa-backup utility. Procedure To create a full-server backup in offline mode, use the ipa-backup utility without additional options. To create an offline data-only backup, specify the --data option. To create a full-server backup that includes IdM log files, use the --logs option. To create a data-only backup while IdM services are running, specify both --data and --online options. Note If the backup fails due to insufficient space in the /tmp directory, use the TMPDIR environment variable to change the destination for temporary files created by the backup process: Verification Ensure the backup directory contains an archive with the backup. Additional resources ipa-backup command fails to finish (Red Hat Knowledgebase) 8.5. Creating a GPG2-encrypted IdM backup You can create encrypted backups using GNU Privacy Guard (GPG) encryption. The following procedure creates an IdM backup and encrypts it using a GPG2 key. Prerequisites You have created a GPG2 key. See Creating a GPG2 key . Procedure Create a GPG-encrypted backup by specifying the --gpg option. Verification Ensure that the backup directory contains an encrypted archive with a .gpg file extension. Additional resources Creating a backup . 8.6. Creating a GPG2 key The following procedure describes how to generate a GPG2 key to use with encryption utilities. Prerequisites You need root privileges. Procedure Install and configure the pinentry utility. Create a key-input file used for generating a GPG keypair with your preferred details. For example: Optional: By default, GPG2 stores its keyring in the ~/.gnupg file. To use a custom keyring location, set the GNUPGHOME environment variable to a directory that is only accessible by root. Generate a new GPG2 key based on the contents of the key-input file. Enter a passphrase to protect the GPG2 key. You use this passphrase to access the private key for decryption. Confirm the correct passphrase by entering it again. Verify that the new GPG2 key was created successfully. Verification List the GPG keys on the server. Additional resources GNU Privacy Guard 8.7. When to restore from an IdM backup You can respond to several disaster scenarios by restoring from an IdM backup: Undesirable changes were made to the LDAP content : Entries were modified or deleted, replication carried out those changes throughout the deployment, and you want to revert those changes. Restoring a data-only backup returns the LDAP entries to the state without affecting the IdM configuration itself. Total Infrastructure Loss, or loss of all CA instances : If a disaster damages all Certificate Authority replicas, the deployment has lost the ability to rebuild itself by deploying additional servers. In this situation, restore a backup of a CA Replica and build new replicas from it. An upgrade on an isolated server failed : The operating system remains functional, but the IdM data is corrupted, which is why you want to restore the IdM system to a known good state. Red Hat recommends working with Technical Support to diagnose and troubleshoot the issue. If those efforts fail, restore from a full-server backup. Important The preferred solution for hardware or upgrade failure is to rebuild the lost server from a replica. For more information, see Recovering a single server with replication . 8.8. 
Considerations when restoring from an IdM backup If you have a backup created with the ipa-backup utility, you can restore your IdM server or the LDAP content to the state they were in when the backup was performed. The following are the key considerations while restoring from an IdM backup: You can only restore a backup on a server that matches the configuration of the server where the backup was originally created. The server must have: The same hostname The same IP address The same version of IdM software If one IdM server among many is restored, the restored server becomes the only source of information for IdM. All other servers must be re-initialized from the restored server. Since any data created after the last backup will be lost, do not use the backup and restore solution for normal system maintenance. If a server is lost, Red Hat recommends rebuilding the server by reinstalling it as a replica, instead of restoring from a backup. Creating a new replica preserves data from the current working environment. For more information, see Preparing for server loss with replication . The backup and restore features can only be managed from the command line and are not available in the IdM web UI. You cannot restore from backup files located in the /tmp or /var/tmp directories. The IdM Directory Server uses a PrivateTmp directory and cannot access the /tmp or /var/tmp directories commonly available to the operating system. Tip Restoring from a backup requires the same software (RPM) versions on the target host as were installed when the backup was performed. Due to this, Red Hat recommends restoring from a Virtual Machine snapshot rather than a backup. For more information, see Recovering from data loss with VM snapshots . 8.9. Restoring an IdM server from a backup Restore an IdM server, or its LDAP data, from an IdM backup. Figure 8.1. Replication topology used in this example Table 8.1. Server naming conventions used in this example Server host name Function server1.example.com The server that needs to be restored from backup. caReplica2.example.com A Certificate Authority (CA) replica connected to the server1.example.com host. replica3.example.com A replica connected to the caReplica2.example.com host. Prerequisites You have generated a full-server or data-only backup of the IdM server with the ipa-backup utility. See Creating a backup . Your backup files are not in the /tmp or /var/tmp directories. Before performing a full-server restore from a full-server backup, uninstall IdM from the server and reinstall IdM using the same server configuration as before. Procedure Use the ipa-restore utility to restore a full-server or data-only backup. If the backup directory is in the default /var/lib/ipa/backup/ location, enter only the name of the directory: If the backup directory is not in the default location, enter its full path: Note The ipa-restore utility automatically detects the type of backup that the directory contains, and performs the same type of restore by default. To perform a data-only restore from a full-server backup, add the --data option to the ipa-restore command: Enter the Directory Manager password. Enter yes to confirm overwriting current data with the backup. 
The ipa-restore utility disables replication on all servers that are available: The utility then stops IdM services, restores the backup, and restarts the services: Re-initialize all replicas connected to the restored server: List all replication topology segments for the domain suffix, taking note of topology segments involving the restored server. Re-initialize the domain suffix for all topology segments with the restored server. In this example, perform a re-initialization of caReplica2 with data from server1 . Moving on to Certificate Authority data, list all replication topology segments for the ca suffix. Re-initialize all CA replicas connected to the restored server. In this example, perform a csreplica re-initialization of caReplica2 with data from server1 . Continue moving outward through the replication topology, re-initializing successive replicas, until all servers have been updated with the data from restored server server1.example.com . In this example, we only have to re-initialize the domain suffix on replica3 with the data from caReplica2 : Clear SSSD's cache on every server to avoid authentication problems due to invalid data: Stop the SSSD service: Remove all cached content from SSSD: Start the SSSD service: Reboot the server. Additional resources The ipa-restore (1) man page also covers in detail how to handle complex replication scenarios during restoration. 8.10. Restoring from an encrypted backup This procedure restores an IdM server from an encrypted IdM backup. The ipa-restore utility automatically detects if an IdM backup is encrypted and restores it using the GPG2 root keyring. Prerequisites A GPG-encrypted IdM backup. See Creating encrypted IdM backups . The LDAP Directory Manager password The passphrase used when creating the GPG key Procedure If you used a custom keyring location when creating the GPG2 keys, verify that the USDGNUPGHOME environment variable is set to that directory. See Creating a GPG2 key . Provide the ipa-restore utility with the backup directory location. Enter the Directory Manager password. Enter the passphrase you used when creating the GPG key. Re-initialize all replicas connected to the restored server. See Restoring an IdM server from backup .
[ "ll /var/lib/ipa/backup/ ipa-full -2021-01-29-12-11-46 total 3056 -rw-r--r--. 1 root root 158 Jan 29 12:11 header -rw-r--r--. 1 root root 3121511 Jan 29 12:11 ipa-full.tar", "ll /var/lib/ipa/backup/ ipa-data -2021-01-29-12-14-23 total 1072 -rw-r--r--. 1 root root 158 Jan 29 12:14 header -rw-r--r--. 1 root root 1090388 Jan 29 12:14 ipa-data.tar", "ipa-backup Preparing backup on server.example.com Stopping IPA services Backing up ipaca in EXAMPLE-COM to LDIF Backing up userRoot in EXAMPLE-COM to LDIF Backing up EXAMPLE-COM Backing up files Starting IPA service Backed up to /var/lib/ipa/backup/ipa-full-2020-01-14-11-26-06 The ipa-backup command was successful", "ipa-backup --data", "ipa-backup --logs", "ipa-backup --data --online", "TMPDIR=/new/location ipa-backup", "ls /var/lib/ipa/backup/ipa-full-2020-01-14-11-26-06 header ipa-full.tar", "ipa-backup --gpg Preparing backup on server.example.com Stopping IPA services Backing up ipaca in EXAMPLE-COM to LDIF Backing up userRoot in EXAMPLE-COM to LDIF Backing up EXAMPLE-COM Backing up files Starting IPA service Encrypting /var/lib/ipa/backup/ipa-full-2020-01-13-14-38-00/ipa-full.tar Backed up to /var/lib/ipa/backup/ipa-full-2020-01-13-14-38-00 The ipa-backup command was successful", "ls /var/lib/ipa/backup/ipa-full-2020-01-13-14-38-00 header ipa-full.tar.gpg", "dnf install pinentry mkdir ~/.gnupg -m 700 echo \"pinentry-program /usr/bin/pinentry-curses\" >> ~/.gnupg/gpg-agent.conf", "cat >key-input <<EOF %echo Generating a standard key Key-Type: RSA Key-Length: 2048 Name-Real: GPG User Name-Comment: first key Name-Email: [email protected] Expire-Date: 0 %commit %echo Finished creating standard key EOF", "export GNUPGHOME= /root/backup mkdir -p USDGNUPGHOME -m 700", "gpg2 --batch --gen-key key-input", "β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ Please enter the passphrase to β”‚ β”‚ protect your new key β”‚ β”‚ β”‚ β”‚ Passphrase: <passphrase> β”‚ β”‚ β”‚ β”‚ <OK> <Cancel> β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜", "β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ Please re-enter this passphrase β”‚ β”‚ β”‚ β”‚ Passphrase: <passphrase> β”‚ β”‚ β”‚ β”‚ <OK> <Cancel> β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜", "gpg: keybox '/root/backup/pubring.kbx' created gpg: Generating a standard key gpg: /root/backup/trustdb.gpg: trustdb created gpg: key BF28FFA302EF4557 marked as ultimately trusted gpg: directory '/root/backup/openpgp-revocs.d' created gpg: revocation certificate stored as '/root/backup/openpgp-revocs.d/8F6FCF10C80359D5A05AED67BF28FFA302EF4557.rev' gpg: Finished creating standard key", "gpg2 --list-secret-keys gpg: checking the trustdb gpg: marginals needed: 3 completes needed: 1 trust model: pgp gpg: depth: 0 valid: 1 signed: 0 trust: 0-, 0q, 0n, 0m, 0f, 1u / root /backup/pubring.kbx ------------------------ sec rsa2048 2020-01-13 [SCEA] 8F6FCF10C80359D5A05AED67BF28FFA302EF4557 uid [ultimate] GPG User (first key) <[email protected]>", "ipa-restore ipa-full-2020-01-14-12-02-32", "ipa-restore /mybackups/ipa-data-2020-02-01-05-30-00", 
"ipa-restore --data ipa-full-2020-01-14-12-02-32", "Directory Manager (existing master) password:", "Preparing restore from /var/lib/ipa/backup/ipa-full-2020-01-14-12-02-32 on server1.example.com Performing FULL restore from FULL backup Temporary setting umask to 022 Restoring data will overwrite existing live data. Continue to restore? [no]: yes", "Each master will individually need to be re-initialized or re-created from this one. The replication agreements on masters running IPA 3.1 or earlier will need to be manually re-enabled. See the man page for details. Disabling all replication. Disabling replication agreement on server1.example.com to caReplica2.example.com Disabling CA replication agreement on server1.example.com to caReplica2.example.com Disabling replication agreement on caReplica2.example.com to server1.example.com Disabling replication agreement on caReplica2.example.com to replica3.example.com Disabling CA replication agreement on caReplica2.example.com to server1.example.com Disabling replication agreement on replica3.example.com to caReplica2.example.com", "Stopping IPA services Systemwide CA database updated. Restoring files Systemwide CA database updated. Restoring from userRoot in EXAMPLE-COM Restoring from ipaca in EXAMPLE-COM Restarting GSS-proxy Starting IPA services Restarting SSSD Restarting oddjobd Restoring umask to 18 The ipa-restore command was successful", "ipa topologysegment-find domain ------------------ 2 segments matched ------------------ Segment name: server1.example.com-to-caReplica2.example.com Left node: server1.example.com Right node: caReplica2.example.com Connectivity: both Segment name: caReplica2.example.com-to-replica3.example.com Left node: caReplica2.example.com Right node: replica3.example.com Connectivity: both ---------------------------- Number of entries returned 2 ----------------------------", "ipa-replica-manage re-initialize --from= server1.example.com Update in progress, 2 seconds elapsed Update succeeded", "ipa topologysegment-find ca ----------------- 1 segment matched ----------------- Segment name: server1.example.com-to-caReplica2.example.com Left node: server1.example.com Right node: caReplica2.example.com Connectivity: both ---------------------------- Number of entries returned 1 ----------------------------", "ipa-csreplica-manage re-initialize --from= server1.example.com Directory Manager password: Update in progress, 3 seconds elapsed Update succeeded", "ipa-replica-manage re-initialize --from= caReplica2.example.com Directory Manager password: Update in progress, 3 seconds elapsed Update succeeded", "systemctl stop sssd", "sss_cache -E", "systemctl start sssd", "echo USDGNUPGHOME /root/backup", "ipa-restore ipa-full-2020-01-13-18-30-54", "Directory Manager (existing master) password:", "β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ Please enter the passphrase to unlock the OpenPGP secret key: β”‚ β”‚ \"GPG User (first key) <[email protected]>\" β”‚ β”‚ 2048-bit RSA key, ID BF28FFA302EF4557, β”‚ β”‚ created 2020-01-13. β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ Passphrase: <passphrase> β”‚ β”‚ β”‚ β”‚ <OK> <Cancel> β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/planning_identity_management/backing-up-and-restoring-idm_planning-identity-management
Chapter 1. Introducing remote host configuration and management
Chapter 1. Introducing remote host configuration and management Remote host configuration is a powerful tool that enables the following capabilities: Easy registration. With the rhc client, you can register systems to Red Hat Subscription Management (RHSM) and Red Hat Insights for Red Hat Enterprise Linux. Configuration management. Using the remote host configuration manager, you can configure the connection with Insights for Red Hat Enterprise Linux for all of the Red Hat Enterprise Linux (RHEL) systems in your infrastructure. You can enable or disable the rhc client, direct remediations, and other application settings from Insights for Red Hat Enterprise Linux. Remediations from Insights for Red Hat Enterprise Linux. When systems are connected to Insights for Red Hat Enterprise Linux with the rhc client, you can manage the end-to-end experience of finding and fixing issues. Registered systems can directly consume remediation playbooks executed from the Insights for Red Hat Enterprise Linux application. Supported configurations The rhc client is supported on systems registered to Insights for Red Hat Enterprise Linux and running Red Hat Enterprise Linux (RHEL) 8.5 and later, and RHEL 9.0 and later. Single-command registration is supported by RHEL 8.6 and later, and RHEL 9.0 and later. 1.1. Remote host configuration components The complete remote host configuration solution comes with two main components: a client-side daemon and a server-side service to facilitate system management. The remote configuration client. The rhc client comes preinstalled with all Red Hat Enterprise Linux (RHEL) 8.5 and later installations, with the exception of minimal installation. The rhc client consists of the following utility programs: The rhcd daemon runs on the system and listens for messages from the Red Hat Hybrid Cloud Console. It also receives and executes remediation playbooks for systems that are properly configured. The rhc command-line utility for RHEL. The remote host configuration manager. With the remote host configuration manager user interface, you can enable or disable Insights for Red Hat Enterprise Linux connectivity and features. To maximize the value of remote host configuration, you must install additional packages. To allow systems to be managed by remote host configuration manager and to support the execution of remediation playbooks, install the following additional packages: ansible or ansible-core rhc-worker-playbook Important Starting with RHEL 8.6 and RHEL 9.0, the ansible-core and rhc-worker-playbook packages should automatically be installed in the background to make your system fully manageable from the remote host configuration manager user interface. However, a known bug is preventing the process from completing as expected. Thus, the packages must be installed manually after registration. 1.2. User Access settings in the Red Hat Hybrid Cloud Console User Access is the Red Hat implementation of role-based access control (RBAC). Your Organization Administrator uses User Access to configure what users can see and do on the Red Hat Hybrid Cloud Console (the console): Control user access by organizing roles instead of assigning permissions individually to users. Create groups that include roles and their corresponding permissions. Assign users to these groups, allowing them to inherit the permissions associated with their group's roles. 1.2.1. 
Predefined User Access groups and roles To make groups and roles easier to manage, Red Hat provides two predefined groups and a set of predefined roles. 1.2.1.1. Predefined groups The Default access group contains all users in your organization. Many predefined roles are assigned to this group. It is automatically updated by Red Hat. Note If the Organization Administrator makes changes to the Default access group its name changes to Custom default access group and it is no longer updated by Red Hat. The Default admin access group contains only users who have Organization Administrator permissions. This group is automatically maintained and users and roles in this group cannot be changed. On the Hybrid Cloud Console navigate to Red Hat Hybrid Cloud Console > the Settings icon (βš™) > Identity & Access Management > User Access > Groups to see the current groups in your account. This view is limited to the Organization Administrator. 1.2.1.2. Predefined roles assigned to groups The Default access group contains many of the predefined roles. Because all users in your organization are members of the Default access group, they inherit all permissions assigned to that group. The Default admin access group includes many (but not all) predefined roles that provide update and delete permissions. The roles in this group usually include administrator in their name. On the Hybrid Cloud Console navigate to Red Hat Hybrid Cloud Console > the Settings icon (βš™) > Identity & Access Management > User Access > Roles to see the current roles in your account. You can see how many groups each role is assigned to. This view is limited to the Organization Administrator. See User Access Configuration Guide for Role-based Access Control (RBAC) for additional information. 1.2.2. Access permissions The Prerequisites for each procedure list which predefined role provides the permissions you must have. As a user, you can navigate to Red Hat Hybrid Cloud Console > the Settings icon (βš™) > My User Access to view the roles and application permissions currently inherited by you. If you try to access Insights for Red Hat Enterprise Linux features and see a message that you do not have permission to perform this action, you must obtain additional permissions. The Organization Administrator or the User Access administrator for your organization configures those permissions. Use the Red Hat Hybrid Cloud Console Virtual Assistant to ask "Contact my Organization Administrator". The assistant sends an email to the Organization Administrator on your behalf. 1.2.3. User Access roles for remote host configuration and management There are several User Access roles that are relevant for Red Hat Insights for Red Hat Enterprise Linux users. These roles determine if an Insights user can simply view settings or change them, and use remediation features. User Access roles for using the Remote Host Configuration Manager in the Insights for Red Hat Enterprise Linux web console RHC administrator. Members in a group with this role can perform any operations in the rhc manager. RHC user. This is a default permission for all users on your organization's Red Hat Hybrid Cloud Console account, allowing anyone to see the current status of the configuration. User Access roles for using remediation features in the Insights for Red Hat Enterprise Linux web console Remediations administrator. Members in a group with this role can perform any available operation against any remediations resource, including direct remediations. Remediations user. 
Members in a group with this role can create, view, update, and delete operations against any remediations resource. This is a default permission given to all Hybrid Cloud Console users on your account.
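A minimal sketch of the registration and package installation flow described in Section 1.1, assuming registration with an activation key on RHEL 8.6 or later; the activation key and organization ID are placeholders, and the exact rhc connect options should be confirmed with rhc connect --help on your system:
# Register the system with RHSM and Insights in a single step
rhc connect --activation-key <activation_key> --organization <organization_id>
# Install the packages required for full manageability and playbook execution
dnf install ansible-core rhc-worker-playbook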
null
https://docs.redhat.com/en/documentation/red_hat_hybrid_cloud_console/1-latest/html/remote_host_configuration_and_management/intro-rhc
8.97. libtevent
8.97. libtevent 8.97.1. RHBA-2013:1552 - libtevent bug fix and enhancement update Updated libtevent packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The libtevent packages provide Tevent, an event system based on the talloc memory management library. Tevent supports many event types, including timers, signals, and the classic file descriptor events. Tevent also provides helpers to deal with asynchronous code represented by the tevent_req (Tevent Request) functions. Note The libtevent packages have been upgraded to upstream version 0.9.18, which provides a number of bug fixes and enhancements over the previous version. (BZ# 951034 ) Bug Fixes BZ# 975489 Prior to this update, a condition in the poll backend copied a 64-bit variable into an unsigned integer variable, which was smaller than 64 bits on 32-bit architectures. Using the unsigned integer variable in the condition caused the condition to always evaluate as false. The variable type has been changed to uint64_t, guaranteeing its width to be 64 bits on all architectures. As a result, the condition now yields expected results. BZ# 978962 Previously, the tevent_loop_wait() function internally registered its own signal handler, which was never removed. Consequently, tevent_loop_wait() could not end even when there were no registered custom handlers. This update applies a patch to fix this bug, and tevent_loop_wait() now works as expected. Users of libtevent are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/libtevent
13.4. InfiniBand and RDMA related software packages
13.4. InfiniBand and RDMA related software packages Because RDMA applications are so different from Berkeley Sockets based applications, and from normal IP networking, most applications that are used on an IP network cannot be used directly on an RDMA network. Red Hat Enterprise Linux 7 comes with a number of different software packages for RDMA network administration, testing and debugging, high level software development APIs, and performance analysis. In order to utilize these networks, some or all of these packages need to be installed (this list is not exhaustive, but does cover the most important packages related to RDMA). Required packages: rdma - responsible for kernel initialization of the RDMA stack. libibverbs - provides the InfiniBand Verbs API. opensm - subnet manager (only required on one machine, and only if there is no subnet manager active on the fabric). user space driver for installed hardware - one or more of: infinipath-psm , libcxgb3 , libcxgb4 , libehca , libipathverbs , libmthca , libmlx4 , libmlx5 , libnes , and libocrdma . Note that libehca is only available for IBM Power Systems servers. Recommended packages: librdmacm , librdmacm-utils , and ibacm - Connection management library that is aware of the differences between InfiniBand, iWARP, and RoCE and is able to properly open connections across all of these hardware types, some simple test programs for verifying the operation of the network, and a caching daemon that integrates with the library to make remote host resolution faster in large clusters. libibverbs-utils - Simple Verbs based programs for querying the installed hardware and verifying communications over the fabric. infiniband-diags and ibutils - Provide a number of useful debugging tools for InfiniBand fabric management. These provide only very limited functionality on iWARP or RoCE as most of the tools work at the InfiniBand link layer, not the Verbs API layer. perftest and qperf - Performance testing applications for various types of RDMA communications. Optional packages: These packages are available in the Optional channel. Before installing packages from the Optional channel, see Scope of Coverage Details . Information on subscribing to the Optional channel can be found in the Red Hat Knowledgebase solution How to access Optional and Supplementary channels . dapl , dapl-devel , and dapl-utils - Provide a different API for RDMA than the Verbs API. There is both a runtime component and a development component to these packages. openmpi , mvapich2 , and mvapich2-psm - MPI stacks that have the ability to use RDMA communications. User-space applications writing to these stacks are not necessarily aware that RDMA communications are taking place.
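As a sketch of installing a basic RDMA stack from the packages listed above; the user space driver to install depends on the installed hardware, and libmlx4 is shown here only as an example:
# Install the core stack, the recommended libraries, and the diagnostic tools
yum install rdma libibverbs libibverbs-utils libmlx4 librdmacm librdmacm-utils ibacm infiniband-diags perftest qperf
# Verify that the installed hardware is visible through the Verbs API
ibv_devices
ibv_devinfo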
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/sec-infiniband_and_rdma_related_software_packages
2.3.3. Disabling ACPI Completely in the grub.conf File
2.3.3. Disabling ACPI Completely in the grub.conf File The preferred method of disabling ACPI Soft-Off is with chkconfig management ( Section 2.3.1, "Disabling ACPI Soft-Off with chkconfig Management" ). If the preferred method is not effective for your cluster, you can disable ACPI Soft-Off with the BIOS power management ( Section 2.3.2, "Disabling ACPI Soft-Off with the BIOS" ). If neither of those methods is effective for your cluster, you can disable ACPI completely by appending acpi=off to the kernel boot command line in the grub.conf file. Important This method completely disables ACPI; some computers do not boot correctly if ACPI is completely disabled. Use this method only if the other methods are not effective for your cluster. You can disable ACPI completely by editing the grub.conf file of each cluster node as follows: Open /boot/grub/grub.conf with a text editor. Append acpi=off to the kernel boot command line in /boot/grub/grub.conf (refer to Example 2.12, "Kernel Boot Command Line with acpi=off Appended to It" ). Reboot the node. When the cluster is configured and running, verify that the node turns off immediately when fenced. Note You can fence the node with the fence_node command or Conga . Example 2.12. Kernel Boot Command Line with acpi=off Appended to It In this example, acpi=off has been appended to the kernel boot command line - the line starting with "kernel /vmlinuz-2.6.18-36.el5".
[ "grub.conf generated by anaconda # Note that you do not have to rerun grub after making changes to this file NOTICE: You have a /boot partition. This means that all kernel and initrd paths are relative to /boot/, eg. root (hd0,0) kernel /vmlinuz-version ro root=/dev/VolGroup00/LogVol00 initrd /initrd-version.img #boot=/dev/hda default=0 timeout=5 serial --unit=0 --speed=115200 terminal --timeout=5 serial console title Red Hat Enterprise Linux Server (2.6.18-36.el5) root (hd0,0) kernel /vmlinuz-2.6.18-36.el5 ro root=/dev/VolGroup00/LogVol00 console=ttyS0,115200n8 acpi=off initrd /initrd-2.6.18-36.el5.img" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/s2-apci-disable-boot-CA
Upgrade Guide
Upgrade Guide Red Hat Ceph Storage 8 Upgrading a Red Hat Ceph Storage Cluster Red Hat Ceph Storage Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/upgrade_guide/index
5.362. xorg-x11-drv-ati and mesa
5.362. xorg-x11-drv-ati and mesa 5.362.1. RHEA-2012:0903 - xorg-x11-drv-ati and mesa bug fix and enhancement update Updated xorg-x11-drv-ati and mesa packages that fix a bug and add an enhancement are now available for Red Hat Enterprise Linux 6. The xorg-x11-drv-ati packages provide a driver for ATI graphics cards for the X.Org implementation of the X Window System. The mesa packages provide hardware-accelerated drivers for many popular graphics chipsets, and Mesa, a 3D graphics application programming interface (API) compatible with the Open Graphics Library (OpenGL). Bug Fix BZ# 821873 Previously, Mesa did not recognize Intel HD Graphics chipsets integrated into Intel E3-family processors. Consequently, these chipsets provided limited display resolutions and their graphics performance was low. This update adds support for these chipsets. As a result, the chipsets are recognized by Mesa and perform as expected. Enhancement BZ# 788166 , BZ# 788168 This update adds support for AMD FirePro M100 (alternatively referred to as AMD FirePro M2000), AMD Radeon HD 74xx Series, AMD Radeon HD 75xx Series, and AMD Radeon HD 76xx Series graphics cards, and the AMD FusionA integrated graphics processing unit. All users of xorg-x11-drv-ati and Mesa are advised to upgrade to these updated packages, which fix this bug and add this enhancement.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/xorg-x11-drv-ati_and_mesa
1.8.2. Making the Kickstart File Available on the Network
1.8.2. Making the Kickstart File Available on the Network Network installations using kickstart are quite common, because system administrators can easily automate the installation on many networked computers quickly and painlessly. In general, the approach most commonly used is for the administrator to have both a BOOTP/DHCP server and an NFS server on the local network. The BOOTP/DHCP server is used to give the client system its networking information, while the actual files used during the installation are served by the NFS server. Often, these two servers run on the same physical machine, but they are not required to. To perform a network-based kickstart installation, you must have a BOOTP/DHCP server on your network, and it must include configuration information for the machine on which you are attempting to install Red Hat Enterprise Linux. The BOOTP/DHCP server provides the client with its networking information as well as the location of the kickstart file. If a kickstart file is specified by the BOOTP/DHCP server, the client system attempts an NFS mount of the file's path, and copies the specified file to the client, using it as the kickstart file. The exact settings required vary depending on the BOOTP/DHCP server you use. Here is an example of the relevant lines from the dhcpd.conf file for the DHCP server: Note that you should replace the value after filename with the name of the kickstart file (or the directory in which the kickstart file resides) and the value after next-server with the NFS server name. If the file name returned by the BOOTP/DHCP server ends with a slash ("/"), then it is interpreted as a path only. In this case, the client system mounts that path using NFS, and searches for a particular file. The file name the client searches for is: The <ip-addr> section of the file name should be replaced with the client's IP address in dotted decimal notation. For example, the file name for a computer with an IP address of 10.10.0.1 would be 10.10.0.1-kickstart . Note that if you do not specify a server name, then the client system attempts to use the server that answered the BOOTP/DHCP request as its NFS server. If you do not specify a path or file name, the client system tries to mount /kickstart from the BOOTP/DHCP server and tries to find the kickstart file using the same <ip-addr> -kickstart file name as described above.
[ "filename \"/usr/new-machine/kickstart/\" ; next-server blarg.redhat.com;", "<ip-addr> -kickstart" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Making_the_Kickstart_File_Available-Making_the_Kickstart_File_Available_on_the_Network