title | content | commands | url
---|---|---|---
8.4. Table Metadata | 8.4. Table Metadata SYS.Tables This table supplies information about all the groups (tables, views and documents) in the virtual database. Column Name Type Description VDBName string VDB name SchemaName string Schema Name Name string Short group name Type string Table type (Table, View, Document, ...) NameInSource string Name of this group in the source IsPhysical boolean True if this is a source table SupportsUpdates boolean True if group can be updated UID string Group unique ID OID integer Unique ID Cardinality integer Approximate number of rows in the group Description string Description IsSystem boolean True if in system table IsMaterialized boolean True if materialized. SYS.Columns This table supplies information about all the elements (columns, tags, attributes, etc) in the virtual database. Column Name Type Description VDBName string VDB name SchemaName string Schema Name TableName string Table name Name string Element name (not qualified) Position integer Position in group (1-based) NameInSource string Name of element in source DataType string Data Virtualization runtime data type name Scale integer Number of digits after the decimal point ElementLength integer Element length (mostly used for strings) sLengthFixed boolean Whether the length is fixed or variable SupportsSelect boolean Element can be used in SELECT SupportsUpdates boolean Values can be inserted or updated in the element IsCaseSensitive boolean Element is case-sensitive IsSigned boolean Element is signed numeric value IsCurrency boolean Element represents monetary value IsAutoIncremented boolean Element is auto-incremented in the source NullType string Nullability: "Nullable", "No Nulls", "Unknown" MinRange string Minimum value MaxRange string Maximum value DistinctCount integer Distinct value count, -1 can indicate unknown NullCount integer Null value count, -1 can indicate unknown SearchType string Searchability: "Searchable", "All Except Like", "Like Only", "Unsearchable" Format string Format of string value DefaultValue string Default value JavaClass string Java class that will be returned Precision integer Number of digits in numeric value CharOctetLength integer Measure of return value size Radix integer Radix for numeric values GroupUpperName string Upper-case full group name UpperName string Upper-case element name UID string Element unique ID OID integer Unique ID Description string Description SYS.Keys This table supplies information about primary, foreign, and unique keys. Column Name Type Description VDBName string VDB name SchemaName string Schema Name Table Name string Table name Name string Key name Description string Description NameInSource string Name of key in source system Type string Type of key: "Primary", "Foreign", "Unique", etc IsIndexed boolean True if key is indexed RefKeyUID string Referenced key UID (if foreign key) UID string Key unique ID OID integer Unique ID TableUID string - RefTableUID string - ColPositions short[] - SYS.KeyColumns This table supplies information about the columns referenced by a key. Column Name Type Description VDBName string VDB name SchemaName string Schema Name TableName string Table name Name string Element name KeyName string Key name KeyType string Key type: "Primary", "Foreign", "Unique", etc RefKeyUID string Referenced key UID UID string Key UID OID integer Unique ID Position integer Position in key Warning The OID column is no longer used on system tables. Use UID instead. 
SYS.Spatial_Sys_Ref Here are the attributes for this table: Column Name Type Description srid integer Spatial Reference Identifier auth_name string Name of the standard or standards body. auth_srid integer SRID for the auth_name authority. srtext string Well-Known Text representation proj4text string For use with the Proj4 library. SYS.Geometry_Columns Here are the attributes for this table: Column Name Type Description F_TABLE_CATALOG string catalog name F_TABLE_SCHEMA string schema name F_TABLE_NAME string table name F_GEOMETRY_COLUMN string column name COORD_DIMENSION integer Number of coordinate dimensions SRID integer Spatial Reference Identifier TYPE string Geometry type name Note The coord_dimension and srid properties are determined from the coord_dimension and the srid extension properties on the column. When possible, these values are set automatically by the relevant importer. If they are not set, they are reported as 2 and 0 respectively. If client logic expects actual values, then you may need to set them manually. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/table_metadata |
Chapter 3. Important update on odo | Chapter 3. Important update on odo Red Hat does not provide information about odo on the OpenShift Container Platform documentation site. See the documentation maintained by Red Hat and the upstream community for information related to odo . Important For the materials maintained by the upstream community, Red Hat provides support under Cooperative Community Support . | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/cli_tools/developer-cli-odo |
Chapter 6. Developer previews | Chapter 6. Developer previews This section describes the developer preview features introduced in Red Hat OpenShift Data Foundation 4.18. Important Developer preview feature is subject to Developer preview support limitations. Developer preview releases are not intended to be run in production environments. The clusters deployed with the developer preview features are considered to be development clusters and are not supported through the Red Hat Customer Portal case management system. If you need assistance with developer preview features, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. 6.1. Consistent RADOS block device (RBD) group disaster recovery The OpenShift Data Foundation Disaster Recovery solution provides a way to consistently mirror multiple ReadWriteOnce (RWO) persistent volumes (PVs) with regional disaster recovery. For more information, see the knowledgebase article, Enabling and Managing Consistency Groups in OpenShift 4.18 . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/4.18_release_notes/developer_previews |
Chapter 4. Configuring Red Hat Quay | Chapter 4. Configuring Red Hat Quay Before running the Red Hat Quay service as a container, you need to use that same Quay container to create the configuration file ( config.yaml ) needed to deploy Red Hat Quay. To do that, you pass a config argument and a password (replace my-secret-password here) to the Quay container. Later, you use that password to log into the configuration tool as the user quayconfig . Here's an example of how to do that: Start quay in setup mode : On the first quay node, run the following: Open browser : When the quay configuration tool starts up, open a browser to the URL and port 8080 of the system you are running the configuration tool on (for example http://myquay.example.com:8080 ). You are prompted for a username and password. Log in as quayconfig : When prompted, enter the quayconfig username and password (the one from the podman run command line). Fill in the required fields : When you start the config tool without mounting an existing configuration bundle, you will be booted into an initial setup session. In a setup session, default values will be filled automatically. The following steps will walk through how to fill out the remaining required fields. Identify the database : For the initial setup, you must include the following information about the type and location of the database to be used by Red Hat Quay: Database Type : Choose MySQL or PostgreSQL. MySQL will be used in the basic example; PostgreSQL is used with the high availability Red Hat Quay on OpenShift examples. Database Server : Identify the IP address or hostname of the database, along with the port number if it is different from 3306. Username : Identify a user with full access to the database. Password : Enter the password you assigned to the selected user. Database Name : Enter the database name you assigned when you started the database server. SSL Certificate : For production environments, you should provide an SSL certificate to connect to the database. The following figure shows an example of the screen for identifying the database used by Red Hat Quay: Identify the Redis hostname, Server Configuration and add other desired settings : Other settings you can add to complete the setup are as follows. More settings are needed for a high availability Red Hat Quay deployment than for a basic deployment: For the basic, test configuration, identifying the Redis Hostname should be all you need to do. However, you can add other features, such as Clair Scanning and Repository Mirroring, as described at the end of this procedure. For the high availability and OpenShift configurations, more settings are needed (as noted below) to allow for shared storage, secure communications between systems, and other features. Here are the settings you need to consider: Custom SSL Certificates : Upload custom or self-signed SSL certificates for use by Red Hat Quay. See Using SSL to protect connections to Red Hat Quay for details. Recommended for high availability. Important Using SSL certificates is recommended for both basic and high availability deployments. If you decide not to use SSL, you must configure your container clients to use your new Red Hat Quay setup as an insecure registry as described in Test an Insecure Registry . Basic Configuration : Upload a company logo to rebrand your Red Hat Quay registry. Server Configuration : Hostname or IP address to reach the Red Hat Quay service, along with TLS indication (recommended for production installations). 
The Server Hostname is required for all Red Hat Quay deployments. TLS termination can be done in two different ways: On the instance itself, with all TLS traffic governed by the nginx server in the Quay container (recommended). On the load balancer. This is not recommended. Access to Red Hat Quay could be lost if the TLS setup is not done correctly on the load balancer. Data Consistency Settings : Select to relax logging consistency guarantees to improve performance and availability. Time Machine : Allow older image tags to remain in the repository for set periods of time and allow users to select their own tag expiration times. redis : Identify the hostname or IP address (and optional password) to connect to the redis service used by Red Hat Quay. Repository Mirroring : Choose the checkbox to Enable Repository Mirroring. With this enabled, you can create repositories in your Red Hat Quay cluster that mirror selected repositories from remote registries. Before you can enable repository mirroring, start the repository mirroring worker as described later in this procedure. Registry Storage : Identify the location of storage. A variety of cloud and local storage options are available. Remote storage is required for high availability. Identify the Ceph storage location if you are following the example for Red Hat Quay high availability storage. On OpenShift, the example uses Amazon S3 storage. Action Log Storage Configuration : Action logs are stored in the Red Hat Quay database by default. If you have a large amount of action logs, you can have those logs directed to Elasticsearch for later search and analysis. To do this, change the value of Action Logs Storage to Elasticsearch and configure related settings as described in Configure action log storage . Action Log Rotation and Archiving : Select to enable log rotation, which moves logs older than 30 days into storage, then indicate storage area. Security Scanner : Enable security scanning by selecting a security scanner endpoint and authentication key. To setup Clair to do image scanning, refer to Clair Setup and Configuring Clair . Recommended for high availability. Application Registry : Enable an additional application registry that includes things like Kubernetes manifests or Helm charts (see the App Registry specification ). rkt Conversion : Allow rkt fetch to be used to fetch images from Red Hat Quay registry. Public and private GPG2 keys are needed. This field is deprecated. E-mail : Enable e-mail to use for notifications and user password resets. Internal Authentication : Change default authentication for the registry from Local Database to LDAP, Keystone (OpenStack), JWT Custom Authentication, or External Application Token. External Authorization (OAuth) : Enable to allow GitHub or GitHub Enterprise to authenticate to the registry. Google Authentication : Enable to allow Google to authenticate to the registry. Access Settings : Basic username/password authentication is enabled by default. Other authentication types that can be enabled include: external application tokens (user-generated tokens used with docker or rkt commands), anonymous access (enable for public access to anyone who can get to the registry), user creation (let users create their own accounts), encrypted client password (require command-line user access to include encrypted passwords), and prefix username autocompletion (disable to require exact username matches on autocompletion). 
Registry Protocol Settings : Leave the Restrict V1 Push Support checkbox enabled to restrict access to Docker V1 protocol pushes. Although Red Hat recommends against enabling Docker V1 push protocol, if you do allow it, you must explicitly whitelist the namespaces for which it is enabled. Dockerfile Build Support : Enable to allow users to submit Dockerfiles to be built and pushed to Red Hat Quay. This is not recommended for multitenant environments. Validate the changes : Select Validate Configuration Changes . If validation is successful, you will be presented with the following Download Configuration modal: Download configuration : Select the Download Configuration button and save the tarball ( quay-config.tar.gz ) to a local directory to use later to start Red Hat Quay. At this point, you can shut down the Red Hat Quay configuration tool and close your browser. Next, copy the tarball file to the system on which you want to install your first Red Hat Quay node. For a basic install, you might just be running Red Hat Quay on the same system. | [
"sudo podman run --rm -it --name quay_config -p 8080:8080 registry.redhat.io/quay/quay-rhel8:v3.13.3 config my-secret-password"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/deploy_red_hat_quay_-_high_availability/configuring_red_hat_quay |
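Once the configuration bundle has been downloaded and copied to the target node, the next step is to start Quay in registry mode with that configuration. The sketch below is a hedged illustration rather than the guide's exact command: it reuses the image tag from the setup command above and assumes the conventional /conf/stack and /datastorage mount points; adjust the paths, ports, and tag to your environment.

```
# Extract the downloaded configuration bundle into a local config directory
# (directory paths here are illustrative).
mkdir -p /mnt/quay/config /mnt/quay/storage
tar xzf quay-config.tar.gz -C /mnt/quay/config

# Start Red Hat Quay using the generated configuration and local storage.
sudo podman run --detach --name=quay --restart=always \
  -p 80:8080 -p 443:8443 \
  -v /mnt/quay/config:/conf/stack:Z \
  -v /mnt/quay/storage:/datastorage:Z \
  registry.redhat.io/quay/quay-rhel8:v3.13.3
```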
Chapter 21. MachineConfiguration [operator.openshift.io/v1] | Chapter 21. MachineConfiguration [operator.openshift.io/v1] Description MachineConfiguration provides information to configure an operator to manage Machine Configuration. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 21.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec is the specification of the desired behavior of the Machine Config Operator status object status is the most recently observed status of the Machine Config Operator 21.1.1. .spec Description spec is the specification of the desired behavior of the Machine Config Operator Type object Property Type Description failedRevisionLimit integer failedRevisionLimit is the number of failed static pod installer revisions to keep on disk and in the api -1 = unlimited, 0 or unset = 5 (default) forceRedeploymentReason string forceRedeploymentReason can be used to force the redeployment of the operand by providing a unique string. This provides a mechanism to kick a previously failed deployment and provide a reason why you think it will work this time instead of failing again on the same config. logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". succeededRevisionLimit integer succeededRevisionLimit is the number of successful static pod installer revisions to keep on disk and in the api -1 = unlimited, 0 or unset = 5 (default) unsupportedConfigOverrides `` unsupportedConfigOverrides overrides the final configuration that was computed by the operator. Red Hat does not support the use of this field. Misuse of this field could lead to unexpected behavior or conflict with other configuration options. Seek guidance from the Red Hat support before using this field. Use of this property blocks cluster upgrades, it must be removed before upgrading your cluster. 21.1.2. 
.status Description status is the most recently observed status of the Machine Config Operator Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. latestAvailableRevision integer latestAvailableRevision is the deploymentID of the most recent deployment latestAvailableRevisionReason string latestAvailableRevisionReason describe the detailed reason for the most recent deployment nodeStatuses array nodeStatuses track the deployment values and errors across individual nodes nodeStatuses[] object NodeStatus provides information about the current state of a particular node managed by this operator. observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 21.1.3. .status.conditions Description conditions is a list of conditions and their status Type array 21.1.4. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Property Type Description lastTransitionTime string message string reason string status string type string 21.1.5. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 21.1.6. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 21.1.7. .status.nodeStatuses Description nodeStatuses track the deployment values and errors across individual nodes Type array 21.1.8. .status.nodeStatuses[] Description NodeStatus provides information about the current state of a particular node managed by this operator. Type object Property Type Description currentRevision integer currentRevision is the generation of the most recently successful deployment lastFailedCount integer lastFailedCount is how often the installer pod of the last failed revision failed. lastFailedReason string lastFailedReason is a machine readable failure reason string. lastFailedRevision integer lastFailedRevision is the generation of the deployment we tried and failed to deploy. lastFailedRevisionErrors array (string) lastFailedRevisionErrors is a list of human readable errors during the failed deployment referenced in lastFailedRevision. lastFailedTime string lastFailedTime is the time the last failed revision failed the last time. lastFallbackCount integer lastFallbackCount is how often a fallback to a revision happened. 
nodeName string nodeName is the name of the node targetRevision integer targetRevision is the generation of the deployment we're trying to apply 21.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/machineconfigurations DELETE : delete collection of MachineConfiguration GET : list objects of kind MachineConfiguration POST : create a MachineConfiguration /apis/operator.openshift.io/v1/machineconfigurations/{name} DELETE : delete a MachineConfiguration GET : read the specified MachineConfiguration PATCH : partially update the specified MachineConfiguration PUT : replace the specified MachineConfiguration /apis/operator.openshift.io/v1/machineconfigurations/{name}/status GET : read status of the specified MachineConfiguration PATCH : partially update status of the specified MachineConfiguration PUT : replace status of the specified MachineConfiguration 21.2.1. /apis/operator.openshift.io/v1/machineconfigurations HTTP method DELETE Description delete collection of MachineConfiguration Table 21.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind MachineConfiguration Table 21.2. HTTP responses HTTP code Reponse body 200 - OK MachineConfigurationList schema 401 - Unauthorized Empty HTTP method POST Description create a MachineConfiguration Table 21.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 21.4. Body parameters Parameter Type Description body MachineConfiguration schema Table 21.5. HTTP responses HTTP code Reponse body 200 - OK MachineConfiguration schema 201 - Created MachineConfiguration schema 202 - Accepted MachineConfiguration schema 401 - Unauthorized Empty 21.2.2. /apis/operator.openshift.io/v1/machineconfigurations/{name} Table 21.6. Global path parameters Parameter Type Description name string name of the MachineConfiguration HTTP method DELETE Description delete a MachineConfiguration Table 21.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 21.8. 
HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified MachineConfiguration Table 21.9. HTTP responses HTTP code Reponse body 200 - OK MachineConfiguration schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified MachineConfiguration Table 21.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 21.11. HTTP responses HTTP code Reponse body 200 - OK MachineConfiguration schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified MachineConfiguration Table 21.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 21.13. Body parameters Parameter Type Description body MachineConfiguration schema Table 21.14. HTTP responses HTTP code Reponse body 200 - OK MachineConfiguration schema 201 - Created MachineConfiguration schema 401 - Unauthorized Empty 21.2.3. /apis/operator.openshift.io/v1/machineconfigurations/{name}/status Table 21.15. 
Global path parameters Parameter Type Description name string name of the MachineConfiguration HTTP method GET Description read status of the specified MachineConfiguration Table 21.16. HTTP responses HTTP code Reponse body 200 - OK MachineConfiguration schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified MachineConfiguration Table 21.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 21.18. HTTP responses HTTP code Reponse body 200 - OK MachineConfiguration schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified MachineConfiguration Table 21.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 21.20. Body parameters Parameter Type Description body MachineConfiguration schema Table 21.21. HTTP responses HTTP code Reponse body 200 - OK MachineConfiguration schema 201 - Created MachineConfiguration schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/operator_apis/machineconfiguration-operator-openshift-io-v1 |
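As a hedged illustration of the endpoints listed above, the following sketch creates or updates a MachineConfiguration object and then reads back its status. The resource name cluster and the spec values are assumptions chosen for demonstration, not values taken from the reference tables.

```
# Create or update the cluster-scoped MachineConfiguration singleton.
oc apply -f - <<'EOF'
apiVersion: operator.openshift.io/v1
kind: MachineConfiguration
metadata:
  name: cluster
spec:
  managementState: Managed
  logLevel: Normal
  operatorLogLevel: Normal
EOF

# Inspect the spec and the most recently observed status.
oc get machineconfiguration cluster -o yaml
```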
8.3. Configuring a Virtual Domain as a Resource | 8.3. Configuring a Virtual Domain as a Resource You can configure a virtual domain that is managed by the libvirt virtualization framework as a cluster resource with the pcs resource create command, specifying VirtualDomain as the resource type. When configuring a virtual domain as a resource, take the following considerations into account: A virtual domain should be stopped before you configure it as a cluster resource. Once a virtual domain is a cluster resource, it should not be started, stopped, or migrated except through the cluster tools. Do not configure a virtual domain that you have configured as a cluster resource to start when its host boots. All nodes must have access to the necessary configuration files and storage devices for each managed virtual domain. If you want the cluster to manage services within the virtual domain itself, you can configure the virtual domain as a guest node. For information on configuring guest nodes, see Section 8.4, "The pacemaker_remote Service" For information on configuring virtual domains, see the Virtualization Deployment and Administration Guide . Table 8.3, "Resource Options for Virtual Domain Resources" describes the resource options you can configure for a VirtualDomain resource. Table 8.3. Resource Options for Virtual Domain Resources Field Default Description config (required) Absolute path to the libvirt configuration file for this virtual domain. hypervisor System dependent Hypervisor URI to connect to. You can determine the system's default URI by running the virsh --quiet uri command. force_stop 0 Always forcefully shut down ("destroy") the domain on stop. The default behavior is to resort to a forceful shutdown only after a graceful shutdown attempt has failed. You should set this to true only if your virtual domain (or your virtualization back end) does not support graceful shutdown. migration_transport System dependent Transport used to connect to the remote hypervisor while migrating. If this parameter is omitted, the resource will use libvirt 's default transport to connect to the remote hypervisor. migration_network_suffix Use a dedicated migration network. The migration URI is composed by adding this parameter's value to the end of the node name. If the node name is a fully qualified domain name (FQDN), insert the suffix immediately prior to the first period (.) in the FQDN. Ensure that this composed host name is locally resolvable and the associated IP address is reachable through the favored network. monitor_scripts To additionally monitor services within the virtual domain, add this parameter with a list of scripts to monitor. Note : When monitor scripts are used, the start and migrate_from operations will complete only when all monitor scripts have completed successfully. Be sure to set the timeout of these operations to accommodate this delay autoset_utilization_cpu true If set to true , the agent will detect the number of domainU 's vCPU s from virsh , and put it into the CPU utilization of the resource when the monitor is executed. autoset_utilization_hv_memory true If set it true, the agent will detect the number of Max memory from virsh , and put it into the hv_memory utilization of the source when the monitor is executed. migrateport random highport This port will be used in the qemu migrate URI. If unset, the port will be a random highport. snapshot Path to the snapshot directory where the virtual machine image will be stored. 
When this parameter is set, the virtual machine's RAM state will be saved to a file in the snapshot directory when stopped. If a state file is present for the domain on start, the domain will be restored to the same state it was in right before it stopped last. This option is incompatible with the force_stop option. In addition to the VirtualDomain resource options, you can configure the allow-migrate metadata option to allow live migration of the resource to another node. When this option is set to true , the resource can be migrated without loss of state. When this option is set to false , which is the default, the virtual domain will be shut down on the first node and then restarted on the second node when it is moved from one node to the other. The following example configures a VirtualDomain resource named VM . Because the allow-migrate option is set to true , a pcs resource move VM nodeX command is performed as a live migration. | [
"pcs resource create VM VirtualDomain config=.../vm.xml migration_transport=ssh meta allow-migrate=true"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/virtualnoderesource |
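To illustrate the allow-migrate behavior described above, the following commands are a minimal sketch; nodeX is a placeholder for a real cluster node name, and the domain should only ever be started, stopped, or moved through the cluster tools.

```
# With allow-migrate=true, moving the resource triggers a live migration
# rather than a stop on one node and a restart on the other.
pcs resource move VM nodeX

# Stop and start the managed domain through the cluster, never with virsh.
pcs resource disable VM
pcs resource enable VM
```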
Chapter 9. Using canonicalized DNS host names in IdM | Chapter 9. Using canonicalized DNS host names in IdM DNS canonicalization is disabled by default on Identity Management (IdM) clients to avoid potential security risks. For example, if an attacker controls the DNS server and a host in the domain, the attacker can cause the short host name, such as demo , to resolve to a compromised host, such as malicious.example.com . In this case, the user connects to a different server than expected. This chapter describes how to use canonicalized host names on IdM clients. 9.1. Adding an alias to a host principal By default, Identity Management (IdM) clients enrolled by using the ipa-client-install command do not allow the use of short host names in service principals. For example, users can use only host/[email protected] instead of host/[email protected] when accessing a service. Follow this procedure to add an alias to a Kerberos principal. Note that you can alternatively enable canonicalization of host names in the /etc/krb5.conf file. For details, see Enabling canonicalization of host names in service principals on clients . Prerequisites The IdM client is installed. The host name is unique in the network. Procedure Authenticate to IdM as the admin user: Add the alias to the host principal. For example, to add the demo alias to the demo.example.com host principal: 9.2. Enabling canonicalization of host names in service principals on clients Follow this procedure to enable canonicalization of host names in service principals on clients. Note that if you use host principal aliases, as described in Adding an alias to a host principal , you do not need to enable canonicalization. Prerequisites The Identity Management (IdM) client is installed. You are logged in to the IdM client as the root user. The host name is unique in the network. Procedure Set the dns_canonicalize_hostname parameter in the [libdefaults] section in the /etc/krb5.conf file to true : 9.3. Options for using host names with DNS host name canonicalization enabled If you set dns_canonicalize_hostname = true in the /etc/krb5.conf file as explained in Enabling canonicalization of host names in service principals on clients , you have the following options when you use a host name in a service principal: In Identity Management (IdM) environments, you can use the full host name in a service principal, such as host/[email protected] . In environments without IdM where the RHEL host is a member of an Active Directory (AD) domain, no further considerations are required, because AD domain controllers (DC) automatically create service principals for NetBIOS names of the machines enrolled into AD. | [
"kinit admin",
"ipa host-add-principal demo.example.com --principal= demo",
"[libdefaults] dns_canonicalize_hostname = true"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/working_with_dns_in_identity_management/using-canonicalized-dns-host-names-in-idm_working-with-dns-in-identity-management |
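A quick way to confirm the alias added in Section 9.1 is to inspect the host entry and request a ticket for the short principal. This is a hedged sketch: the host and realm names extend the chapter's demo example, and an existing Kerberos ticket (for example from kinit admin) is assumed.

```
# Show the host entry, including its principal aliases.
ipa host-show demo.example.com --all | grep -i principal

# Request a service ticket using the short-name alias to confirm it resolves.
kvno host/[email protected]
```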
Chapter 2. General Updates | Chapter 2. General Updates Cross channel package dependency improvements The yum utility has been enhanced to prompt the end user to search disabled package repositories on the system when a package dependency error occurs. This change will allow users to quickly resolve dependency errors by first checking all known channels for the missing package dependency. To enable this functionality, execute yum update yum subscription-manager prior to upgrading your machine to Red Hat Enterprise Linux 6.8. See the System and Subscription Management chapter for further details on the implementation of this feature. (BZ#1197245) Packages moved to the Optional Channel The following packages have been moved to the Optional channel: gnome-devel-docs libstdc++-docs xorg-x11-docs Note that if any of these packages have previously been installed, using the yum update command for updating these packages can lead to problems causing the update to fail. Enable the Optional channel before updating the mentioned installed packages or uninstall them before updating your system. For detailed instructions on how to subscribe your system to the Optional channel, see the relevant Knowledgebase articles on Red Hat Customer Portal: https://access.redhat.com/solutions/392003 for Red Hat Subscription Management or https://access.redhat.com/solutions/70019 if your system is registered with RHN Classic. (BZ#1300789) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.8_release_notes/new_features_general_updates |
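The following sketch shows the commands implied by this chapter. The repository ID and RHN Classic channel label are the usual ones for 64-bit Red Hat Enterprise Linux 6 servers and may differ for your variant and architecture.

```
# Update the package managers first so the cross-channel dependency prompts work.
yum update yum subscription-manager

# Enable the Optional repository when registered with Red Hat Subscription Management.
subscription-manager repos --enable=rhel-6-server-optional-rpms

# Or, for systems registered with RHN Classic (channel label is illustrative):
rhn-channel --add --channel=rhel-x86_64-server-optional-6
```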
Chapter 74. Coverage reports for test scenarios | Chapter 74. Coverage reports for test scenarios The test scenario designer provides a clear and coherent way of displaying the test coverage statistics using the Coverage Report tab on the right side of the test scenario designer. You can also download the coverage report to view and analyze the test coverage statistics. Downloaded test scenario coverage report supports the .CSV file format. For more information about the RFC specification for the Comma-Separated Values (CSV) format, see Common Format and MIME Type for Comma-Separated Values (CSV) Files . You can view the coverage report for rule-based and DMN-based test scenarios. 74.1. Generating coverage reports for rule-based test scenarios In rule-based test scenarios, the Coverage Report tab contains the detailed information about the following: Number of available rules Number of fired rules Percentage of fired rules Percentage of executed rules represented as a pie chart Number of times each rule has executed The rules that are executed for each defined test scenario Follow the procedure to generate a coverage report for rule-based test scenarios: Prerequisites The rule-based test scenario template are created for the selected test scenario. For more information about creating rule-based test scenarios, see Section 65.1, "Creating a test scenario template for rule-based test scenarios" . The individual test scenarios are defined. For more information about defining a test scenario, see Chapter 67, Defining a test scenario . Note To generate the coverage report for rule-based test scenario, you must create at least one rule. Procedure Open the rule-based test scenarios in the test scenario designer. Run the defined test scenarios. Click Coverage Report on the right of the test scenario designer to display the test coverage statistics. Optional: To download the test scenario coverage report, Click Download report . 74.2. Generating coverage reports for DMN-based test scenarios In DMN-based test scenarios, the Coverage Report tab contains the detailed information about the following: Number of available decisions Number of executed decisions Percentage of executed decisions Percentage of executed decisions represented as a pie chart Number of times each decision has executed Decisions that are executed for each defined test scenario Follow the procedure to generate a coverage report for DMN-based test scenarios: Prerequisites The DMN-based test scenario template is created for the selected test scenario. For more information about creating DMN-based test scenarios, see Section 66.1, "Creating a test scenario template for DMN-based test scenarios" . The individual test scenarios are defined. For more information about defining a test scenario, see Chapter 67, Defining a test scenario . Procedure Open the DMN-based test scenarios in the test scenario designer. Run the defined test scenarios. Click Coverage Report on the right of the test scenario designer to display the test coverage statistics. Optional: To download the test scenario coverage report, Click Download report . | null | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_decision_services_in_red_hat_decision_manager/test-scenarios-coverage-report-con_test-scenarios |
Chapter 13. Authentication and Interoperability | Chapter 13. Authentication and Interoperability Manual Backup and Restore Functionality This update introduces the ipa-backup and ipa-restore commands to Identity Management (IdM), which allow users to manually back up their IdM data and restore them in case of a hardware failure. For further information, see the ipa-backup (1) and ipa-restore (1) manual pages or the documentation in the Linux Domain Identity, Authentication, and Policy Guide . Support for Migration from WinSync to Trust This update implements the new ID Views mechanism of user configuration. It enables the migration of Identity Management users from a WinSync synchronization-based architecture used by Active Directory to an infrastructure based on Cross-Realm Trusts. For the details of ID Views and the migration procedure, see the documentation in the Windows Integration Guide . One-Time Password Authentication One of the best ways to increase authentication security is to require two factor authentication (2FA). A very popular option is to use one-time passwords (OTP). This technique began in the proprietary space, but over time some open standards emerged (HOTP: RFC 4226, TOTP: RFC 6238). Identity Management in Red Hat Enterprise Linux 7.1 contains the first implementation of the standard OTP mechanism. For further details, see the documentation in the System-Level Authentication Guide . SSSD Integration for the Common Internet File System A plug-in interface provided by SSSD has been added to configure the way in which the cifs-utils utility conducts the ID-mapping process. As a result, an SSSD client can now access a CIFS share with the same functionality as a client running the Winbind service. For further information, see the documentation in the Windows Integration Guide . Certificate Authority Management Tool The ipa-cacert-manage renew command has been added to the Identity management (IdM) client, which makes it possible to renew the IdM Certification Authority (CA) file. This enables users to smoothly install and set up IdM using a certificate signed by an external CA. For details on this feature, see the ipa-cacert-manage (1) manual page. Increased Access Control Granularity It is now possible to regulate read permissions of specific sections in the Identity Management (IdM) server UI. This allows IdM server administrators to limit the accessibility of privileged content only to chosen users. In addition, authenticated users of the IdM server no longer have read permissions to all of its contents by default. These changes improve the overall security of the IdM server data. Limited Domain Access for Unprivileged Users The domains= option has been added to the pam_sss module, which overrides the domains= option in the /etc/sssd/sssd.conf file. In addition, this update adds the pam_trusted_users option, which allows the user to add a list of numerical UIDs or user names that are trusted by the SSSD daemon, and the pam_public_domains option and a list of domains accessible even for untrusted users. The mentioned additions allow the configuration of systems, where regular users are allowed to access the specified applications, but do not have login rights on the system itself. For additional information on this feature, see the documentation in the Linux Domain Identity, Authentication, and Policy Guide . Automatic data provider configuration The ipa-client-install command now by default configures SSSD as the data provider for the sudo service. 
This behavior can be disabled by using the --no-sudo option. In addition, the --nisdomain option has been added to specify the NIS domain name for the Identity Management client installation, and the --no_nisdomain option has been added to avoid setting the NIS domain name. If neither of these options are used, the IPA domain is used instead. Use of AD and LDAP sudo Providers The AD provider is a back end used to connect to an Active Directory server. In Red Hat Enterprise Linux 7.1, using the AD sudo provider together with the LDAP provider is supported as a Technology Preview. To enable the AD sudo provider, add the sudo_provider=ad setting in the domain section of the sssd.conf file. 32-bit Version of krb5-server and krb5-server-ldap Deprecated The 32-bit version of Kerberos 5 Server is no longer distributed, and the following packages are deprecated since Red Hat Enterprise Linux 7.1: krb5-server.i686 , krb5-server.s390 , krb5-server.ppc , krb5-server-ldap.i686 , krb5-server-ldap.s390 , and krb5-server-ldap.ppc . There is no need to distribute the 32-bit version of krb5-server on Red Hat Enterprise Linux 7, which is supported only on the following architectures: AMD64 and Intel 64 systems ( x86_64 ), 64-bit IBM Power Systems servers ( ppc64 ), and IBM System z ( s390x ). SSSD Leverages GPO Policies to Define HBAC SSSD is now able to use GPO objects stored on an AD server for access control. This enhancement mimics the functionality of Windows clients, allowing to use a single set of access control rules to handle both Windows and Unix machines. In effect, Windows administrators can now use GPOs to control access to Linux clients. Apache Modules for IPA A set of Apache modules has been added to Red Hat Enterprise Linux 7.1 as a Technology Preview. The Apache modules can be used by external applications to achieve tighter interaction with Identity Management beyond simple authentication. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.1_release_notes/chap-Red_Hat_Enterprise_Linux-7.1_Release_Notes-Authentication_and_Interoperability |
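As a brief, hedged example of the manual backup and restore functionality introduced above: ipa-backup writes its output under /var/lib/ipa/backup/, and ipa-restore takes the name of such a backup. The timestamped directory name below is illustrative; use the one reported by ipa-backup, and expect ipa-restore to prompt for the Directory Manager password.

```
# Create a full backup of the IdM server data (run on an IdM server as root).
ipa-backup

# Restore from a previously created backup (directory name is illustrative).
ipa-restore /var/lib/ipa/backup/ipa-full-2015-03-01-12-00-00
```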
Chapter 115. KafkaUserTlsExternalClientAuthentication schema reference | Chapter 115. KafkaUserTlsExternalClientAuthentication schema reference Used in: KafkaUserSpec The type property is a discriminator that distinguishes use of the KafkaUserTlsExternalClientAuthentication type from KafkaUserTlsClientAuthentication and KafkaUserScramSha512ClientAuthentication . It must have the value tls-external for the type KafkaUserTlsExternalClientAuthentication . Property Property type Description type string Must be tls-external . | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-KafkaUserTlsExternalClientAuthentication-reference |
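A minimal sketch of a KafkaUser that uses this authentication type follows. The API version, user name, and the my-cluster label are assumptions; match them to the Kafka cluster managed by your User Operator.

```
# Create a KafkaUser whose client certificates are issued outside the User Operator.
oc apply -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-external-tls-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls-external
EOF
```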
3.9. The SET Statement | 3.9. The SET Statement Execution properties are set on the connection using the SET statement. The SET statement is not yet a language feature of JBoss Data Virtualization and is handled only in the JDBC client. SET Syntax: SET [PAYLOAD] (parameter|SESSION AUTHORIZATION) value SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL (READ UNCOMMITTED|READ COMMITTED|REPEATABLE READ|SERIALIZABLE) Syntax Rules: The parameter must be an identifier. If quoted, it can contain spaces and other special characters, but otherwise it can not. The value may be either a non-quoted identifier or a quoted string literal value. If payload is specified, for example, SET PAYLOAD x y , then a session scoped payload properties object will have the corresponding name value pair set. The payload object is not fully session scoped. It will be removed from the session when the XAConnection handle is closed/returned to the pool (assumes the use of TeiidDataSource). The session scoped payload is superseded by usage of the TeiidStatement.setPayload . Using SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL is equivalent to calling Connection.setTransactionIsolation with the corresponding level. The SET statement is most commonly used to control planning and execution. SET SHOWPLAN (ON|DEBUG|OFF) SET NOEXEC (ON|OFF) The following is an example of how to use the SET statement to enable a debug plan: The SET statement may also be used to control authorization. A SET SESSION AUTHORIZATION statement will perform a reauthentication (see Section 2.6, "Reauthentication" ) given the credentials currently set on the connection. The connection credentials may be changed by issuing a SET PASSWORD statement. | [
"Statement s = connection.createStatement(); s.execute(\"SET SHOWPLAN DEBUG\"); Statement s1 = connection.createStatement(); ResultSet rs = s1.executeQuery(\"select col from table\"); ResultSet planRs = s1.executeQuery(\"SHOW PLAN\"); planRs.next(); String debugLog = planRs.getString(\"DEBUG_LOG\"); Query Plan without executing the query s.execute(\"SET NOEXEC ON\"); s.execute(\"SET SHOWPLAN DEBUG\"); e.execute(\"SET NOEXEC OFF\");",
"Statement s = connection.createStatement(); s.execute(\"SET PASSWORD 'someval'\"); s.execute(\"SET SESSION AUTHORIZATION 'newuser'\");"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_1_client_development/the_set_statement1 |
Chapter 2. Getting Started: Overview | Chapter 2. Getting Started: Overview This chapter provides a summary procedure for setting up a basic Red Hat High Availability cluster consisting of two nodes running Red Hat Enterprise Linux release 6. This procedure uses the luci user interface to create the cluster. While this procedure creates a basic cluster, it does not yield a complete supported cluster configuration. Further details on planning and deploying a cluster are provided in the remainder of this document. 2.1. Installation and System Setup Before creating a Red Hat High Availability cluster, perform the following setup and installation steps. Ensure that your Red Hat account includes the following support entitlements: RHEL: Server Red Hat Applications: High Availability Red Hat Applications: Resilient Storage, if using the Clustered Logical Volume Manager (CLVM) and GFS2 file systems. Register the cluster systems for software updates, using either Red Hat Subscription Manager (RHSM) or RHN Classic. On each node in the cluster, configure the iptables firewall. The iptables firewall can be disabled, or it can be configured to allow cluster traffic to pass through. To disable the iptables system firewall, execute the following commands. For information on configuring the iptables firewall to allow cluster traffic to pass through, see Section 3.3, "Enabling IP Ports" . On each node in the cluster, configure SELinux. SELinux is supported on Red Hat Enterprise Linux 6 cluster nodes in Enforcing or Permissive mode with a targeted policy, or it can be disabled. To check the current SELinux state, run the getenforce command: For information on enabling and disabling SELinux, see the Security-Enhanced Linux user guide. Install the cluster packages and package groups. On each node in the cluster, install the High Availability and Resilient Storage package groups. On the node that will be hosting the web management interface, install the luci package. | [
"service iptables stop chkconfig iptables off",
"getenforce Permissive",
"yum groupinstall 'High Availability' 'Resilient Storage'",
"yum install luci"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/ch-startup-ca |
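Once the packages are installed, the usual next step is to start the ricci agent on every cluster node and luci on the management node. This is a hedged sketch of those commands; the luci port shown is the typical default.

```
# On every cluster node: set a password for the ricci agent and start it.
passwd ricci
service ricci start
chkconfig ricci on

# On the node hosting the web management interface: start luci and note the
# URL it prints (typically https://<hostname>:8084).
service luci start
chkconfig luci on
```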
Chapter 3. Configuring Fencing with Conga | Chapter 3. Configuring Fencing with Conga This chapter describes how to configure fencing in Red Hat High Availability Add-On using Conga . Note Conga is a graphical user interface that you can use to administer the Red Hat High Availability Add-On. Note, however, that in order to use this interface effectively you need to have a good and clear understanding of the underlying concepts. Learning about cluster configuration by exploring the available features in the user interface is not recommended, as it may result in a system that is not robust enough to keep all services running when components fail. Section 3.2, "Configuring Fence Devices" 3.1. Configuring Fence Daemon Properties Clicking on the Fence Daemon tab displays the Fence Daemon Properties page, which provides an interface for configuring Post Fail Delay and Post Join Delay . The values you configure for these parameters are general fencing properties for the cluster. To configure specific fence devices for the nodes of the cluster, use the Fence Devices menu item of the cluster display, as described in Section 3.2, "Configuring Fence Devices" . The Post Fail Delay parameter is the number of seconds the fence daemon ( fenced ) waits before fencing a node (a member of the fence domain) after the node has failed. The Post Fail Delay default value is 0 . Its value may be varied to suit cluster and network performance. The Post Join Delay parameter is the number of seconds the fence daemon ( fenced ) waits before fencing a node after the node joins the fence domain. The Post Join Delay default value is 6 . A typical setting for Post Join Delay is between 20 and 30 seconds, but can vary according to cluster and network performance. Enter the values required and click Apply for changes to take effect. Note For more information about Post Join Delay and Post Fail Delay , see the fenced (8) man page. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/fence_configuration_guide/ch-config-conga-CA |
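For reference, the same fence daemon properties can also be set from the command line with the ccs utility instead of Conga. This is a sketch only; the host name and delay values are illustrative, and ccs prompts for the ricci password on the target node.

```
# Set the fence daemon post-fail and post-join delays on node1.example.com.
ccs -h node1.example.com --setfencedaemon post_fail_delay=0 post_join_delay=25
```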
6.4. Changing the Default Mapping | 6.4. Changing the Default Mapping In Red Hat Enterprise Linux, Linux users are mapped to the SELinux __default__ login by default (which is in turn mapped to the SELinux unconfined_u user). If you would like new Linux users and Linux users not specifically mapped to an SELinux user to be confined by default, change the default mapping with the semanage login command. For example, enter the following command as root to change the default mapping from unconfined_u to user_u : Verify that the __default__ login is mapped to user_u : If a new Linux user is created and an SELinux user is not specified, or if an existing Linux user logs in and does not match a specific entry from the semanage login -l output, they are mapped to user_u , as per the __default__ login. To change back to the default behavior, enter the following command as root to map the __default__ login to the SELinux unconfined_u user: | [
"~]# semanage login -m -S targeted -s \"user_u\" -r s0 __default__",
"~]# semanage login -l Login Name SELinux User MLS/MCS Range Service __default__ user_u s0-s0:c0.c1023 * root unconfined_u s0-s0:c0.c1023 * system_u system_u s0-s0:c0.c1023 *",
"~]# semanage login -m -S targeted -s \"unconfined_u\" -r s0-s0:c0.c1023 __default__"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/sect-security-enhanced_linux-confining_users-changing_the_default_mapping |
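To confirm that the new default mapping takes effect, you can create a test Linux user with no explicit SELinux mapping and check the context it receives at login. This is only an illustrative check with a hypothetical user name, and the expected context assumes the targeted policy:

useradd testuser
passwd testuser
# log in as testuser (for example, over ssh) and run:
id -Z
# expected output while __default__ is mapped to user_u:
#   user_u:user_r:user_t:s0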
15.5. Adding a Remote Connection | 15.5. Adding a Remote Connection This procedure covers how to set up a connection to a remote system using virt-manager . To create a new connection, open the File menu and select the Add Connection... menu item. The Add Connection wizard appears. Select the hypervisor. For Red Hat Enterprise Linux 6 systems, select QEMU/KVM . Select Local for the local system or one of the remote connection options and click Connect . This example uses Remote tunnel over SSH , which works on default installations. For more information on configuring remote connections, refer to Chapter 5, Remote Management of Guests . Figure 15.8. Add Connection Enter the root password for the selected host when prompted. A remote host is now connected and appears in the main virt-manager window. Figure 15.9. Remote host in the main virt-manager window | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-virtualization-managing_guests_with_the_virtual_machine_manager_virt_manager-the_open_connection_window
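To verify the same remote connection from the command line, virsh accepts an equivalent URI for the SSH tunnel; the host name below is a placeholder for your remote system:

virsh -c qemu+ssh://root@remote.example.com/system list --all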
Chapter 4. Enabling Windows container workloads | Chapter 4. Enabling Windows container workloads Before adding Windows workloads to your cluster, you must install the Windows Machine Config Operator (WMCO), which is available in the OpenShift Container Platform OperatorHub. The WMCO orchestrates the process of deploying and managing Windows workloads on a cluster. Note Dual NIC is not supported on WMCO-managed Windows instances. Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). You have installed your cluster using installer-provisioned infrastructure. Clusters installed with user-provisioned infrastructure are not supported for Windows container workloads. You have configured hybrid networking with OVN-Kubernetes for your cluster. This must be completed during the installation of your cluster. For more information, see Configuring hybrid networking . You are running an OpenShift Container Platform cluster version 4.6.8 or later. Note The WMCO is not supported in clusters that use a cluster-wide proxy because the WMCO is not able to route traffic through the proxy connection for the workloads. Additional resources For the comprehensive prerequisites for the Windows Machine Config Operator, see Understanding Windows container workloads . 4.1. Installing the Windows Machine Config Operator You can install the Windows Machine Config Operator using either the web console or OpenShift CLI ( oc ). 4.1.1. Installing the Windows Machine Config Operator using the web console You can use the OpenShift Container Platform web console to install the Windows Machine Config Operator (WMCO). Note Dual NIC is not supported on WMCO-managed Windows instances. Procedure From the Administrator perspective in the OpenShift Container Platform web console, navigate to the Operators OperatorHub page. Use the Filter by keyword box to search for Windows Machine Config Operator in the catalog. Click the Windows Machine Config Operator tile. Review the information about the Operator and click Install . On the Install Operator page: Select the stable channel as the Update Channel . The stable channel enables the latest stable release of the WMCO to be installed. The Installation Mode is preconfigured because the WMCO must be available in a single namespace only. Choose the Installed Namespace for the WMCO. The default Operator recommended namespace is openshift-windows-machine-config-operator . Click the Enable Operator recommended cluster monitoring on the Namespace checkbox to enable cluster monitoring for the WMCO. Select an Approval Strategy . The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. The Manual strategy requires a user with appropriate credentials to approve the Operator update. Click Install . The WMCO is now listed on the Installed Operators page. Note The WMCO is installed automatically into the namespace you defined, like openshift-windows-machine-config-operator . Verify that the Status shows Succeeded to confirm successful installation of the WMCO. 4.1.2. Installing the Windows Machine Config Operator using the CLI You can use the OpenShift CLI ( oc ) to install the Windows Machine Config Operator (WMCO). Note Dual NIC is not supported on WMCO-managed Windows instances. Procedure Create a namespace for the WMCO. Create a Namespace object YAML file for the WMCO. 
For example, wmco-namespace.yaml : apiVersion: v1 kind: Namespace metadata: name: openshift-windows-machine-config-operator 1 labels: openshift.io/cluster-monitoring: "true" 2 1 It is recommended to deploy the WMCO in the openshift-windows-machine-config-operator namespace. 2 This label is required for enabling cluster monitoring for the WMCO. Create the namespace: $ oc create -f <file-name>.yaml For example: $ oc create -f wmco-namespace.yaml Create the Operator group for the WMCO. Create an OperatorGroup object YAML file. For example, wmco-og.yaml : apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: windows-machine-config-operator namespace: openshift-windows-machine-config-operator spec: targetNamespaces: - openshift-windows-machine-config-operator Create the Operator group: $ oc create -f <file-name>.yaml For example: $ oc create -f wmco-og.yaml Subscribe the namespace to the WMCO. Create a Subscription object YAML file. For example, wmco-sub.yaml : apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: windows-machine-config-operator namespace: openshift-windows-machine-config-operator spec: channel: "stable" 1 installPlanApproval: "Automatic" 2 name: "windows-machine-config-operator" source: "redhat-operators" 3 sourceNamespace: "openshift-marketplace" 4 1 Specify stable as the channel. 2 Set an approval strategy. You can set Automatic or Manual . 3 Specify the redhat-operators catalog source, which contains the windows-machine-config-operator package manifests. If your OpenShift Container Platform is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object you created when you configured the Operator Lifecycle Manager (OLM). 4 Namespace of the catalog source. Use openshift-marketplace for the default OperatorHub catalog sources. Create the subscription: $ oc create -f <file-name>.yaml For example: $ oc create -f wmco-sub.yaml The WMCO is now installed in the openshift-windows-machine-config-operator namespace. Verify the WMCO installation: $ oc get csv -n openshift-windows-machine-config-operator Example output NAME DISPLAY VERSION REPLACES PHASE windows-machine-config-operator.2.0.0 Windows Machine Config Operator 2.0.0 Succeeded 4.2. Configuring a secret for the Windows Machine Config Operator To run the Windows Machine Config Operator (WMCO), you must create a secret in the WMCO namespace containing a private key. This is required to allow the WMCO to communicate with the Windows virtual machine (VM). Prerequisites You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM). You created a PEM-encoded file containing an RSA key. Procedure Define the secret required to access the Windows VMs: $ oc create secret generic cloud-private-key --from-file=private-key.pem=${HOME}/.ssh/<key> \ -n openshift-windows-machine-config-operator 1 1 You must create the private key in the WMCO namespace, like openshift-windows-machine-config-operator . It is recommended to use a different private key than the one used when installing the cluster. 4.3. Additional resources Generating an SSH private key and adding it to the agent Adding Operators to a cluster . | [
"apiVersion: v1 kind: Namespace metadata: name: openshift-windows-machine-config-operator 1 labels: openshift.io/cluster-monitoring: \"true\" 2",
"oc create -f <file-name>.yaml",
"oc create -f wmco-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: windows-machine-config-operator namespace: openshift-windows-machine-config-operator spec: targetNamespaces: - openshift-windows-machine-config-operator",
"oc create -f <file-name>.yaml",
"oc create -f wmco-og.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: windows-machine-config-operator namespace: openshift-windows-machine-config-operator spec: channel: \"stable\" 1 installPlanApproval: \"Automatic\" 2 name: \"windows-machine-config-operator\" source: \"redhat-operators\" 3 sourceNamespace: \"openshift-marketplace\" 4",
"oc create -f <file-name>.yaml",
"oc create -f wmco-sub.yaml",
"oc get csv -n openshift-windows-machine-config-operator",
"NAME DISPLAY VERSION REPLACES PHASE windows-machine-config-operator.2.0.0 Windows Machine Config Operator 2.0.0 Succeeded",
"oc create secret generic cloud-private-key --from-file=private-key.pem=USD{HOME}/.ssh/<key> -n openshift-windows-machine-config-operator 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/windows_container_support_for_openshift/enabling-windows-container-workloads |
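The secret described in section 4.2 expects a PEM-encoded RSA private key. If you do not already have one, a key pair can be generated as in the following sketch; the winc-key file name is only an example, and it replaces <key> in the oc create secret command:

ssh-keygen -t rsa -b 4096 -m PEM -N '' -f ${HOME}/.ssh/winc-key
oc create secret generic cloud-private-key --from-file=private-key.pem=${HOME}/.ssh/winc-key -n openshift-windows-machine-config-operator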
5.5. Load Balancing Policy: Power_Saving | 5.5. Load Balancing Policy: Power_Saving Figure 5.2. Power Saving Scheduling Policy A power saving load balancing policy selects the host for a new virtual machine according to lowest CPU or highest available memory. The maximum CPU load and minimum available memory that are allowed for hosts in a cluster for a set amount of time are defined by the power saving scheduling policy's parameters. Beyond these limits, the environment's performance will degrade. The power saving parameters also define the minimum CPU load and maximum available memory allowed for hosts in a cluster for a set amount of time before the continued operation of a host is considered an inefficient use of electricity. If a host has reached the maximum CPU load or minimum available memory and stays there for more than the set time, the virtual machines on that host are migrated one by one to the host that has the lowest CPU or highest available memory, depending on which parameter is being utilized. Host resources are checked once per minute, and one virtual machine is migrated at a time until the host CPU load is below the defined limit or the host available memory is above the defined limit. If the host's CPU load falls below the defined minimum level or the host's available memory rises above the defined maximum level, the virtual machines on that host are migrated to other hosts in the cluster as long as the other hosts in the cluster remain below maximum CPU load and above minimum available memory. When an under-utilized host is cleared of its remaining virtual machines, the Manager automatically powers down the host machine and restarts it when load balancing requires it or there are not enough free hosts in the cluster. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/technical_reference/load_balancing_policy_power_saving
Chapter 4. Configuring the Block Storage service (cinder) | Chapter 4. Configuring the Block Storage service (cinder) The Block Storage service (cinder) provides access to remote block storage devices through volumes to provide persistent storage. The Block Storage service has three mandatory services: api , scheduler , and volume ; and one optional service, backup . All Block Storage services use the cinder section of the OpenStackControlPlane custom resource (CR) for their configuration: Global configuration options are applied directly under the cinder and template sections. Service specific configuration options appear under their associated sections. The following example demonstrates all of the sections where Block Storage service configuration is applied and what type of configuration is applied in each section: 4.1. Terminology The following terms are important to understanding the Block Storage service (cinder): Storage back end: A physical storage system where volume data is stored. Cinder driver: The part of the Block Storage service that enables communication with the storage back end. It is configured with the volume_driver and backup_driver options. Cinder back end: A logical representation of the grouping of a cinder driver with its configuration. This grouping is used to manage and address the volumes present in a specific storage back end. The name of this logical construct is configured with the volume_backend_name option. Storage pool: A logical grouping of volumes in a given storage back end. Cinder pool: A representation in the Block Storage service of a storage pool. Volume host: The way the Block Storage service addresses volumes. There are two different representations, short ( <hostname>@<backend-name> ) and full ( <hostname>@<backend-name>#<pool-name> ). Quota: Limits defined per project to constrain the use of Block Storage specific resources. 4.2. Block Storage service (cinder) enhancements in Red Hat OpenStack Services on OpenShift (RHOSO) The following functionality enhancements have been integrated into the Block Storage service: Ease of deployment for multiple volume back ends. Back end deployment does not affect running volume back ends. Back end addition and removal does not affect running back ends. Back end configuration changes do not affect other running back ends. Each back end can use its own vendor-specific container image. It is no longer necessary to build a custom image that holds dependencies from two drivers. Pacemaker has been replaced by Red Hat OpenShift Container Platform (RHOCP) functionality. Improved methods for troubleshooting the service code. 4.3. Configuring transport protocols Deployments use different transport protocols to connect to volumes. The Block Storage service (cinder) supports the following transport protocols: iSCSI Fibre Channel (FC) NVMe over TCP (NVMe-TCP) NFS Red Hat Ceph Storage RBD Control plane services that use volumes, such as the Block Storage service (cinder) volume and backup services, may require the support of the Red Hat OpenShift Container Platform (RHOCP) cluster to use iscsid and multipathd modules, depending on the storage array in use. These modules must be available on all nodes where these volume-dependent services execute. To use these transport protocols, a MachineConfig CR is created to define where these modules execute. For more information on a MachineConfig , see Understanding the Machine Config operator .
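As a quick way to see whether a given worker node is already running these modules, you can check the services from a debug pod; the node name in this sketch is a placeholder:

oc debug node/worker-0 -- chroot /host systemctl is-active iscsid multipathd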
Important Using a MachineConfig to change the configuration of a node causes the node to reboot. Consult with your RHOCP administrator before applying a MachineConfig to ensure the integrity of RHOCP workloads. The procedures in this section are meant as a guide to the general configuration of these protocols. Storage back end vendors will supply configuration information on how to connect to their specific solution. In addition to protocol specific configuration, configure multipathing regardless of the transport protocol used. After you have completed the transport protocol configuration, see Configuring multipathing for the procedure. Note These services are automatically started on EDPM nodes. 4.3.1. Configuring the iSCSI protocol Connecting to iSCSI volumes from the RHOCP nodes requires the iSCSI initiator service. There must be a single instance of the iscsid service module for the normal RHOCP usage, OpenShift CSI plugins usage, and the RHOSO services. Apply a MachineConfig to the applicable nodes to configure nodes to use the iSCSI protocol. Note If the iscsid service module is already running, this procedure is not required. Procedure Create a MachineConfig CR to configure the nodes for the iscsid module. The following example starts the iscsid service with a default configuration in all RHOCP worker nodes: Save the file. Apply the MachineConfig CR file. Replace <machine_config_file> with the name of your MachineConfig CR file. 4.3.2. Configuring the Fibre Channel protocol There is no additional node configuration required to use the Fibre Channel protocol to connect to volumes. It is mandatory though that all nodes using Fibre Channel have a Host Bus Adapter (HBA) card. Unless all worker nodes in your RHOCP deployment have an HBA card, you must use a nodeSelector in your control plane configuration to select which nodes are used for volume and backup services, as well as the Image service instances that use the Block Storage service for their storage back end. 4.3.3. Configuring the NVMe over TCP (NVMe-TCP) protocol Connecting to NVMe-TCP volumes from the RHOCP nodes requires the nvme kernel modules. Procedure Create a MachineConfig CR to configure the nodes for the nvme kernel modules. The following example starts the nvme kernel modules with a default configuration in all RHOCP worker nodes: Save the file. Apply the MachineConfig CR file. Replace <machine_config_file> with the name of your MachineConfig CR file. After the nodes have rebooted, verify that the nvme-fabrics module is loaded and supports ANA on a host: Note Even though ANA does not use the Linux Multipathing Device Mapper, multipathd must be running for the Compute nodes to be able to use multipathing when connecting volumes to instances. 4.3.4. Configuring multipathing Configuring multipathing on RHOCP nodes requires a MachineConfig CR that creates a multipath.conf file on a node and starts the service. Note The example provided in this procedure creates only a minimal multipath.conf file. Production deployments may require hardware vendor specific additions as appropriate to your environment. Consult with the appropriate systems administrators for any values required for your deployment. Procedure Create a MachineConfig CR to configure multipathing on the nodes. The following example creates a multipath.conf file and starts the multipathd module on all RHOCP nodes: Note The following would be the contents of the multipath.conf created by this example: Save the file. Apply the MachineConfig CR file.
Replace <machine_config_file> with the name of your MachineConfig CR file. Note In RHOSO deployments, the use_multipath_for_image_xfer configuration option is enabled by default. 4.4. Configuring initial defaults The Block Storage service (cinder) has a set of initial defaults that should be configured when the service is first enabled. They must be defined in the main customServiceConfig section. Once deployed, these initial defaults are modified using the openstack client. Procedure Open your OpenStackControlPlane CR file, openstack_control_plane.yaml . Edit the CR file and add the Block Storage service global configuration. The following example demonstrates a Block Storage service initial configuration: For a complete list of all initial default parameters, see Initial default parameters . Update the control plane: Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status: The OpenStackControlPlane resources are created when the status is "Setup complete". Tip Append the -w option to the end of the get command to track deployment progress. 4.4.1. Initial default parameters These initial default parameters should be configured when the service is first enabled. Parameter Description default_volume_type Provides the default volume type for all users. A volume type with any non-default name is not created automatically. The default value is __DEFAULT__ . no_snapshot_gb_quota Determines if the size of snapshots counts against the gigabyte quota in addition to the size of volumes. The default is false , which means that the size of the snapshots is included in the gigabyte quota. per_volume_size_limit Provides the maximum size of each volume in gigabytes. The default is -1 (unlimited). quota_volumes Provides the number of volumes allowed for each project. The default value is 10 . quota_snapshots Provides the number of snapshots allowed for each project. The default value is 10 . quota_groups Provides the number of volume groups allowed for each project, which includes the consistency groups. The default value is 10 . quota_gigabytes Provides the total amount of storage for each project, in gigabytes, allowed for volumes, and depending upon the configuration of the no_snapshot_gb_quota initial parameter this might also include the size of the snapshots. The default value is 1000 GB, and by default the size of the snapshots also counts against this limit. quota_backups Provides the number of backups allowed for each project. The default value is 10 . quota_backup_gigabytes Provides the total amount of storage for each project, in gigabytes, allowed for backups. The default is 1000 . 4.5. Configuring the API service The Block Storage service (cinder) provides an API interface for all external interaction with the service for both users and other RHOSO services. Procedure Open your OpenStackControlPlane CR file, openstack_control_plane.yaml . Edit the CR file and add the configuration for the internal Red Hat OpenShift Container Platform (RHOCP) load balancer. The following example demonstrates a load balancer configuration: Edit the CR file and add the configuration for the number of API service replicas. Run the cinderAPI service in an Active-Active configuration with three replicas. The following example demonstrates configuring the cinderAPI service to use three replicas: Edit the CR file and configure cinderAPI options. These options are configured in the customServiceConfig section under the cinderAPI section.
The following example demonstrates configuring cinderAPI service options and enabling debugging on all services: For a listing of commonly used cinderAPI service option parameters, see API service option parameters . Save the file. Update the control plane: Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status: The OpenStackControlPlane resources are created when the status is "Setup complete". Tip Append the -w option to the end of the get command to track deployment progress. 4.5.1. API service option parameters API service option parameters are provided for the configuration of the cinderAPI portions of the Block Storage service. Parameter Description api_rate_limit Provides a value to determine if the API rate limit is enabled. The default is false . debug Provides a value to determine whether the logging level is set to DEBUG instead of the default of INFO . The default is false . The logging level can be dynamically set without restarting. osapi_max_limit Provides a value for the maximum number of items a collection resource returns in a single response. The default is 1000 . osapi_volume_workers Provides a value for the number of workers assigned to the API component. The default is the number of CPUs available. 4.6. Configuring the scheduler service The Block Storage service (cinder) has a scheduler service ( cinderScheduler ) that is responsible for making decisions such as selecting which back end receives new volumes, whether there is enough free space to perform an operation or not, and deciding where an existing volume should be moved for some specific operations. Use only a single instance of cinderScheduler for scheduling consistency and ease of troubleshooting. While cinderScheduler can be run with multiple instances, the service default replicas: 1 is the best practice. Procedure Open your OpenStackControlPlane CR file, openstack_control_plane.yaml . Edit the CR file and add the configuration for the service down detection timeouts. The following example demonstrates this configuration: 1 The number of seconds between Block Storage service components reporting an operational state in the form of a heartbeat through the database. The default is 10 . 2 The maximum number of seconds since the last heartbeat from the component for it to be considered non-operational. The default is 60 . Note Configure these values at the cinder level of the CR instead of the cinderScheduler so that these values are applied to all components consistently. Edit the CR file and add the configuration for the statistics reporting interval. The following example demonstrates configuring these values at the cinder level to apply them globally to all services: 1 The number of seconds between requests from the volume service for usage statistics from the back end. The default is 60 . 2 The number of seconds between requests from the volume service for usage statistics from the backup service. The default is 60 . The following example demonstrates configuring these values at the cinderVolume and cinderBackup level to customize settings at the service level. 1 The number of seconds between requests from the volume service for usage statistics from the back end. The default is 60 . 2 The number of seconds between requests from the volume service for usage statistics from the backup service. The default is 60 . Note The generation of usage statistics can be resource intensive for some back ends. Setting these values too low can affect back end performance.
You may need to tune the configuration of these settings to better suit individual back ends. Perform any additional configuration necessary to customize the cinderScheduler service. For more configuration options for the customization of the cinderScheduler service, see Scheduler service parameters . Save the file. Update the control plane: Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status: The OpenStackControlPlane resources are created when the status is "Setup complete". Tip Append the -w option to the end of the get command to track deployment progress. 4.6.1. Scheduler service parameters Scheduler service parameters are provided for the configuration of the cinderScheduler portions of the Block Storage service. Parameter Description debug Provides a setting for the logging level. When this parameter is true the logging level is set to DEBUG instead of INFO . The default is false . scheduler_max_attempts Provides a setting for the maximum number of attempts to schedule a volume. The default is 3 . scheduler_default_filters Provides a setting for filter class names to use for filtering hosts when not specified in the request. This is a comma separated list. The default is AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter . scheduler_default_weighers Provides a setting for weigher class names to use for weighing hosts. This is a comma separated list. The default is CapacityWeigher . scheduler_weight_handler Provides a setting for a handler to use for selecting the host or pool after weighing. The value cinder.scheduler.weights.OrderedHostWeightHandler selects the first host from the list of hosts that passed filtering, and the value cinder.scheduler.weights.stochastic.stochasticHostWeightHandler gives every pool a chance to be chosen where the probability is proportional to each pool weight. The default is cinder.scheduler.weights.OrderedHostWeightHandler . The following is an explanation of the filter class names from the parameter table: AvailabilityZoneFilter Filters out all back ends that do not meet the availability zone requirements of the requested volume. CapacityFilter Selects only back ends with enough space to accommodate the volume. CapabilitiesFilter Selects only back ends that can support any specified settings in the volume. InstanceLocality Configures clusters to use volumes local to the same node. 4.7. Configuring the volume service The Block Storage service (cinder) has a volume service ( cinderVolumes section) that is responsible for managing operations related to volumes, snapshots, and groups. These operations include creating, deleting, and cloning volumes and making snapshots. This service requires access to the storage back end ( storage ) and storage management ( storageMgmt ) networks in the networkAttachments of the OpenStackControlPlane CR. Some operations, such as creating an empty volume or a snapshot, do not require any data movement between the volume service and the storage back end. Other operations, such as migrating data from one storage back end to another, require the data to pass through the volume service, so they do require access. Volume service configuration is performed in the cinderVolumes section with parameters set in the customServiceConfig , customServiceConfigSecrets , networkAttachments , replicas , and the nodeSelector sections. The volume service cannot have multiple replicas.
Procedure Open your OpenStackControlPlane CR file, openstack_control_plane.yaml . Edit the CR file and add the configuration for your back end. The following example demonstrates the service configuration for a Red Hat Ceph Storage back end: 1 The configuration area for the individual back end. Each unique back end requires an individual configuration area. No back end is deployed by default. The Block Storage service volume service will not run unless at least one back end is configured during deployment. For more information about configuring back ends, see Block Storage service (cinder) back ends and Multiple Block Storage service (cinder) back ends . 2 The configuration area for the back end network connections. 3 The name assigned to this back end. 4 The driver used to connect to this back end. For a list of commonly used volume service parameters, see Volume service parameters . Save the file. Update the control plane: Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status: The OpenStackControlPlane resources are created when the status is "Setup complete". Tip Append the -w option to the end of the get command to track deployment progress. 4.7.1. Volume service parameters Volume service parameters are provided for the configuration of the cinderVolumes portions of the Block Storage service. Parameter Description backend_availability_zone Provides a setting for the availability zone of the back end. This is set in the [DEFAULT] section. The default value is storage_availability_zone . volume_backend_name Provides a setting for the back end name for a given driver implementation. There is no default value. volume_driver Provides a setting for the driver to use for volume creation. It is provided in the form of a Python namespace for the specific class. There is no default value. enabled_backends Provides a setting for a list of back end names to use. These back end names should be backed by a unique [CONFIG] group with its options. This is a comma-separated list of values. The default value is the name of the section with a volume_backend_name option. image_conversion_dir Provides a setting for a directory used for temporary storage during image conversion. The default value is /var/lib/cinder/conversion . backend_stats_polling_interval Provides a setting for the number of seconds between the volume service requests for usage statistics from the storage back end. The default is 60 . 4.7.2. Block Storage service (cinder) back ends Each Block Storage service back end should have an individual configuration section in the cinderVolumes section. This ensures each back end runs in a dedicated pod. This approach has the following benefits: Increased isolation. Adding and removing back ends is fast and does not affect other running back ends. Configuration changes do not affect other running back ends. Automatically spreads the Volume pods into different nodes. Each Block Storage service back end uses a storage transport protocol to access data in the volumes. Each storage transport protocol has individual requirements as described in Configuring transport protocols . Storage protocol information is also provided in individual vendor installation guides. Note Configure each back end with an independent pod. In director-based releases of RHOSP, all back ends run in a single cinder-volume container. This is no longer the best practice. No back end is deployed by default.
The Block Storage service volume service will not run unless at least one back end is configured during deployment. All storage vendors provide an installation guide with best practices, deployment configuration, and configuration options for vendor drivers. These installation guides provide the specific configuration information required to properly configure the volume service for deployment. Installation guides are available in the Red Hat Ecosystem Catalog . For more information on integrating and certifying vendor drivers, see Integrating partner content . For information on Red Hat Ceph Storage back end configuration, see Integrating Red Hat Ceph Storage and Deploying a Hyperconverged Infrastructure environment . For information on configuring a generic (non-vendor specific) NFS back end, see Configuring a generic NFS back end . Note Use a certified storage back end and driver. If you use NFS storage that comes from the generic NFS back end, its capabilities are limited compared to a certified storage back end and driver. 4.7.3. Multiple Block Storage service (cinder) back ends Multiple Block Storage service back ends are deployed by adding multiple, independent entries in the cinderVolumes configuration section. Each back end runs in an independent pod. The following configuration example, deploys two independent back ends; one for iSCSI and another for NFS: 4.8. Configuring back end availability zones Configure back end availability zones (AZs) for Volume service back ends and the Backup service to group cloud infrastructure services for users. AZs are mapped to failure domains and Compute resources for high availability, fault tolerance, and resource scheduling. For example, you could create an AZ of Compute nodes with specific hardware that users can select when they create an instance that requires that hardware. Note Post-deployment, AZs are created using the RESKEY:availability_zones volume type extra specification. Users can create a volume directly in an AZ as long as the volume type does not restrict the AZ. Procedure Open your OpenStackControlPlane CR file, openstack_control_plane.yaml . Edit the CR file and add the AZ configuration. The following example demonstrates an AZ configuration: 1 The availability zone associated with the back end. Save the file. Update the control plane: Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status: The OpenStackControlPlane resources are created when the status is "Setup complete". Tip Append the -w option to the end of the get command to track deployment progress. 4.9. Configuring a generic NFS back end The Block Storage service (cinder) can be configured with a generic NFS back end to provide an alternative storage solution for volumes and backups. The Block Storage service supports a generic NFS solution with the following caveats: Use a certified storage back end and driver. If you use NFS storage that comes from the generic NFS back end, its capabilities are limited compared to a certified storage back end and driver. For example, the generic NFS back end does not support features such as volume encryption and volume multi-attach. For information about supported drivers, see the Red Hat Ecosystem Catalog . For Block Storage (cinder) and Compute (nova) services, you must use NFS version 4.0 or later. RHOSO does not support earlier versions of NFS. RHOSO does not support the NetApp NAS secure feature. It interferes with normal volume operations. 
This feature must be disabled in the customServiceConfig in the specific back-end configuration with the following parameters: Do not configure the nfs_mount_options option. The default value is the best NFS options for RHOSO environments. If you experience issues when you configure multiple services to share the same NFS server, contact Red Hat Support. Procedure Create a Secret CR to store the volume connection information. The following is an example of a Secret CR: 1 The name used when including it in the cinderVolumes back end configuration. Save the file. Update the control plane: Replace <secret_file_name> with the name of the file that contains your Secret CR. Open your OpenStackControlPlane CR file, openstack_control_plane.yaml . Edit the CR file and add the configuration for the generic NFS back end. The following example demonstrates this configuration: 1 The storageMgmt network is not listed because generic NFS does not have a management interface. 2 The name from the Secret CR. Note If you are configuring multiple generic NFS back ends, ensure each is in an individual configuration section so that one pod is devoted to each back end. Save the file. Update the control plane: Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status: The OpenStackControlPlane resources are created when the status is "Setup complete". Tip Append the -w option to the end of the get command to track deployment progress. 4.10. Configuring an NFS conversion directory When the Block Storage service (cinder) performs image format conversion, and the space is limited, conversion of large Image service (glance) images can cause the node root disk space to be completely used. You can use an external NFS share for the conversion to prevent the space on the node from being completely filled. Procedure Open your OpenStackControlPlane CR file, openstack_control_plane.yaml . Edit the CR file and add the configuration for the conversion directory. The following example demonstrates a conversion directory configuration: 1 The path to the directory to use for conversion. 2 The IP address of the server providing the conversion directory. Note The example provided demonstrates how to create a common conversion directory used by all volume service pods. It is also possible to define a conversion directory for each volume service pod. To do this, define each conversion directory using extraMounts as demonstrated above but in the cinder section of the OpenStackControlPlane CR file. You would then set the propagation value to the name of the specific Volume section instead of CinderVolume . Save the file. Update the control plane: Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status: The OpenStackControlPlane resources are created when the status is "Setup complete". Tip Append the -w option to the end of the get command to track deployment progress. 4.11. Configuring automatic database cleanup The Block Storage service (cinder) performs a soft-deletion of database entries. This means that database entries are marked for deletion but are not actually deleted from the database. This allows for the auditing of deleted resources. These database rows marked for deletion will grow endlessly and consume resources if not purged. RHOSO automatically purges database entries marked for deletion after a set number of days. By default, records marked for deletion after 30 days are purged. 
You can configure a different record age and schedule for purge jobs. Procedure Open your openstack_control_plane.yaml file to edit the OpenStackControlPlane CR. Add the dbPurge parameter to the cinder template to configure database cleanup depending on the service you want to configure. The following is an example of using the dbPurge parameter to configure the Block Storage service: 1 The number of days a record has been marked for deletion before it is purged. The default value is 30 . The minimum value is 1 . 2 The schedule of when to run the job in a crontab format. The default value is 1 0 * * * . This default value is equivalent to 00:01 daily. Update the control plane: 4.12. Preserving jobs The Block Storage service (cinder) requires maintenance operations that are run automatically. Some operations are one-off and some are periodic. These operations are run using OpenShift Jobs. If jobs and their pods are automatically removed on completion, you cannot check the logs of these operations. However, you can use the preserveJob field in your OpenStackControlPlane CR to stop the automatic removal of jobs and preserve them. Example: 4.13. Resolving hostname conflicts Most storage back ends in the Block Storage service (cinder) require the hosts that connect to them to have unique hostnames. These hostnames are used to identify permissions and addresses, such as iSCSI initiator name, HBA WWN and WWPN. Because you deploy in OpenShift, the hostnames that the Block Storage service volumes and backups report are not the OpenShift hostnames but the pod names instead. These pod names are formed using a predetermined template: * For volumes: cinder-volume-<backend_key>-0 * For backups: cinder-backup-<replica-number> If you use the same storage back end in multiple deployments, the unique hostname requirement may not be honored, resulting in operational problems. To address this issue, you can request the installer to have unique pod names, and hence unique hostnames, by using the uniquePodNames field. When you set the uniquePodNames field to true , a short hash is added to the pod names, which addresses hostname conflicts. Example: 4.14. Using other container images Red Hat OpenStack Services on OpenShift (RHOSO) services are deployed using a container image for a specific release and version. There are times when a deployment requires a container image other than the one produced for that release and version. The most common reasons for this are: Deploying a hotfix. Using a certified, vendor-provided container image. The container images used by the installer are controlled through the OpenStackVersion CR. An OpenStackVersion CR is automatically created by the openstack operator during the deployment of services. Alternatively, it can be created manually before the application of the OpenStackControlPlane CR but after the openstack operator is installed. This allows for the container image for any service and component to be individually designated. The granularity of this designation depends on the service. For example, in the Block Storage service (cinder) all the cinderAPI , cinderScheduler , and cinderBackup pods must have the same image. However, for the Volume service, the container image is defined for each of the cinderVolumes . The following example demonstrates a OpenStackControlPlane configuration with two back ends; one called ceph and one called custom-fc . The custom-fc backend requires a certified, vendor-provided container image. 
Additionally, we must configure the other service images to use a non-standard image from a hotfix. The following example demonstrates what our OpenStackVersion CR might look like in order to set up the container images properly. Replace <custom-api-image> with the name of the API service image to use. Replace <custom-backup-image> with the name of the Backup service image to use. Replace <custom-scheduler-image> with the name of the Scheduler service image to use. Replace <vendor-volume-volume-image> with the name of the certified, vendor-provided image to use. Note The name attribute in your OpenStackVersion CR must match the same attribute in your OpenStackControlPlane CR. | [
"apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder:",
"apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: <global-options> template: <global-options> cinderAPI: <cinder-api-options> cinderScheduler: <cinder-scheduler-options> cinderVolumes: <name1>: <cinder-volume-options> <name2>: <cinder-volume-options> cinderBackup: <cinder-backup-options>",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker service: cinder name: 99-worker-cinder-enable-iscsid spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: iscsid.service",
"oc apply -f <machine_config_file> -n openstack",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker service: cinder name: 99-worker-cinder-load-nvme-fabrics spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/modules-load.d/nvme_fabrics.conf overwrite: false mode: 420 user: name: root group: name: root contents: source: data:,nvme-fabrics%0Anvme-tcp",
"oc apply -f <machine_config_file> -n openstack",
"cat /sys/module/nvme_core/parameters/multipath",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker service: cinder name: 99-worker-cinder-enable-multipathd spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/multipath.conf overwrite: false mode: 384 user: name: root group: name: root contents: source: data:,defaults%20%7B%0A%20%20user_friendly_names%20no%0A%20%20recheck_wwid%20yes%0A%20%20skip_kpartx%20yes%0A%20%20find_multipaths%20yes%0A%7D%0A%0Ablacklist%20%7B%0A%7D systemd: units: - enabled: true name: multipathd.service",
"defaults { user_friendly_names no recheck_wwid yes skip_kpartx yes find_multipaths yes } blacklist { }",
"oc apply -f <machine_config_file> -n openstack",
"apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: enabled: true template: customServiceConfig: | [DEFAULT] quota_volumes = 20 quota_snapshots = 15",
"oc apply -f openstack_control_plane.yaml -n openstack",
"oc get openstackcontrolplane -n openstack",
"apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: template: cinderAPI: override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer",
"apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: template: cinderAPI: replicas: 3",
"apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: template: customServiceConfig: | [DEFAULT] debug = true cinderAPI: customServiceConfig: | [DEFAULT] osapi_volume_workers = 3",
"oc apply -f openstack_control_plane.yaml -n openstack",
"oc get openstackcontrolplane -n openstack",
"apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: template: customServiceConfig: | [DEFAULT] report_interval = 20 1 service_down_time = 120 2",
"apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: template: customServiceConfig: | [DEFAULT] backend_stats_polling_interval = 120 1 backup_driver_stats_polling_interval = 120 2",
"apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: template: cinderBackup: customServiceConfig: | [DEFAULT] backup_driver_stats_polling_interval = 120 1 < rest of the config > cinderVolumes: nfs: customServiceConfig: | [DEFAULT] backend_stats_polling_interval = 120 2",
"oc apply -f openstack_control_plane.yaml -n openstack",
"oc get openstackcontrolplane -n openstack",
"apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: template: customServiceConfig: | [DEFAULT] debug = true cinderVolumes: ceph: 1 networkAttachments: 2 - storage customServiceConfig: | [ceph] volume_backend_name = ceph 3 volume_driver = cinder.volume.drivers.rbd.RBDDriver 4",
"oc apply -f openstack_control_plane.yaml -n openstack",
"oc get openstackcontrolplane -n openstack",
"apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: template: cinderVolumes: nfs: networkAttachments: - storage customServiceConfigSecrets: - cinder-volume-nfs-secrets customServiceConfig: | [nfs] volume_backend_name=nfs iSCSI: networkAttachments: - storage - storageMgmt customServiceConfig: | [iscsi] volume_backend_name=iscsi",
"apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: template: cinderVolumes: nfs: networkAttachments: - storage - storageMgmt customServiceConfigSecrets: - cinder-volume-nfs-secrets customServiceConfig: | [nfs] volume_backend_name=nfs backend_availability_zone=zone1 1 iSCSI: networkAttachments: - storage - storageMgmt customServiceConfig: | [iscsi] volume_backend_name=iscsi backend_availability_zone=zone2",
"oc apply -f openstack_control_plane.yaml -n openstack",
"oc get openstackcontrolplane -n openstack",
"nas_secure_file_operation=false nas_secure_file_permissions=false",
"apiVersion: v1 kind: Secret metadata: name: cinder-volume-nfs-secrets 1 type: Opaque stringData: cinder-volume-nfs-secrets: | [nfs] nas_host=192.168.130.1 nas_share_path=/var/nfs/cinder",
"oc apply -f <secret_file_name> -n openstack",
"apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: template: cinderVolumes: nfs: networkAttachments: 1 - storage customServiceConfig: | [nfs] volume_backend_name=nfs volume_driver=cinder.volume.drivers.nfs.NfsDriver nfs_snapshot_support=true nas_secure_file_operations=false nas_secure_file_permissions=false customServiceConfigSecrets: - cinder-volume-nfs-secrets 2",
"oc apply -f openstack_control_plane.yaml -n openstack",
"oc get openstackcontrolplane -n openstack",
"apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane spec: extraMounts: extraVol: - propagation: - CinderVolume volumes: - name: cinder-conversion nfs: path: <nfs_share_path> 1 server: <nfs_server> 2 mounts: - name: cinder-conversion mountPath: /var/lib/cinder/conversion readOnly: true",
"oc apply -f openstack_control_plane.yaml -n openstack",
"oc get openstackcontrolplane -n openstack",
"apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: template: dbPurge: age: 20 1 schedule: 1 0 * * 0 2",
"oc apply -f openstack_control_plane.yaml",
"apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: template: preserveJobs: true",
"apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: uniquePodNames: true",
"apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: template: cinderVolumes: ceph: networkAttachments: - storage < . . . > custom-fc: networkAttachments: - storage",
"apiVersion: core.openstack.org/v1beta1 kind: OpenStackVersion metadata: name: openstack spec: customContainerImages: cinderAPIImages: <custom-api-image> cinderBackupImages: <custom-backup-image> cinderSchedulerImages: <custom-scheduler-image> cinderVolumeImages: custom-fc: <vendor-volume-volume-image>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/configuring_persistent_storage/assembly_cinder-configuring-the-block-storage-service_block-storage |
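As noted in section 4.4, the initial defaults only seed the deployment; per-project quotas are adjusted afterwards with the openstack client. The following is a minimal sketch with a hypothetical project name and example values:

openstack quota show myproject
openstack quota set --volumes 20 --snapshots 15 --gigabytes 2000 --backups 10 myproject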
Chapter 21. Resolving Common Issues | Chapter 21. Resolving Common Issues This chapter provides some of the Red Hat Gluster Storage troubleshooting methods. Section 6.3.2.3, "Troubleshooting Gluster NFS (Deprecated)" Section 6.3.3.9, "Troubleshooting NFS Ganesha" Section 8.14, "Troubleshooting Snapshots" Section 10.12, "Troubleshooting Geo-replication" Section 17.9, "Troubleshooting issues in the Red Hat Gluster Storage Trusted Storage Pool" 21.1. Identifying locked file and clear locks You can use the statedump command to list the locks held on files. The statedump output also provides information on each lock with its range, basename, and PID of the application holding the lock, and so on. You can analyze the output to find the locks whose owner/application is no longer running or interested in that lock. After ensuring that no application is using the file, you can clear the lock using the following clear-locks command: # gluster volume clear-locks VOLNAME path kind {blocked | granted | all}{inode range | entry basename | posix range } For more information on performing statedump , see Section 17.7, "Viewing complete volume state with statedump" To identify locked file and clear locks Perform statedump on the volume to view the files that are locked using the following command: # gluster volume statedump VOLNAME For example, to display statedump of test-volume: The statedump files are created on the brick servers in the /tmp directory or in the directory set using the server.statedump-path volume option. The naming convention of the dump file is brick-path . brick-pid .dump . Clear the entry lock using the following command: # gluster volume clear-locks VOLNAME path kind granted entry basename The following are the sample contents of the statedump file indicating entry lock (entrylk). Ensure that those are stale locks and no resources own them. For example, to clear the entry lock on file1 of test-volume: Clear the inode lock using the following command: # gluster volume clear-locks VOLNAME path kind granted inode range The following are the sample contents of the statedump file indicating there is an inode lock (inodelk). Ensure that those are stale locks and no resources own them. For example, to clear the inode lock on file1 of test-volume: Clear the granted POSIX lock using the following command: # gluster volume clear-locks VOLNAME path kind granted posix range The following are the sample contents of the statedump file indicating there is a granted POSIX lock. Ensure that those are stale locks and no resources own them. For example, to clear the granted POSIX lock on file1 of test-volume: Clear the blocked POSIX lock using the following command: # gluster volume clear-locks VOLNAME path kind blocked posix range The following are the sample contents of the statedump file indicating there is a blocked POSIX lock. Ensure that those are stale locks and no resources own them. For example, to clear the blocked POSIX lock on file1 of test-volume: Clear all POSIX locks using the following command: # gluster volume clear-locks VOLNAME path kind all posix range The following are the sample contents of the statedump file indicating that there are POSIX locks. Ensure that those are stale locks and no resources own them. For example, to clear all POSIX locks on file1 of test-volume: You can perform statedump on test-volume again to verify that all the above locks are cleared. | [
"gluster volume statedump test-volume Volume statedump successful",
"[xlator.features.locks.vol-locks.inode] path=/ mandatory=0 entrylk-count=1 lock-dump.domain.domain=vol-replicate-0 xlator.feature.locks.lock-dump.domain.entrylk.entrylk[0](ACTIVE)=type=ENTRYLK_WRLCK on basename=file1, pid = 714782904, owner=ffffff2a3c7f0000, transport=0x20e0670, , granted at Mon Feb 27 16:01:01 2012 conn.2.bound_xl./rhgs/brick1.hashsize=14057 conn.2.bound_xl./rhgs/brick1.name=/gfs/brick1/inode conn.2.bound_xl./rhgs/brick1.lru_limit=16384 conn.2.bound_xl./rhgs/brick1.active_size=2 conn.2.bound_xl./rhgs/brick1.lru_size=0 conn.2.bound_xl./rhgs/brick1.purge_size=0",
"gluster volume clear-locks test-volume / kind granted entry file1 Volume clear-locks successful test-volume-locks: entry blocked locks=0 granted locks=1",
"[conn.2.bound_xl./rhgs/brick1.active.1] gfid=538a3d4a-01b0-4d03-9dc9-843cd8704d07 nlookup=1 ref=2 ia_type=1 [xlator.features.locks.vol-locks.inode] path=/file1 mandatory=0 inodelk-count=1 lock-dump.domain.domain=vol-replicate-0 inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, start=0, len=0, pid = 714787072, owner=00ffff2a3c7f0000, transport=0x20e0670, , granted at Mon Feb 27 16:01:01 2012",
"gluster volume clear-locks test-volume /file1 kind granted inode 0,0-0 Volume clear-locks successful test-volume-locks: inode blocked locks=0 granted locks=1",
"xlator.features.locks.vol1-locks.inode] path=/file1 mandatory=0 posixlk-count=15 posixlk.posixlk[0](ACTIVE)=type=WRITE, whence=0, start=8, len=1, pid = 23848, owner=d824f04c60c3c73c, transport=0x120b370, , blocked at Mon Feb 27 16:01:01 2012 , granted at Mon Feb 27 16:01:01 2012 posixlk.posixlk[1](ACTIVE)=type=WRITE, whence=0, start=7, len=1, pid = 1, owner=30404152462d436c-69656e7431, transport=0x11eb4f0, , granted at Mon Feb 27 16:01:01 2012 posixlk.posixlk[2](BLOCKED)=type=WRITE, whence=0, start=8, len=1, pid = 1, owner=30404152462d436c-69656e7431, transport=0x11eb4f0, , blocked at Mon Feb 27 16:01:01 2012 posixlk.posixlk[3](ACTIVE)=type=WRITE, whence=0, start=6, len=1, pid = 12776, owner=a36bb0aea0258969, transport=0x120a4e0, , granted at Mon Feb 27 16:01:01 2012",
"gluster volume clear-locks test-volume /file1 kind granted posix 0,8-1 Volume clear-locks successful test-volume-locks: posix blocked locks=0 granted locks=1 test-volume-locks: posix blocked locks=0 granted locks=1 test-volume-locks: posix blocked locks=0 granted locks=1",
"[xlator.features.locks.vol1-locks.inode] path=/file1 mandatory=0 posixlk-count=30 posixlk.posixlk[0](ACTIVE)=type=WRITE, whence=0, start=0, len=1, pid = 23848, owner=d824f04c60c3c73c, transport=0x120b370, , blocked at Mon Feb 27 16:01:01 2012 , granted at Mon Feb 27 16:01:01 posixlk.posixlk[1](BLOCKED)=type=WRITE, whence=0, start=0, len=1, pid = 1, owner=30404146522d436c-69656e7432, transport=0x1206980, , blocked at Mon Feb 27 16:01:01 2012 posixlk.posixlk[2](BLOCKED)=type=WRITE, whence=0, start=0, len=1, pid = 1, owner=30404146522d436c-69656e7432, transport=0x1206980, , blocked at Mon Feb 27 16:01:01 2012 posixlk.posixlk[3](BLOCKED)=type=WRITE, whence=0, start=0, len=1, pid = 1, owner=30404146522d436c-69656e7432, transport=0x1206980, , blocked at Mon Feb 27 16:01:01 2012 posixlk.posixlk[4](BLOCKED)=type=WRITE, whence=0, start=0, len=1, pid = 1, owner=30404146522d436c-69656e7432, transport=0x1206980, , blocked at Mon Feb 27 16:01:01 2012",
"gluster volume clear-locks test-volume /file1 kind blocked posix 0,0-1 Volume clear-locks successful test-volume-locks: posix blocked locks=28 granted locks=0 test-volume-locks: posix blocked locks=1 granted locks=0 No locks cleared.",
"[xlator.features.locks.vol1-locks.inode] path=/file1 mandatory=0 posixlk-count=11 posixlk.posixlk[0](ACTIVE)=type=WRITE, whence=0, start=8, len=1, pid = 12776, owner=a36bb0aea0258969, transport=0x120a4e0, , blocked at Mon Feb 27 16:01:01 2012 , granted at Mon Feb 27 16:01:01 2012 posixlk.posixlk[1](ACTIVE)=type=WRITE, whence=0, start=0, len=1, pid = 12776, owner=a36bb0aea0258969, transport=0x120a4e0, , granted at Mon Feb 27 16:01:01 2012 posixlk.posixlk[2](ACTIVE)=type=WRITE, whence=0, start=7, len=1, pid = 23848, owner=d824f04c60c3c73c, transport=0x120b370, , granted at Mon Feb 27 16:01:01 2012 posixlk.posixlk[3](ACTIVE)=type=WRITE, whence=0, start=6, len=1, pid = 1, owner=30404152462d436c-69656e7431, transport=0x11eb4f0, , granted at Mon Feb 27 16:01:01 2012 posixlk.posixlk[4](BLOCKED)=type=WRITE, whence=0, start=8, len=1, pid = 23848, owner=d824f04c60c3c73c, transport=0x120b370, , blocked at Mon Feb 27 16:01:01 2012",
"gluster volume clear-locks test-volume /file1 kind all posix 0,0-1 Volume clear-locks successful test-volume-locks: posix blocked locks=1 granted locks=0 No locks cleared. test-volume-locks: posix blocked locks=4 granted locks=1"
] | https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/chap-troubleshooting |
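The statedump and clear-locks steps in this chapter can be combined into a short verification session. The following is a minimal sketch, not an exact transcript from the original guide: the volume name test-volume and the file path /file1 are taken from the examples above, while the dump file name and the lock-owner PID are placeholders that depend on your brick path, the server.statedump-path setting, and the actual statedump contents.

# gluster volume statedump test-volume
# ls /tmp/*.dump*                                  # dump files are named brick-path.brick-pid.dump
# grep -E 'entrylk|inodelk|posixlk' /tmp/<brick-path>.<brick-pid>.dump
# ps -p <lock-owner-pid>                           # confirm the owning application is no longer running
# gluster volume clear-locks test-volume /file1 kind granted entry file1

Re-running the statedump and the grep afterwards confirms that the stale locks have been cleared.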
Chapter 18. Re-enabling accounts that reached the inactivity limit | Chapter 18. Re-enabling accounts that reached the inactivity limit If Directory Server inactivated an account because it reached the inactivity limit, an administrator can re-enable the account. 18.1. Re-enabling accounts inactivated by the Account Policy plug-in You can re-enable accounts using the dsidm account unlock command or by manually updating the lastLoginTime attribute of the inactivated user. Prerequisites An inactivated user account. Procedure Reactivate the account using one of the following methods: Using the dsidm account unlock command: # dsidm -D "cn=Directory manager" ldap://server.example.com -b " dc=example,dc=com " account unlock " uid=example,ou=People,dc=example,dc=com " By setting the lastLoginTime attribute of the user to a recent time stamp: # ldapmodify -H ldap://server.example.com -x -D " cn=Directory Manager " -W dn: uid=example,ou=People,dc=example,dc=com changetype: modify replace: lastLoginTime lastLoginTime: 20210901000000Z Verification Authenticate as the user that you have reactivated. For example, perform a search: # ldapsearch -H ldap://server.example.com -x -D " uid=example,ou=People,dc=example,dc=com " -W -b " dc=example,dc=com " -s base If the user can successfully authenticate, the account was reactivated. | [
"dsidm -D \"cn=Directory manager\" ldap://server.example.com -b \" dc=example,dc=com \" account unlock \" uid=example,ou=People,dc=example,dc=com \"",
"ldapmodify -H ldap://server.example.com -x -D \" cn=Directory Manager \" -W dn: uid=example,ou=People,dc=example,dc=com changetype: modify replace: lastLoginTime lastLoginTime: 20210901000000Z",
"ldapsearch -H ldap://server.example.com -x -D \" uid=example,ou=People,dc=example,dc=com \" -W -b \" dc=example,dc=com -s base\""
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/securing_red_hat_directory_server/assembly_re-enabling-accounts-that-reached-the-inactivity-limit_securing-rhds |
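The lastLoginTime value in the procedure above is a GeneralizedTime timestamp ( YYYYMMDDHHMMSSZ ). If you prefer to set it to the current time rather than a hard-coded value, the following sketch generates the timestamp with date and feeds it to ldapmodify . The host name and DNs are reused from the examples in this chapter; adjust them for your environment.

# ts=$(date -u +"%Y%m%d%H%M%SZ")
# ldapmodify -H ldap://server.example.com -x -D "cn=Directory Manager" -W <<EOF
dn: uid=example,ou=People,dc=example,dc=com
changetype: modify
replace: lastLoginTime
lastLoginTime: $ts
EOF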
Chapter 5. Virtual builds with Red Hat Quay on OpenShift Container Platform | Chapter 5. Virtual builds with Red Hat Quay on OpenShift Container Platform Documentation for the builds feature has been moved to Builders and image automation . This chapter will be removed in a future version of Red Hat Quay. | null | https://docs.redhat.com/en/documentation/red_hat_quay/3/html/red_hat_quay_operator_features/red-hat-quay-builders-enhancement |
Appendix A. Revision History | Appendix A. Revision History Revision History Revision 6-1 Wed Aug 7 2019 Steven Levine Preparing document for 7.7 GA publication. Revision 5-2 Thu Oct 4 2018 Steven Levine Preparing document for 7.6 GA publication. Revision 4-2 Wed Mar 14 2018 Steven Levine Preparing document for 7.5 GA publication. Revision 4-1 Thu Dec 14 2017 Steven Levine Preparing document for 7.5 Beta publication. Revision 3-4 Wed Aug 16 2017 Steven Levine Updated version for 7.4. Revision 3-3 Wed Jul 19 2017 Steven Levine Document version for 7.4 GA publication. Revision 3-1 Wed May 10 2017 Steven Levine Preparing document for 7.4 Beta publication. Revision 2-6 Mon Apr 17 2017 Steven Levine Update for 7.3 Revision 2-4 Mon Oct 17 2016 Steven Levine Version for 7.3 GA publication. Revision 2-3 Fri Aug 12 2016 Steven Levine Preparing document for 7.3 Beta publication. Revision 1.2-3 Mon Nov 9 2015 Steven Levine Preparing document for 7.2 GA publication. Revision 1.2-2 Tue Aug 18 2015 Steven Levine Preparing document for 7.2 Beta publication. Revision 1.1-19 Mon Feb 16 2015 Steven Levine Version for 7.1 GA release Revision 1.1-10 Thu Dec 11 2014 Steven Levine Version for 7.1 Beta release Revision 0.1-33 Mon Jun 2 2014 Steven Levine Version for 7.0 GA release | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/appe-publican-revision_history |
14.15.2.2. Creating a snapshot for the current domain | 14.15.2.2. Creating a snapshot for the current domain The virsh snapshot-create-as domain command creates a snapshot for the domain with the properties specified in the domain XML file (such as the <name> and <description> elements). If these values are not included in the XML string, libvirt will choose a value. To create a snapshot, run: The remaining options are as follows: --print-xml outputs the appropriate XML for snapshot-create rather than actually creating a snapshot. The --diskspec option can be used to control how --disk-only and external checkpoints create external files. This option can occur multiple times, according to the number of <disk> elements in the domain XML. Each <diskspec> is in the form disk [,snapshot=type][,driver=type][,file=name] . To include a literal comma in disk or in file=name , escape it with a second comma. A literal --diskspec must precede each diskspec unless all three of <domain>, <name>, and <description> are also present. For example, a diskspec of vda,snapshot=external,file=/path/to,,new results in the following XML: --reuse-external creates an external snapshot reusing an existing file as the destination (meaning this file is overwritten). If this destination does not exist, the snapshot request will be refused to avoid losing the contents of the existing files. --no-metadata creates snapshot data but any metadata is immediately discarded (that is, libvirt does not treat the snapshot as current, and cannot revert to the snapshot unless snapshot-create is later used to teach libvirt about the metadata again). This option is incompatible with --print-xml . | [
"virsh snapshot-create-as domain {[--print-xml] | [--no-metadata] [--reuse-external]} [name] [description] [--diskspec] diskspec]",
"<disk name='vda' snapshot='external'> <source file='/path/to,new'/> </disk>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sub-sect-managing_snapshots-snapshot_create_as_domain |
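A concrete invocation can make the --diskspec and --print-xml options easier to follow. The sketch below is illustrative rather than taken from the original guide: the domain name guest1, the snapshot name snap1, and the image path are placeholders. The first command only prints the XML that would be used; the second creates a disk-only external snapshot with that diskspec.

# virsh snapshot-create-as guest1 snap1 "state before upgrade" --print-xml --diskspec vda,snapshot=external,file=/var/lib/libvirt/images/guest1-snap1.qcow2
# virsh snapshot-create-as guest1 snap1 "state before upgrade" --disk-only --diskspec vda,snapshot=external,file=/var/lib/libvirt/images/guest1-snap1.qcow2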
14.7. The (Transactional) CarMart Quickstart Using JBoss EAP | 14.7. The (Transactional) CarMart Quickstart Using JBoss EAP This CarMart Transactional quickstart requires JBoss Data Grid's Library mode with the JBoss Enterprise Application Platform container. All required libraries (jar files) are bundled with the application and deployed to the server. Caches are configured programmatically and run in the same JVM as the web application. All operations are transactional and are configured in the JBossASCacheContainerProvider / TomcatCacheContainerProvider implementation classes for the CacheContainerProvider interface. 14.7.1. Quickstart Prerequisites The prerequisites for this quickstart are as follows: Java 6.0 (Java SDK 1.6) or better JBoss Enterprise Application Platform 6.x or JBoss Enterprise Web Server 2.x Maven 3.0 or better Configure the Maven Repository. For details, see Chapter 3, Install and Use the Maven Repositories 14.7.2. Build and Deploy the Transactional CarMart Quickstart Prerequisites Ensure that the following prerequisites are met before building and deploying the CarMart quickstart. Configure Maven (See Section 14.7.1, "Quickstart Prerequisites" ) Start JBoss Enterprise Application Platform: In a command line terminal, navigate to the root of the JBoss EAP server directory. Use one of the following commands to start the server with a web profile: For Linux: For Windows: Procedure 14.13. Build and Deploy the Transactional Quickstart Open a command line and navigate to the root directory of this quickstart. Enter the following command to build and deploy the archive: The target/jboss-carmart-tx.war file is deployed to the running instance of the server. 14.7.3. View the Transactional CarMart Quickstart The following procedure outlines how to view the CarMart quickstart: Prerequisite The CarMart quickstart must be built and deployed to be viewed. Procedure 14.14. View the CarMart Quickstart To view the application, use your browser to navigate to the following link: 14.7.4. Undeploy The Transactional CarMart Quickstart Undeploy the transactional CarMart quickstart as follows: In a command line terminal, navigate to the root directory of the quickstart. Undeploy the archive as follows: 14.7.5. Test the Transactional CarMart Quickstart The JBoss Data Grid quickstarts include Arquillian Selenium tests. To run these tests: Stop JBoss EAP, if it is running. In a command line terminal, navigate to the root directory of the quickstart. Build the quickstarts as follows: Run the tests as follows: | [
"USDJBOSS_HOME/bin/standalone.sh",
"%JBOSS_HOME%\\bin\\standalone.bat",
"mvn clean package jboss-as:deploy",
"http://localhost:8080/jboss-carmart-tx",
"mvn jboss-as:undeploy",
"mvn clean package",
"mvn test -Puitests-jbossas -Das7home=/path/to/server"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/sect-The_Transactional_CarMart_Quickstart_Using_JBoss_EAP |
4.267. redhat-rpm-config | 4.267. redhat-rpm-config 4.267.1. RHBA-2011:1748 - redhat-rpm-config bug fix update An updated redhat-rpm-config package that fixes several bugs is now available for Red Hat Enterprise Linux 6. The redhat-rpm-config package is used during building of RPM packages to apply various default distribution options determined by Red Hat. It also provides a few Red Hat RPM macro customizations, such as those used during the building of Driver Update packages. Bug Fixes BZ# 642768 Previously, when building two RPM packages, where one depended on symbols in the other, the Driver Update Program (DUP) generated "Provides" and "Requires" symbols that did not match. This bug has been fixed, and these symbols are now generated correctly by DUP in the described scenario. BZ# 681884 If two kernel modules had a dependency, where one module referred to a function implemented by the other, the symbol reference was built incorrectly. As a consequence, the package that contained the module that depended on the other module, could not be installed. A patch has been provided to address this issue, and symbol references are now generated correctly in the described scenario. Enhancement BZ# 720866 Driver Update Disks now include additional dependency information to work with later releases of Red Hat Enterprise Linux 6, in which a small change to installer behavior will impact only newly-created Driver Update Disks. Disks made previously are not affected by this update. Users of redhat-rpm-config are advised to upgrade to this updated package, which fixes these bugs and adds this enhancement. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/redhat-rpm-config |
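To see the kind of dependency symbols the errata above refers to, you can inspect the Provides and Requires of a pair of Driver Update packages with rpm. The package names below are placeholders, and the exact symbol format (typically ksym(...) entries) depends on the kernel module packaging in use.

# rpm -q --provides kmod-example-driver | grep -i ksym
# rpm -q --requires kmod-example-consumer | grep -i ksym

If the value following a required ksym entry matches the corresponding provided entry, the two packages resolve each other's symbols correctly.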
Chapter 4. Managing secure access to Kafka | Chapter 4. Managing secure access to Kafka You can secure your Kafka cluster by managing the access each client has to the Kafka brokers. A secure connection between Kafka brokers and clients can encompass: Encryption for data exchange Authentication to prove identity Authorization to allow or decline actions executed by users This chapter explains how to set up secure connections between Kafka brokers and clients, with sections describing: Security options for Kafka clusters and clients How to secure Kafka brokers How to use an authorization server for OAuth 2.0 token-based authentication and authorization 4.1. Security options for Kafka Use the Kafka resource to configure the mechanisms used for Kafka authentication and authorization. 4.1.1. Listener authentication For clients inside the OpenShift cluster, you can create plain (without encryption) or tls internal listeners. For clients outside the OpenShift cluster, you create external listeners and specify a connection mechanism, which can be nodeport , loadbalancer , ingress or route (on OpenShift). For more information on the configuration options for connecting an external client, see Configuring external listeners . Supported authentication options: Mutual TLS authentication (only on the listeners with TLS enabled encryption) SCRAM-SHA-512 authentication OAuth 2.0 token based authentication The authentication option you choose depends on how you wish to authenticate client access to Kafka brokers. Figure 4.1. Kafka listener authentication options The listener authentication property is used to specify an authentication mechanism specific to that listener. If no authentication property is specified then the listener does not authenticate clients which connect through that listener. The listener will accept all connections without authentication. Authentication must be configured when using the User Operator to manage KafkaUsers . The following example shows: A plain listener configured for SCRAM-SHA-512 authentication A tls listener with mutual TLS authentication An external listener with mutual TLS authentication Each listener is configured with a unique name and port within a Kafka cluster. Note Listeners cannot be configured to use the ports reserved for inter-broker communication (9091 or 9090) and metrics (9404). An example showing listener authentication configuration # ... listeners: - name: plain port: 9092 type: internal tls: true authentication: type: scram-sha-512 - name: tls port: 9093 type: internal tls: true authentication: type: tls - name: external port: 9094 type: loadbalancer tls: true authentication: type: tls # ... 4.1.1.1. Mutual TLS authentication Mutual TLS authentication is always used for the communication between Kafka brokers and ZooKeeper pods. AMQ Streams can configure Kafka to use TLS (Transport Layer Security) to provide encrypted communication between Kafka brokers and clients either with or without mutual authentication. For mutual, or two-way, authentication, both the server and the client present certificates. When you configure mutual authentication, the broker authenticates the client (client authentication) and the client authenticates the broker (server authentication). Note TLS authentication is more commonly one-way, with one party authenticating the identity of another. For example, when HTTPS is used between a web browser and a web server, the browser obtains proof of the identity of the web server. 4.1.1.2. 
SCRAM-SHA-512 authentication SCRAM (Salted Challenge Response Authentication Mechanism) is an authentication protocol that can establish mutual authentication using passwords. AMQ Streams can configure Kafka to use SASL (Simple Authentication and Security Layer) SCRAM-SHA-512 to provide authentication on both unencrypted and encrypted client connections. When SCRAM-SHA-512 authentication is used with a TLS client connection, the TLS protocol provides the encryption, but is not used for authentication. The following properties of SCRAM make it safe to use SCRAM-SHA-512 even on unencrypted connections: The passwords are not sent in the clear over the communication channel. Instead the client and the server are each challenged by the other to offer proof that they know the password of the authenticating user. The server and client each generate a new challenge for each authentication exchange. This means that the exchange is resilient against replay attacks. When a KafkaUser.spec.authentication.type is configured with scram-sha-512 the User Operator will generate a random 12-character password consisting of upper and lowercase ASCII letters and numbers. 4.1.1.3. Network policies AMQ Streams automatically creates a NetworkPolicy resource for every listener that is enabled on a Kafka broker. By default, a NetworkPolicy grants access to a listener to all applications and namespaces. If you want to restrict access to a listener at the network level to only selected applications or namespaces, use the networkPolicyPeers property. Use network policies as part of the listener authentication configuration. Each listener can have a different networkPolicyPeers configuration. For more information, refer to the Listener network policies section and the NetworkPolicyPeer API reference . Note Your configuration of OpenShift must support ingress NetworkPolicies in order to use network policies in AMQ Streams. 4.1.1.4. Additional listener configuration options You can use the properties of the GenericKafkaListenerConfiguration schema to add further configuration to listeners. 4.1.2. Kafka authorization You can configure authorization for Kafka brokers using the authorization property in the Kafka.spec.kafka resource. If the authorization property is missing, no authorization is enabled and clients have no restrictions. When enabled, authorization is applied to all enabled listeners. The authorization method is defined in the type field. Supported authorization options: Simple authorization OAuth 2.0 authorization (if you are using OAuth 2.0 token based authentication) Open Policy Agent (OPA) authorization Custom authorization Figure 4.2. Kafka cluster authorization options 4.1.2.1. Super users Super users can access all resources in your Kafka cluster regardless of any access restrictions, and are supported by all authorization mechanisms. To designate super users for a Kafka cluster, add a list of user principals to the superUsers property. If a user uses TLS client authentication, their username is the common name from their certificate subject prefixed with CN= . An example configuration with super users apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # ... authorization: type: simple superUsers: - CN=client_1 - user_2 - CN=client_3 # ... 4.2. Security options for Kafka clients Use the KafkaUser resource to configure the authentication mechanism, authorization mechanism, and access rights for Kafka clients. 
In terms of configuring security, clients are represented as users. You can authenticate and authorize user access to Kafka brokers. Authentication permits access, and authorization constrains the access to permissible actions. You can also create super users that have unconstrained access to Kafka brokers. The authentication and authorization mechanisms must match the specification for the listener used to access the Kafka brokers . 4.2.1. Identifying a Kafka cluster for user handling A KafkaUser resource includes a label that defines the appropriate name of the Kafka cluster (derived from the name of the Kafka resource) to which it belongs. apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster The label is used by the User Operator to identify the KafkaUser resource and create a new user, and also in subsequent handling of the user. If the label does not match the Kafka cluster, the User Operator cannot identify the KafkaUser and the user is not created. If the status of the KafkaUser resource remains empty, check your label. 4.2.2. User authentication User authentication is configured using the authentication property in KafkaUser.spec . The authentication mechanism enabled for the user is specified using the type field. Supported authentication mechanisms: TLS client authentication SCRAM-SHA-512 authentication When no authentication mechanism is specified, the User Operator does not create the user or its credentials. Additional resources When to use mutual TLS authentication or SCRAM-SHA Authentication authentication for clients 4.2.2.1. TLS Client Authentication To use TLS client authentication, you set the type field to tls . An example KafkaUser with TLS client authentication enabled apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: type: tls # ... When the user is created by the User Operator, it creates a new Secret with the same name as the KafkaUser resource. The Secret contains a private and public key for TLS client authentication. The public key is contained in a user certificate, which is signed by the client Certificate Authority (CA). All keys are in X.509 format. Secrets provide private keys and certificates in PEM and PKCS #12 formats. For more information on securing Kafka communication with Secrets, see Chapter 11, Managing TLS certificates . An example Secret with user credentials apiVersion: v1 kind: Secret metadata: name: my-user labels: strimzi.io/kind: KafkaUser strimzi.io/cluster: my-cluster type: Opaque data: ca.crt: # Public key of the client CA user.crt: # User certificate that contains the public key of the user user.key: # Private key of the user user.p12: # PKCS #12 archive file for storing certificates and keys user.password: # Password for protecting the PKCS #12 archive file 4.2.2.2. SCRAM-SHA-512 Authentication To use the SCRAM-SHA-512 authentication mechanism, you set the type field to scram-sha-512 . An example KafkaUser with SCRAM-SHA-512 authentication enabled apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: type: scram-sha-512 # ... When the user is created by the User Operator, it creates a new secret with the same name as the KafkaUser resource. The secret contains the generated password in the password key, which is encoded with base64. In order to use the password, it must be decoded. 
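A quick way to retrieve these generated credentials is to read them back from the user Secret and decode them, as in the brief sketch here (the full Secret layout is shown in the example that follows). The Secret name my-user matches the examples in this section; run the commands in the namespace where the KafkaUser was created.

# oc get secret my-user -o jsonpath='{.data.password}' | base64 -d
# oc get secret my-user -o jsonpath='{.data.sasl\.jaas\.config}' | base64 -d

The first command prints the SCRAM-SHA-512 password, and the second prints the ready-made JAAS configuration string that a Java client can use as its sasl.jaas.config property.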
An example Secret with user credentials apiVersion: v1 kind: Secret metadata: name: my-user labels: strimzi.io/kind: KafkaUser strimzi.io/cluster: my-cluster type: Opaque data: password: Z2VuZXJhdGVkcGFzc3dvcmQ= 1 sasl.jaas.config: b3JnLmFwYWNoZS5rYWZrYS5jb21tb24uc2VjdXJpdHkuc2NyYW0uU2NyYW1Mb2dpbk1vZHVsZSByZXF1aXJlZCB1c2VybmFtZT0ibXktdXNlciIgcGFzc3dvcmQ9ImdlbmVyYXRlZHBhc3N3b3JkIjsK 2 1 The generated password, base64 encoded. 2 The JAAS configuration string for SASL SCRAM-SHA-512 authentication, base64 encoded. Decoding the generated password: 4.2.3. User authorization User authorization is configured using the authorization property in KafkaUser.spec . The authorization type enabled for a user is specified using the type field. To use simple authorization, you set the type property to simple in KafkaUser.spec.authorization . Simple authorization uses the default Kafka authorization plugin, AclAuthorizer . Alternatively, you can use OPA authorization , or if you are already using OAuth 2.0 token based authentication, you can also use OAuth 2.0 authorization . If no authorization is specified, the User Operator does not provision any access rights for the user. Whether such a KafkaUser can still access resources depends on the authorizer being used. For example, for the AclAuthorizer this is determined by its allow.everyone.if.no.acl.found configuration. 4.2.3.1. ACL rules AclAuthorizer uses ACL rules to manage access to Kafka brokers. ACL rules grant access rights to the user, which you specify in the acls property. For more information about the AclRule object, see the AclRule schema reference . 4.2.3.2. Super user access to Kafka brokers If a user is added to a list of super users in a Kafka broker configuration, the user is allowed unlimited access to the cluster regardless of any authorization constraints defined in ACLs in KafkaUser . For more information on configuring super user access to brokers, see Kafka authorization . 4.2.3.3. User quotas You can configure the spec for the KafkaUser resource to enforce quotas so that a user does not exceed a configured level of access to Kafka brokers. You can set size-based network usage and time-based CPU utilization thresholds. You can also add a partition mutation quota to control the rate at which requests to change partitions are accepted for user requests. An example KafkaUser with user quotas apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: # ... quotas: producerByteRate: 1048576 1 consumerByteRate: 2097152 2 requestPercentage: 55 3 controllerMutationRate: 10 4 1 Byte-per-second quota on the amount of data the user can push to a Kafka broker 2 Byte-per-second quota on the amount of data the user can fetch from a Kafka broker 3 CPU utilization limit as a percentage of time for a client group 4 Number of concurrent partition creation and deletion operations (mutations) allowed per second For more information on these properties, see the KafkaUserQuotas schema reference . 4.3. 
Securing access to Kafka brokers To establish secure access to Kafka brokers, you configure and apply: A Kafka resource to: Create listeners with a specified authentication type Configure authorization for the whole Kafka cluster A KafkaUser resource to access the Kafka brokers securely through the listeners Configure the Kafka resource to set up: Listener authentication Network policies that restrict access to Kafka listeners Kafka authorization Super users for unconstrained access to brokers Authentication is configured independently for each listener. Authorization is always configured for the whole Kafka cluster. The Cluster Operator creates the listeners and sets up the cluster and client certificate authority (CA) certificates to enable authentication within the Kafka cluster. You can replace the certificates generated by the Cluster Operator by installing your own certificates . You can also configure your listener to use a Kafka listener certificate managed by an external Certificate Authority . Certificates are available in PKCS #12 format (.p12) and PEM (.crt) formats. Use KafkaUser to enable the authentication and authorization mechanisms that a specific client uses to access Kafka. Configure the KafkaUser resource to set up: Authentication to match the enabled listener authentication Authorization to match the enabled Kafka authorization Quotas to control the use of resources by clients The User Operator creates the user representing the client and the security credentials used for client authentication, based on the chosen authentication type. Additional resources For more information about the schema for: Kafka , see the Kafka schema reference . KafkaUser , see the KafkaUser schema reference . 4.3.1. Securing Kafka brokers This procedure shows the steps involved in securing Kafka brokers when running AMQ Streams. The security implemented for Kafka brokers must be compatible with the security implemented for the clients requiring access. Kafka.spec.kafka.listeners[*].authentication matches KafkaUser.spec.authentication Kafka.spec.kafka.authorization matches KafkaUser.spec.authorization The steps show the configuration for simple authorization and a listener using TLS authentication. For more information on listener configuration, see GenericKafkaListener schema reference . Alternatively, you can use SCRAM-SHA or OAuth 2.0 for listener authentication , and OAuth 2.0 or OPA for Kafka authorization . Procedure Configure the Kafka resource. Configure the authorization property for authorization. Configure the listeners property to create a listener with authentication. For example: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # ... authorization: 1 type: simple superUsers: 2 - CN=client_1 - user_2 - CN=client_3 listeners: - name: tls port: 9093 type: internal tls: true authentication: type: tls 3 # ... zookeeper: # ... 1 Authorization enables simple authorization on the Kafka broker using the AclAuthorizer Kafka plugin . 2 List of user principals with unlimited access to Kafka. CN is the common name from the client certificate when TLS authentication is used. 3 Listener authentication mechanisms may be configured for each listener, and specified as mutual TLS, SCRAM-SHA-512 or token-based OAuth 2.0 . If you are configuring an external listener, the configuration is dependent on the chosen connection mechanism. Create or update the Kafka resource. oc apply -f KAFKA-CONFIG-FILE The Kafka cluster is configured with a Kafka broker listener using TLS authentication. 
A service is created for each Kafka broker pod. A service is created to serve as the bootstrap address for connection to the Kafka cluster. The cluster CA certificate to verify the identity of the kafka brokers is also created with the same name as the Kafka resource. 4.3.2. Securing user access to Kafka Use the properties of the KafkaUser resource to configure a Kafka user. You can use oc apply to create or modify users, and oc delete to delete existing users. For example: oc apply -f USER-CONFIG-FILE oc delete KafkaUser USER-NAME When you configure the KafkaUser authentication and authorization mechanisms, ensure they match the equivalent Kafka configuration: KafkaUser.spec.authentication matches Kafka.spec.kafka.listeners[*].authentication KafkaUser.spec.authorization matches Kafka.spec.kafka.authorization This procedure shows how a user is created with TLS authentication. You can also create a user with SCRAM-SHA authentication. The authentication required depends on the type of authentication configured for the Kafka broker listener . Note Authentication between Kafka users and Kafka brokers depends on the authentication settings for each. For example, it is not possible to authenticate a user with TLS if it is not also enabled in the Kafka configuration. Prerequisites A running Kafka cluster configured with a Kafka broker listener using TLS authentication and encryption . A running User Operator (typically deployed with the Entity Operator ). The authentication type in KafkaUser should match the authentication configured in Kafka brokers. Procedure Configure the KafkaUser resource. For example: apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: 1 type: tls authorization: type: simple 2 acls: - resource: type: topic name: my-topic patternType: literal operation: Read - resource: type: topic name: my-topic patternType: literal operation: Describe - resource: type: group name: my-group patternType: literal operation: Read 1 User authentication mechanism, defined as mutual tls or scram-sha-512 . 2 Simple authorization, which requires an accompanying list of ACL rules. Create or update the KafkaUser resource. oc apply -f USER-CONFIG-FILE The user is created, as well as a Secret with the same name as the KafkaUser resource. The Secret contains a private and public key for TLS client authentication. For information on configuring a Kafka client with properties for secure connection to Kafka brokers, see Setting up access for clients outside of OpenShift in the Deploying and Upgrading AMQ Streams on OpenShift guide. 4.3.3. Restricting access to Kafka listeners using network policies You can restrict access to a listener to only selected applications by using the networkPolicyPeers property. Prerequisites An OpenShift cluster with support for Ingress NetworkPolicies. The Cluster Operator is running. Procedure Open the Kafka resource. In the networkPolicyPeers property, define the application pods or namespaces that will be allowed to access the Kafka cluster. For example, to configure a tls listener to allow connections only from application pods with the label app set to kafka-client : apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # ... listeners: - name: tls port: 9093 type: internal tls: true authentication: type: tls networkPolicyPeers: - podSelector: matchLabels: app: kafka-client # ... zookeeper: # ... Create or update the resource. 
Use oc apply : oc apply -f your-file Additional resources For more information about the schema, see the NetworkPolicyPeer API reference and the GenericKafkaListener schema reference . 4.4. Using OAuth 2.0 token-based authentication AMQ Streams supports the use of OAuth 2.0 authentication using the SASL OAUTHBEARER mechanism. OAuth 2.0 enables standardized token-based authentication and authorization between applications, using a central authorization server to issue tokens that grant limited access to resources. You can configure OAuth 2.0 authentication, then OAuth 2.0 authorization . OAuth 2.0 authentication can also be used in conjunction with simple or OPA-based Kafka authorization . Using OAuth 2.0 token-based authentication, application clients can access resources on application servers (called resource servers ) without exposing account credentials. The application client passes an access token as a means of authenticating, which application servers can also use to determine the level of access to grant. The authorization server handles the granting of access and inquiries about access. In the context of AMQ Streams: Kafka brokers act as OAuth 2.0 resource servers Kafka clients act as OAuth 2.0 application clients Kafka clients authenticate to Kafka brokers. The brokers and clients communicate with the OAuth 2.0 authorization server, as necessary, to obtain or validate access tokens. For a deployment of AMQ Streams, OAuth 2.0 integration provides: Server-side OAuth 2.0 support for Kafka brokers Client-side OAuth 2.0 support for Kafka MirrorMaker, Kafka Connect and the Kafka Bridge Additional resources OAuth 2.0 site 4.4.1. OAuth 2.0 authentication mechanisms AMQ Streams supports the OAUTHBEARER and PLAIN mechanisms for OAuth 2.0 authentication. Both mechanisms allow Kafka clients to establish authenticated sessions with Kafka brokers. The authentication flow between clients, the authorization server, and Kafka brokers is different for each mechanism. We recommend that you configure clients to use OAUTHBEARER whenever possible. OAUTHBEARER provides a higher level of security than PLAIN because client credentials are never shared with Kafka brokers. Consider using PLAIN only with Kafka clients that do not support OAUTHBEARER. If necessary, OAUTHBEARER and PLAIN can be enabled together, on the same oauth listener. OAUTHBEARER overview Kafka supports the OAUTHBEARER authentication mechanism, however it must be explicitly configured. Also, many Kafka client tools use libraries that provide basic support for OAUTHBEARER at the protocol level. To ease application development, AMQ Streams provides an OAuth callback handler for the upstream Kafka Client Java libraries (but not for other libraries). Therefore, you do not need to write your own callback handlers for such clients. An application client can use the callback handler to provide the access token. Clients written in other languages, such as Go, must use custom code to connect to the authorization server and obtain the access token. With OAUTHBEARER, the client initiates a session with the Kafka broker for credentials exchange, where credentials take the form of a bearer token provided by the callback handler. 
Using the callbacks, you can configure token provision in one of three ways: Client ID and Secret (by using the OAuth 2.0 client credentials mechanism) A long-lived access token, obtained manually at configuration time A long-lived refresh token, obtained manually at configuration time OAUTHBEARER is automatically enabled in the oauth listener configuration for the Kafka broker. You can set the enableOauthBearer property to true , though this is not required. # ... authentication: type: oauth # ... enableOauthBearer: true Note OAUTHBEARER authentication can only be used by Kafka clients that support the OAUTHBEARER mechanism at the protocol level. PLAIN overview PLAIN is a simple authentication mechanism used by all Kafka client tools (including developer tools such as kafkacat). To enable PLAIN to be used together with OAuth 2.0 authentication, AMQ Streams includes server-side callbacks and calls this OAuth 2.0 over PLAIN . With the AMQ Streams implementation of PLAIN, the client credentials are not stored in ZooKeeper. Instead, client credentials are handled centrally behind a compliant authorization server, similar to when OAUTHBEARER authentication is used. When used with the OAuth 2.0 over PLAIN callbacks, Kafka clients authenticate with Kafka brokers using either of the following methods: Client ID and secret (by using the OAuth 2.0 client credentials mechanism) A long-lived access token, obtained manually at configuration time The client must be enabled to use PLAIN authentication, and provide a username and password . If the password is prefixed with USDaccessToken: followed by the value of the access token, the Kafka broker will interpret the password as the access token. Otherwise, the Kafka broker will interpret the username as the client ID and the password as the client secret. If the password is set as the access token, the username must be set to the same principal name that the Kafka broker obtains from the access token. The process depends on how you configure username extraction using userNameClaim , fallbackUserNameClaim , fallbackUsernamePrefix , or userInfoEndpointUri . It also depends on your authorization server; in particular, how it maps client IDs to account names. To use PLAIN, you must enable it in the oauth listener configuration for the Kafka broker. In the following example, PLAIN is enabled in addition to OAUTHBEARER, which is enabled by default. If you want to use PLAIN only, you can disable OAUTHBEARER by setting enableOauthBearer to false . # ... authentication: type: oauth # ... enablePlain: true tokenEndpointUri: https:// OAUTH-SERVER-ADDRESS /auth/realms/external/protocol/openid-connect/token Additional resources Section 4.4.6.2, "Configuring OAuth 2.0 support for Kafka brokers" 4.4.2. OAuth 2.0 Kafka broker configuration Kafka broker configuration for OAuth 2.0 involves: Creating the OAuth 2.0 client in the authorization server Configuring OAuth 2.0 authentication in the Kafka custom resource Note In relation to the authorization server, Kafka brokers and Kafka clients are both regarded as OAuth 2.0 clients. 4.4.2.1. 
OAuth 2.0 client configuration on an authorization server To configure a Kafka broker to validate the token received during session initiation, the recommended approach is to create an OAuth 2.0 client definition in an authorization server, configured as confidential , with the following client credentials enabled: Client ID of kafka (for example) Client ID and Secret as the authentication mechanism Note You only need to use a client ID and secret when using a non-public introspection endpoint of the authorization server. The credentials are not typically required when using public authorization server endpoints, as with fast local JWT token validation. 4.4.2.2. OAuth 2.0 authentication configuration in the Kafka cluster To use OAuth 2.0 authentication in the Kafka cluster, you specify, for example, a TLS listener configuration for your Kafka cluster custom resource with the authentication method oauth : Assigining the authentication method type for OAuth 2.0 apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # ... listeners: - name: tls port: 9093 type: internal tls: true authentication: type: oauth #... You can configure plain , tls and external listeners, but it is recommended not to use plain listeners or external listeners with disabled TLS encryption with OAuth 2.0 as this creates a vulnerability to network eavesdropping and unauthorized access through token theft. You configure an external listener with type: oauth for a secure transport layer to communicate with the client. Using OAuth 2.0 with an external listener # ... listeners: - name: external port: 9094 type: loadbalancer tls: true authentication: type: oauth #... The tls property is false by default, so it must be enabled. When you have defined the type of authentication as OAuth 2.0, you add configuration based on the type of validation, either as fast local JWT validation or token validation using an introspection endpoint . The procedure to configure OAuth 2.0 for listeners, with descriptions and examples, is described in Configuring OAuth 2.0 support for Kafka brokers . 4.4.2.3. Fast local JWT token validation configuration Fast local JWT token validation checks a JWT token signature locally. The local check ensures that a token: Conforms to type by containing a ( typ ) claim value of Bearer for an access token Is valid (not expired) Has an issuer that matches a validIssuerURI You specify a validIssuerURI attribute when you configure the listener, so that any tokens not issued by the authorization server are rejected. The authorization server does not need to be contacted during fast local JWT token validation. You activate fast local JWT token validation by specifying a jwksEndpointUri attribute, the endpoint exposed by the OAuth 2.0 authorization server. The endpoint contains the public keys used to validate signed JWT tokens, which are sent as credentials by Kafka clients. Note All communication with the authorization server should be performed using TLS encryption. You can configure a certificate truststore as an OpenShift Secret in your AMQ Streams project namespace, and use a tlsTrustedCertificates attribute to point to the OpenShift Secret containing the truststore file. You might want to configure a userNameClaim to properly extract a username from the JWT token. If you want to use Kafka ACL authorization, you need to identify the user by their username during authentication. (The sub claim in JWT tokens is typically a unique ID, not a username.) 
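Before applying fast local JWT validation (the full listener example follows below), it can be useful to confirm that the JWKS endpoint is reachable and serving signing keys. This check is a sketch and not part of the original procedure; the realm name tls and the <auth-server-address> placeholder are taken from the example that follows.

# curl -s https://<auth-server-address>/auth/realms/tls/protocol/openid-connect/certs

A healthy response is a JSON document with a keys array containing the public keys that the broker will use to verify token signatures. If the authorization server uses a certificate that is not publicly trusted, pass the CA certificate to curl with --cacert.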
Example configuration for fast local JWT token validation apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: #... listeners: - name: tls port: 9093 type: internal tls: true authentication: type: oauth validIssuerUri: < https://<auth-server-address>/auth/realms/tls > jwksEndpointUri: < https://<auth-server-address>/auth/realms/tls/protocol/openid-connect/certs > userNameClaim: preferred_username maxSecondsWithoutReauthentication: 3600 tlsTrustedCertificates: - secretName: oauth-server-cert certificate: ca.crt #... 4.4.2.4. OAuth 2.0 introspection endpoint configuration Token validation using an OAuth 2.0 introspection endpoint treats a received access token as opaque. The Kafka broker sends an access token to the introspection endpoint, which responds with the token information necessary for validation. Importantly, it returns up-to-date information if the specific access token is valid, and also information about when the token expires. To configure OAuth 2.0 introspection-based validation, you specify an introspectionEndpointUri attribute rather than the jwksEndpointUri attribute specified for fast local JWT token validation. Depending on the authorization server, you typically have to specify a clientId and clientSecret , because the introspection endpoint is usually protected. Example configuration for an introspection endpoint apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: listeners: - name: tls port: 9093 type: internal tls: true authentication: type: oauth clientId: kafka-broker clientSecret: secretName: my-cluster-oauth key: clientSecret validIssuerUri: < https://<auth-server-address>/auth/realms/tls > introspectionEndpointUri: < https://<auth-server-address>/auth/realms/tls/protocol/openid-connect/token/introspect > userNameClaim: preferred_username maxSecondsWithoutReauthentication: 3600 tlsTrustedCertificates: - secretName: oauth-server-cert certificate: ca.crt 4.4.3. Session re-authentication for Kafka brokers You can configure oauth listeners to use Kafka session re-authentication for OAuth 2.0 sessions between Kafka clients and Kafka brokers. This mechanism enforces the expiry of an authenticated session between the client and the broker after a defined period of time. When a session expires, the client immediately starts a new session by reusing the existing connection rather than dropping it. Session re-authentication is disabled by default. To enable it, you set a time value for maxSecondsWithoutReauthentication in the oauth listener configuration. The same property is used to configure session re-authentication for OAUTHBEARER and PLAIN authentication. For an example configuration, see Section 4.4.6.2, "Configuring OAuth 2.0 support for Kafka brokers" . Session re-authentication must be supported by the Kafka client libraries used by the client. Session re-authentication can be used with fast local JWT or introspection endpoint token validation. Client re-authentication When the broker's authenticated session expires, the client must re-authenticate to the existing session by sending a new, valid access token to the broker, without dropping the connection. If token validation is successful, a new client session is started using the existing connection. If the client fails to re-authenticate, the broker will close the connection if further attempts are made to send or receive messages. Java clients that use Kafka client library 2.2 or later automatically re-authenticate if the re-authentication mechanism is enabled on the broker. 
Session re-authentication also applies to refresh tokens, if used. When the session expires, the client refreshes the access token by using its refresh token. The client then uses the new access token to re-authenticate to the existing session. Session expiry for OAUTHBEARER and PLAIN When session re-authentication is configured, session expiry works differently for OAUTHBEARER and PLAIN authentication. For OAUTHBEARER and PLAIN, using the client ID and secret method: The broker's authenticated session will expire at the configured maxSecondsWithoutReauthentication . The session will expire earlier if the access token expires before the configured time. For PLAIN using the long-lived access token method: The broker's authenticated session will expire at the configured maxSecondsWithoutReauthentication . Re-authentication will fail if the access token expires before the configured time. Although session re-authentication is attempted, PLAIN has no mechanism for refreshing tokens. If maxSecondsWithoutReauthentication is not configured, OAUTHBEARER and PLAIN clients can remain connected to brokers indefinitely, without needing to re-authenticate. Authenticated sessions do not end with access token expiry. However, this can be considered when configuring authorization, for example, by using keycloak authorization or installing a custom authorizer. Additional resources Section 4.4.2, "OAuth 2.0 Kafka broker configuration" Section 4.4.6.2, "Configuring OAuth 2.0 support for Kafka brokers" KafkaListenerAuthenticationOAuth schema reference KIP-368 4.4.4. OAuth 2.0 Kafka client configuration A Kafka client is configured with either: The credentials required to obtain a valid access token from an authorization server (client ID and Secret) A valid long-lived access token or refresh token, obtained using tools provided by an authorization server The only information ever sent to the Kafka broker is an access token. The credentials used to authenticate with the authorization server to obtain the access token are never sent to the broker. When a client obtains an access token, no further communication with the authorization server is needed. The simplest mechanism is authentication with a client ID and Secret. Using a long-lived access token, or a long-lived refresh token, adds more complexity because there is an additional dependency on authorization server tools. Note If you are using long-lived access tokens, you may need to configure the client in the authorization server to increase the maximum lifetime of the token. If the Kafka client is not configured with an access token directly, the client exchanges credentials for an access token during Kafka session initiation by contacting the authorization server. The Kafka client exchanges either: Client ID and Secret Client ID, refresh token, and (optionally) a Secret 4.4.5. OAuth 2.0 client authentication flow In this section, we explain and visualize the communication flow between Kafka client, Kafka broker, and authorization server during Kafka session initiation. The flow depends on the client and server configuration. When a Kafka client sends an access token as credentials to a Kafka broker, the token needs to be validated. 
Depending on the authorization server used, and the configuration options available, you may prefer to use: Fast local token validation based on JWT signature checking and local token introspection, without contacting the authorization server An OAuth 2.0 introspection endpoint provided by the authorization server Using fast local token validation requires the authorization server to provide a JWKS endpoint with public certificates that are used to validate signatures on the tokens. Another option is to use an OAuth 2.0 introspection endpoint on the authorization server. Each time a new Kafka broker connection is established, the broker passes the access token received from the client to the authorization server, and checks the response to confirm whether or not the token is valid. Kafka client credentials can also be configured for: Direct local access using a previously generated long-lived access token Contact with the authorization server for a new access token to be issued Note An authorization server might only allow the use of opaque access tokens, which means that local token validation is not possible. 4.4.5.1. Example client authentication flows Here you can see the communication flows, for different configurations of Kafka clients and brokers, during Kafka session authentication. Client using client ID and secret, with broker delegating validation to authorization server Client using client ID and secret, with broker performing fast local token validation Client using long-lived access token, with broker delegating validation to authorization server Client using long-lived access token, with broker performing fast local validation Client using client ID and secret, with broker delegating validation to authorization server Kafka client requests access token from authorization server, using client ID and secret, and optionally a refresh token. Authorization server generates a new access token. Kafka client authenticates with the Kafka broker using the SASL OAUTHBEARER mechanism to pass the access token. Kafka broker validates the access token by calling a token introspection endpoint on authorization server, using its own client ID and secret. Kafka client session is established if the token is valid. Client using client ID and secret, with broker performing fast local token validation Kafka client authenticates with authorization server from the token endpoint, using a client ID and secret, and optionally a refresh token. Authorization server generates a new access token. Kafka client authenticates with the Kafka broker using the SASL OAUTHBEARER mechanism to pass the access token. Kafka broker validates the access token locally using a JWT token signature check, and local token introspection. Client using long-lived access token, with broker delegating validation to authorization server Kafka client authenticates with the Kafka broker using the SASL OAUTHBEARER mechanism to pass the long-lived access token. Kafka broker validates the access token by calling a token introspection endpoint on authorization server, using its own client ID and secret. Kafka client session is established if the token is valid. Client using long-lived access token, with broker performing fast local validation Kafka client authenticates with the Kafka broker using the SASL OAUTHBEARER mechanism to pass the long-lived access token. Kafka broker validates the access token locally using JWT token signature check, and local token introspection. 
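The first step of the client ID and secret flows above, where the client obtains an access token from the authorization server, can be reproduced manually with curl for inspection. This is a sketch for illustration only: the realm name external, the client ID kafka-client, and the secret are placeholders, and the token endpoint path shown matches the Red Hat Single Sign-On style endpoints used elsewhere in this chapter.

# curl -s -X POST https://<auth-server-address>/auth/realms/external/protocol/openid-connect/token -d grant_type=client_credentials -d client_id=kafka-client -d client_secret=<client-secret>

The JSON response contains the access_token value that the Kafka client then passes to the broker over the SASL OAUTHBEARER mechanism, along with its expiry time.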
Warning Fast local JWT token signature validation is suitable only for short-lived tokens as there is no check with the authorization server if a token has been revoked. Token expiration is written into the token, but revocation can happen at any time, so cannot be accounted for without contacting the authorization server. Any issued token would be considered valid until it expires. 4.4.6. Configuring OAuth 2.0 authentication OAuth 2.0 is used for interaction between Kafka clients and AMQ Streams components. In order to use OAuth 2.0 for AMQ Streams, you must: Deploy an authorization server and configure the deployment to integrate with AMQ Streams Deploy or update the Kafka cluster with Kafka broker listeners configured to use OAuth 2.0 Update your Java-based Kafka clients to use OAuth 2.0 Update Kafka component clients to use OAuth 2.0 4.4.6.1. Configuring Red Hat Single Sign-On as an OAuth 2.0 authorization server This procedure describes how to deploy Red Hat Single Sign-On as an authorization server and configure it for integration with AMQ Streams. The authorization server provides a central point for authentication and authorization, and management of users, clients, and permissions. Red Hat Single Sign-On has a concept of realms where a realm represents a separate set of users, clients, permissions, and other configuration. You can use a default master realm , or create a new one. Each realm exposes its own OAuth 2.0 endpoints, which means that application clients and application servers all need to use the same realm. To use OAuth 2.0 with AMQ Streams, you use a deployment of Red Hat Single Sign-On to create and manage authentication realms. Note If you already have Red Hat Single Sign-On deployed, you can skip the deployment step and use your current deployment. Before you begin You will need to be familiar with using Red Hat Single Sign-On. For deployment and administration instructions, see: Red Hat Single Sign-On for OpenShift Server Administration Guide Prerequisites AMQ Streams and Kafka is running For the Red Hat Single Sign-On deployment: Check the Red Hat Single Sign-On Supported Configurations Installation requires a user with a cluster-admin role, such as system:admin Procedure Deploy Red Hat Single Sign-On to your OpenShift cluster. Check the progress of the deployment in your OpenShift web console. Log in to the Red Hat Single Sign-On Admin Console to create the OAuth 2.0 policies for AMQ Streams. Login details are provided when you deploy Red Hat Single Sign-On. Create and enable a realm. You can use an existing master realm. Adjust the session and token timeouts for the realm, if required. Create a client called kafka-broker . From the Settings tab, set: Access Type to Confidential Standard Flow Enabled to OFF to disable web login for this client Service Accounts Enabled to ON to allow this client to authenticate in its own name Click Save before continuing. From the Credentials tab, take a note of the secret for using in your AMQ Streams Kafka cluster configuration. Repeat the client creation steps for any application client that will connect to your Kafka brokers. Create a definition for each new client. You will use the names as client IDs in your configuration. What to do After deploying and configuring the authorization server, configure the Kafka brokers to use OAuth 2.0 . 4.4.6.2. 
4.4.6.2. Configuring OAuth 2.0 support for Kafka brokers This procedure describes how to configure Kafka brokers so that the broker listeners are enabled to use OAuth 2.0 authentication using an authorization server. We advise use of OAuth 2.0 over an encrypted interface through configuration of TLS listeners. Plain listeners are not recommended. If the authorization server is using certificates signed by a trusted CA and matching the OAuth 2.0 server hostname, the TLS connection works using the default settings. Otherwise, you may need to configure the truststore with proper certificates or disable the certificate hostname validation. When configuring the Kafka broker, you have two options for the mechanism used to validate the access token during OAuth 2.0 authentication of the newly connected Kafka client: Configuring fast local JWT token validation Configuring token validation using an introspection endpoint Before you start For more information on the configuration of OAuth 2.0 authentication for Kafka broker listeners, see: KafkaListenerAuthenticationOAuth schema reference Managing access to Kafka Prerequisites AMQ Streams and Kafka are running An OAuth 2.0 authorization server is deployed Procedure Update the Kafka broker configuration ( Kafka.spec.kafka ) of your Kafka resource in an editor. oc edit kafka my-cluster Configure the Kafka broker listeners configuration. The configuration for each type of listener does not have to be the same, as they are independent. The examples here show the configuration options as configured for external listeners. Example 1: Configuring fast local JWT token validation #... - name: external port: 9094 type: loadbalancer tls: true authentication: type: oauth 1 validIssuerUri: < https://<auth-server-address>/auth/realms/external > 2 jwksEndpointUri: < https://<auth-server-address>/auth/realms/external/protocol/openid-connect/certs > 3 userNameClaim: preferred_username 4 maxSecondsWithoutReauthentication: 3600 5 tlsTrustedCertificates: 6 - secretName: oauth-server-cert certificate: ca.crt disableTlsHostnameVerification: true 7 jwksExpirySeconds: 360 8 jwksRefreshSeconds: 300 9 jwksMinRefreshPauseSeconds: 1 10 1 Listener authentication type set to oauth . 2 URI of the token issuer used for authentication. 3 URI of the JWKS certificate endpoint used for local JWT validation. 4 The token claim (or key) that contains the actual user name in the token. The user name is the principal used to identify the user. The userNameClaim value will depend on the authentication flow and the authorization server used. 5 (Optional) Activates the Kafka re-authentication mechanism that enforces session expiry to the same length of time as the access token. If the specified value is less than the time left for the access token to expire, then the client will have to re-authenticate before the actual token expiry. By default, the session does not expire when the access token expires, and the client does not attempt re-authentication. 6 (Optional) Trusted certificates for TLS connection to the authorization server. 7 (Optional) Disable TLS hostname verification. Default is false . 8 The duration the JWKS certificates are considered valid before they expire. Default is 360 seconds. If you specify a longer time, consider the risk of allowing access to revoked certificates. 9 The period between refreshes of JWKS certificates. The interval must be at least 60 seconds shorter than the expiry interval. Default is 300 seconds.
10 The minimum pause in seconds between consecutive attempts to refresh JWKS public keys. When an unknown signing key is encountered, the JWKS keys refresh is scheduled outside the regular periodic schedule with at least the specified pause since the last refresh attempt. The refreshing of keys follows the rule of exponential backoff, retrying on unsuccessful refreshes with ever increasing pause, until it reaches jwksRefreshSeconds . The default value is 1. Example 2: Configuring token validation using an introspection endpoint - name: external port: 9094 type: loadbalancer tls: true authentication: type: oauth validIssuerUri: < https://<auth-server-address>/auth/realms/external > introspectionEndpointUri: < https://<auth-server-address>/auth/realms/external/protocol/openid-connect/token/introspect > 1 clientId: kafka-broker 2 clientSecret: 3 secretName: my-cluster-oauth key: clientSecret userNameClaim: preferred_username 4 maxSecondsWithoutReauthentication: 3600 5 1 URI of the token introspection endpoint. 2 Client ID to identify the client. 3 The client secret and client ID are used for authentication. 4 The token claim (or key) that contains the actual user name in the token. The user name is the principal used to identify the user. The userNameClaim value will depend on the authorization server used. 5 (Optional) Activates the Kafka re-authentication mechanism that enforces session expiry to the same length of time as the access token. If the specified value is less than the time left for the access token to expire, then the client will have to re-authenticate before the actual token expiry. By default, the session does not expire when the access token expires, and the client does not attempt re-authentication. Depending on how you apply OAuth 2.0 authentication, and the type of authorization server, there are additional (optional) configuration settings you can use: # ... authentication: type: oauth # ... checkIssuer: false 1 checkAudience: true 2 fallbackUserNameClaim: client_id 3 fallbackUserNamePrefix: client-account- 4 validTokenType: bearer 5 userInfoEndpointUri: https://OAUTH-SERVER-ADDRESS/auth/realms/external/protocol/openid-connect/userinfo 6 enableOauthBearer: false 7 enablePlain: true 8 tokenEndpointUri: https://OAUTH-SERVER-ADDRESS/auth/realms/external/protocol/openid-connect/token 9 customClaimCheck: "@.custom == 'custom-value'" 10 clientAudience: AUDIENCE 11 clientScope: SCOPE 12 1 If your authorization server does not provide an iss claim, it is not possible to perform an issuer check. In this situation, set checkIssuer to false and do not specify a validIssuerUri . Default is true . 2 If your authorization server provides an aud (audience) claim, and you want to enforce an audience check, set checkAudience to true . Audience checks identify the intended recipients of tokens. As a result, the Kafka broker will reject tokens that do not have its clientId in their aud claim. Default is false . 3 An authorization server may not provide a single attribute to identify both regular users and clients. When a client authenticates in its own name, the server might provide a client ID . When a user authenticates using a username and password, to obtain a refresh token or an access token, the server might provide a username attribute in addition to a client ID. Use this fallback option to specify the username claim (attribute) to use if a primary user ID attribute is not available.
4 In situations where fallbackUserNameClaim is applicable, it may also be necessary to prevent name collisions between the values of the username claim, and those of the fallback username claim. Consider a situation where a client called producer exists, but also a regular user called producer exists. In order to differentiate between the two, you can use this property to add a prefix to the user ID of the client. 5 (Only applicable when using introspectionEndpointUri ) Depending on the authorization server you are using, the introspection endpoint may or may not return the token type attribute, or it may contain different values. You can specify a valid token type value that the response from the introspection endpoint has to contain. 6 (Only applicable when using introspectionEndpointUri ) The authorization server may be configured or implemented in such a way as to not provide any identifiable information in an Introspection Endpoint response. In order to obtain the user ID, you can configure the URI of the userinfo endpoint as a fallback. The userNameClaim , fallbackUserNameClaim , and fallbackUserNamePrefix settings are applied to the response of the userinfo endpoint. 7 Set this to false to disable the OAUTHBEARER mechanism on the listener. At least one of PLAIN or OAUTHBEARER has to be enabled. Default is true . 8 Set to true to enable PLAIN authentication on the listener, which is supported by all clients on all platforms. The Kafka client must enable the PLAIN mechanism and set the username and password . PLAIN can be used to authenticate either by using the OAuth access token, or the OAuth clientId and secret (the client credentials). The behavior is additionally controlled by whether tokenEndpointUri is specified or not. Default is false . If tokenEndpointUri is specified and the client sets password to start with the string $accessToken: , the server interprets the password as the access token and the username as the account username. Otherwise, the username is interpreted as the clientId and the password as the client secret , which the broker uses to obtain the access token in the client's name. If tokenEndpointUri is not specified, the password is always interpreted as an access token and the username is always interpreted as the account username, which must match the principal id extracted from the token. This is known as 'no-client-credentials' mode because the client must always obtain the access token by itself, and can't use clientId and secret . 9 Additional configuration for the PLAIN mechanism to allow clients to authenticate by passing clientId and secret as username and password as described in the previous point. If not specified, the clients can authenticate over PLAIN only by passing an access token as the password parameter. 10 Additional custom rules can be imposed on the JWT access token during validation by setting this to a JsonPath filter query. If the access token does not contain the necessary data, it is rejected. When using the introspectionEndpointUri , the custom check is applied to the introspection endpoint response JSON. 11 (Optional) An audience parameter passed to the token endpoint. An audience is used when obtaining an access token for inter-broker authentication. It is also used in the name of a client for OAuth 2.0 over PLAIN client authentication using a clientId and secret . This only affects the ability to obtain the token, and the content of the token, depending on the authorization server. It does not affect token validation rules by the listener.
12 (Optional) A scope parameter passed to the token endpoint. A scope is used when obtaining an access token for inter-broker authentication. It is also used in the name of a client for OAuth 2.0 over PLAIN client authentication using a clientId and secret . This only affects the ability to obtain the token, and the content of the token, depending on the authorization server. It does not affect token validation rules by the listener. Save and exit the editor, then wait for rolling updates to complete. Check the update in the logs or by watching the pod state transitions: oc logs -f ${POD_NAME} -c ${CONTAINER_NAME} oc get pod -w The rolling update configures the brokers to use OAuth 2.0 authentication. What to do Configure your Kafka clients to use OAuth 2.0 4.4.6.3. Configuring Kafka Java clients to use OAuth 2.0 This procedure describes how to configure Kafka producer and consumer APIs to use OAuth 2.0 for interaction with Kafka brokers. Add a client callback plugin to your pom.xml file, and configure the system properties. Prerequisites AMQ Streams and Kafka are running An OAuth 2.0 authorization server is deployed and configured for OAuth access to Kafka brokers Kafka brokers are configured for OAuth 2.0 Procedure Add the client library with OAuth 2.0 support to the pom.xml file for the Kafka client: <dependency> <groupId>io.strimzi</groupId> <artifactId>kafka-oauth-client</artifactId> <version>{oauth-version}</version> </dependency> Configure the system properties for the callback: For example: System.setProperty(ClientConfig.OAUTH_TOKEN_ENDPOINT_URI, " https://<auth-server-address>/auth/realms/master/protocol/openid-connect/token "); 1 System.setProperty(ClientConfig.OAUTH_CLIENT_ID, " <client-name> "); 2 System.setProperty(ClientConfig.OAUTH_CLIENT_SECRET, " <client-secret> "); 3 1 URI of the authorization server token endpoint. 2 Client ID, which is the name used when creating the client in the authorization server. 3 Client secret created when creating the client in the authorization server. Enable the SASL OAUTHBEARER mechanism on a TLS encrypted connection in the Kafka client configuration: For example: props.put("sasl.jaas.config", "org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required;"); props.put("security.protocol", "SASL_SSL"); 1 props.put("sasl.mechanism", "OAUTHBEARER"); props.put("sasl.login.callback.handler.class", "io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler"); 1 Here we use SASL_SSL for use over TLS connections. Use SASL_PLAINTEXT over unencrypted connections. Verify that the Kafka client can access the Kafka brokers. What to do Configure Kafka components to use OAuth 2.0 4.4.6.4. Configuring OAuth 2.0 for Kafka components This procedure describes how to configure Kafka components to use OAuth 2.0 authentication using an authorization server. You can configure authentication for: Kafka Connect Kafka MirrorMaker Kafka Bridge In this scenario, the Kafka component and the authorization server are running in the same cluster. Before you start For more information on the configuration of OAuth 2.0 authentication for Kafka components, see: KafkaClientAuthenticationOAuth schema reference Prerequisites AMQ Streams and Kafka are running An OAuth 2.0 authorization server is deployed and configured for OAuth access to Kafka brokers Kafka brokers are configured for OAuth 2.0 Procedure Create a client secret and mount it to the component as an environment variable.
For example, here we are creating a client Secret for the Kafka Bridge: apiVersion: v1 kind: Secret metadata: name: my-bridge-oauth type: Opaque data: clientSecret: MGQ1OTRmMzYtZTllZS00MDY2LWI5OGEtMTM5MzM2NjdlZjQw 1 1 The clientSecret key must be in base64 format. Create or edit the resource for the Kafka component so that OAuth 2.0 authentication is configured for the authentication property. For OAuth 2.0 authentication, you can use: Client ID and secret Client ID and refresh token Access token TLS KafkaClientAuthenticationOAuth schema reference provides examples of each . For example, here OAuth 2.0 is assigned to the Kafka Bridge client using a client ID and secret, and TLS: apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # ... authentication: type: oauth 1 tokenEndpointUri: https://<auth-server-address>/auth/realms/master/protocol/openid-connect/token 2 clientId: kafka-bridge clientSecret: secretName: my-bridge-oauth key: clientSecret tlsTrustedCertificates: 3 - secretName: oauth-server-cert certificate: tls.crt 1 Authentication type set to oauth . 2 URI of the token endpoint for authentication. 3 Trusted certificates for TLS connection to the authorization server. Depending on how you apply OAuth 2.0 authentication, and the type of authorization server, there are additional configuration options you can use: # ... spec: # ... authentication: # ... disableTlsHostnameVerification: true 1 checkAccessTokenType: false 2 accessTokenIsJwt: false 3 scope: any 4 audience: kafka 5 1 (Optional) Disable TLS hostname verification. Default is false . 2 If the authorization server does not return a typ (type) claim inside the JWT token, you can apply checkAccessTokenType: false to skip the token type check. Default is true . 3 If you are using opaque tokens, you can apply accessTokenIsJwt: false so that access tokens are not treated as JWT tokens. 4 (Optional) The scope for requesting the token from the token endpoint. An authorization server may require a client to specify the scope. In this case it is any . 5 (Optional) The audience for requesting the token from the token endpoint. An authorization server may require a client to specify the audience. In this case it is kafka . Apply the changes to the deployment of your Kafka resource. oc apply -f your-file Check the update in the logs or by watching the pod state transitions: oc logs -f ${POD_NAME} -c ${CONTAINER_NAME} oc get pod -w The rolling updates configure the component for interaction with Kafka brokers using OAuth 2.0 authentication. 4.5. Using OAuth 2.0 token-based authorization If you are using OAuth 2.0 with Red Hat Single Sign-On for token-based authentication, you can also use Red Hat Single Sign-On to configure authorization rules to constrain client access to Kafka brokers. Authentication establishes the identity of a user. Authorization decides the level of access for that user. AMQ Streams supports the use of OAuth 2.0 token-based authorization through Red Hat Single Sign-On Authorization Services , which allows you to manage security policies and permissions centrally. Security policies and permissions defined in Red Hat Single Sign-On are used to grant access to resources on Kafka brokers. Users and clients are matched against policies that permit access to perform specific actions on Kafka brokers. Kafka allows all users full access to brokers by default, and also provides the AclAuthorizer plugin to configure authorization based on Access Control Lists (ACLs).
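For comparison, ACL-based authorization is enabled on the Kafka custom resource with the simple authorization type. A minimal sketch (the super user entry is illustrative):

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    authorization:
      type: simple              # uses the AclAuthorizer plugin
      superUsers:
        - CN=my-admin-client    # illustrative super user principal
    # ...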
ZooKeeper stores ACL rules that grant or deny access to resources based on username . However, OAuth 2.0 token-based authorization with Red Hat Single Sign-On offers far greater flexibility on how you wish to implement access control to Kafka brokers. In addition, you can configure your Kafka brokers to use OAuth 2.0 authorization and ACLs. Additional resources Using OAuth 2.0 token-based authentication Kafka Authorization Red Hat Single Sign-On documentation 4.5.1. OAuth 2.0 authorization mechanism OAuth 2.0 authorization in AMQ Streams uses Red Hat Single Sign-On server Authorization Services REST endpoints to extend token-based authentication with Red Hat Single Sign-On by applying defined security policies on a particular user, and providing a list of permissions granted on different resources for that user. Policies use roles and groups to match permissions to users. OAuth 2.0 authorization enforces permissions locally based on the received list of grants for the user from Red Hat Single Sign-On Authorization Services. 4.5.1.1. Kafka broker custom authorizer A Red Hat Single Sign-On authorizer ( KeycloakRBACAuthorizer ) is provided with AMQ Streams. To be able to use the Red Hat Single Sign-On REST endpoints for Authorization Services provided by Red Hat Single Sign-On, you configure a custom authorizer on the Kafka broker. The authorizer fetches a list of granted permissions from the authorization server as needed, and enforces authorization locally on the Kafka Broker, making rapid authorization decisions for each client request. 4.5.2. Configuring OAuth 2.0 authorization support This procedure describes how to configure Kafka brokers to use OAuth 2.0 authorization using Red Hat Single Sign-On Authorization Services. Before you begin Consider the access you require or want to limit for certain users. You can use a combination of Red Hat Single Sign-On groups , roles , clients , and users to configure access in Red Hat Single Sign-On. Typically, groups are used to match users based on organizational departments or geographical locations. And roles are used to match users based on their function. With Red Hat Single Sign-On, you can store users and groups in LDAP, whereas clients and roles cannot be stored this way. Storage and access to user data may be a factor in how you choose to configure authorization policies. Note Super users always have unconstrained access to a Kafka broker regardless of the authorization implemented on the Kafka broker. Prerequisites AMQ Streams must be configured to use OAuth 2.0 with Red Hat Single Sign-On for token-based authentication . You use the same Red Hat Single Sign-On server endpoint when you set up authorization. OAuth 2.0 authentication must be configured with the maxSecondsWithoutReauthentication option to enable re-authentication. Procedure Access the Red Hat Single Sign-On Admin Console or use the Red Hat Single Sign-On Admin CLI to enable Authorization Services for the Kafka broker client you created when setting up OAuth 2.0 authentication. Use Authorization Services to define resources, authorization scopes, policies, and permissions for the client. Bind the permissions to users and clients by assigning them roles and groups. Configure the Kafka brokers to use Red Hat Single Sign-On authorization by updating the Kafka broker configuration ( Kafka.spec.kafka ) of your Kafka resource in an editor. 
oc edit kafka my-cluster Configure the Kafka broker kafka configuration to use keycloak authorization, and to be able to access the authorization server and Authorization Services. For example: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... authorization: type: keycloak 1 tokenEndpointUri: < https://<auth-server-address>/auth/realms/external/protocol/openid-connect/token > 2 clientId: kafka 3 delegateToKafkaAcls: false 4 disableTlsHostnameVerification: false 5 superUsers: 6 - CN=fred - sam - CN=edward tlsTrustedCertificates: 7 - secretName: oauth-server-cert certificate: ca.crt grantsRefreshPeriodSeconds: 60 8 grantsRefreshPoolSize: 5 9 #... 1 Type keycloak enables Red Hat Single Sign-On authorization. 2 URI of the Red Hat Single Sign-On token endpoint. For production, always use HTTPS. When you configure token-based oauth authentication, you specify a jwksEndpointUri as the URI for local JWT validation. The hostname for the tokenEndpointUri URI must be the same. 3 The client ID of the OAuth 2.0 client definition in Red Hat Single Sign-On that has Authorization Services enabled. Typically, kafka is used as the ID. 4 (Optional) Delegate authorization to Kafka AclAuthorizer if access is denied by Red Hat Single Sign-On Authorization Services policies. Default is false . 5 (Optional) Disable TLS hostname verification. Default is false . 6 (Optional) Designated super users . 7 (Optional) Trusted certificates for TLS connection to the authorization server. 8 (Optional) The time between two consecutive grants refresh runs. That is the maximum time for active sessions to detect any permissions changes for the user on Red Hat Single Sign-On. The default value is 60. 9 (Optional) The number of threads to use to refresh (in parallel) the grants for the active sessions. The default value is 5. Save and exit the editor, then wait for rolling updates to complete. Check the update in the logs or by watching the pod state transitions: oc logs -f ${POD_NAME} -c kafka oc get pod -w The rolling update configures the brokers to use OAuth 2.0 authorization. Verify the configured permissions by accessing Kafka brokers as clients or users with specific roles, making sure they have the necessary access, or do not have the access they are not supposed to have. 4.5.3. Managing policies and permissions in Red Hat Single Sign-On Authorization Services This section describes the authorization models used by Red Hat Single Sign-On Authorization Services and Kafka, and defines the important concepts in each model. To grant permissions to access Kafka, you can map Red Hat Single Sign-On Authorization Services objects to Kafka resources by creating an OAuth client specification in Red Hat Single Sign-On. Kafka permissions are granted to user accounts or service accounts using Red Hat Single Sign-On Authorization Services rules. Examples are shown of the different user permissions required for common Kafka operations, such as creating and listing topics. 4.5.3.1. Kafka and Red Hat Single Sign-On authorization models overview Kafka and Red Hat Single Sign-On Authorization Services use different authorization models. Kafka authorization model Kafka's authorization model uses resource types . When a Kafka client performs an action on a broker, the broker uses the configured KeycloakRBACAuthorizer to check the client's permissions, based on the action and resource type.
Kafka uses five resource types to control access: Topic , Group , Cluster , TransactionalId , and DelegationToken . Each resource type has a set of available permissions. Topic Create Write Read Delete Describe DescribeConfigs Alter AlterConfigs Group Read Describe Delete Cluster Create Describe Alter DescribeConfigs AlterConfigs IdempotentWrite ClusterAction TransactionalId Describe Write DelegationToken Describe Red Hat Single Sign-On Authorization Services model The Red Hat Single Sign-On Authorization Services model has four concepts for defining and granting permissions: resources , authorization scopes , policies , and permissions . Resources A resource is a set of resource definitions that are used to match resources with permitted actions. A resource might be an individual topic, for example, or all topics with names starting with the same prefix. A resource definition is associated with a set of available authorization scopes, which represent a set of all actions available on the resource. Often, only a subset of these actions is actually permitted. Authorization scopes An authorization scope is a set of all the available actions on a specific resource definition. When you define a new resource, you add scopes from the set of all scopes. Policies A policy is an authorization rule that uses criteria to match against a list of accounts. Policies can match: Service accounts based on client ID or roles User accounts based on username, groups, or roles. Permissions A permission grants a subset of authorization scopes on a specific resource definition to a set of users. Additional resources Kafka authorization model 4.5.3.2. Map Red Hat Single Sign-On Authorization Services to the Kafka authorization model The Kafka authorization model is used as a basis for defining the Red Hat Single Sign-On roles and resources that will control access to Kafka. To grant Kafka permissions to user accounts or service accounts, you first create an OAuth client specification in Red Hat Single Sign-On for the Kafka broker. You then specify Red Hat Single Sign-On Authorization Services rules on the client. Typically, the client id of the OAuth client that represents the broker is kafka (the example files provided with AMQ Streams use kafka as the OAuth client id). Note If you have multiple Kafka clusters, you can use a single OAuth client ( kafka ) for all of them. This gives you a single, unified space in which to define and manage authorization rules. However, you can also use different OAuth client ids (for example, my-cluster-kafka or cluster-dev-kafka ) and define authorization rules for each cluster within each client configuration. The kafka client definition must have the Authorization Enabled option enabled in the Red Hat Single Sign-On Admin Console. All permissions exist within the scope of the kafka client. If you have different Kafka clusters configured with different OAuth client IDs, they each need a separate set of permissions even though they're part of the same Red Hat Single Sign-On realm. When the Kafka client uses OAUTHBEARER authentication, the Red Hat Single Sign-On authorizer ( KeycloakRBACAuthorizer ) uses the access token of the current session to retrieve a list of grants from the Red Hat Single Sign-On server. To retrieve the grants, the authorizer evaluates the Red Hat Single Sign-On Authorization Services policies and permissions. 
Authorization scopes for Kafka permissions An initial Red Hat Single Sign-On configuration usually involves uploading authorization scopes to create a list of all possible actions that can be performed on each Kafka resource type. This step is performed once only, before defining any permissions. You can add authorization scopes manually instead of uploading them. Authorization scopes must contain all the possible Kafka permissions regardless of the resource type: Create Write Read Delete Describe Alter DescribeConfigs AlterConfigs ClusterAction IdempotentWrite Note If you're certain you won't need a permission (for example, IdempotentWrite ), you can omit it from the list of authorization scopes. However, that permission won't be available to target on Kafka resources. Resource patterns for permissions checks Resource patterns are used for pattern matching against the targeted resources when performing permission checks. The general pattern format is RESOURCE-TYPE:PATTERN-NAME . The resource types mirror the Kafka authorization model. The pattern allows for two matching options: Exact matching (when the pattern does not end with * ) Prefix matching (when the pattern ends with * ) Example patterns for resources Topic:my-topic Topic:orders-* Group:orders-* Cluster:* Additionally, the general pattern format can be prefixed by kafka-cluster: CLUSTER-NAME followed by a comma, where CLUSTER-NAME refers to the metadata.name in the Kafka custom resource. Example patterns for resources with cluster prefix kafka-cluster:my-cluster,Topic:* kafka-cluster:*,Group:b_* When the kafka-cluster prefix is missing, it is assumed to be kafka-cluster:* . When defining a resource, you can associate it with a list of possible authorization scopes which are relevant to the resource. Set whatever actions make sense for the targeted resource type. Though you may add any authorization scope to any resource, only the scopes supported by the resource type are considered for access control. Policies for applying access permission Policies are used to target permissions to one or more user accounts or service accounts. Targeting can refer to: Specific user or service accounts Realm roles or client roles User groups JavaScript rules to match a client IP address A policy is given a unique name and can be reused to target multiple permissions to multiple resources. Permissions to grant access Use fine-grained permissions to pull together the policies, resources, and authorization scopes that grant access to users. The name of each permission should clearly define which permissions it grants to which users. For example, Dev Team B can read from topics starting with x_ . Additional resources For more information about how to configure permissions through Red Hat Single Sign-On Authorization Services, see Section 4.5.4, "Trying Red Hat Single Sign-On Authorization Services" . 4.5.3.3. Example permissions required for Kafka operations The following examples demonstrate the user permissions required for performing common operations on Kafka. Create a topic To create a topic, the Create permission is required for the specific topic, or for Cluster:kafka-cluster . bin/kafka-topics.sh --create --topic my-topic \ --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties List topics If a user has the Describe permission on a specified topic, the topic is listed. bin/kafka-topics.sh --list \ --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties Display topic details To display a topic's details, Describe and DescribeConfigs permissions are required on the topic.
bin/kafka-topics.sh --describe --topic my-topic \ --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties Produce messages to a topic To produce messages to a topic, Describe and Write permissions are required on the topic. If the topic hasn't been created yet, and topic auto-creation is enabled, the permissions to create a topic are required. bin/kafka-console-producer.sh --topic my-topic \ --broker-list my-cluster-kafka-bootstrap:9092 --producer.config=/tmp/config.properties Consume messages from a topic To consume messages from a topic, Describe and Read permissions are required on the topic. Consuming from the topic normally relies on storing the consumer offsets in a consumer group, which requires additional Describe and Read permissions on the consumer group. Two resources are needed for matching. For example: bin/kafka-console-consumer.sh --topic my-topic --group my-group-1 --from-beginning \ --bootstrap-server my-cluster-kafka-bootstrap:9092 --consumer.config /tmp/config.properties Produce messages to a topic using an idempotent producer As well as the permissions for producing to a topic, an additional IdempotentWrite permission is required on the Cluster resource. Two resources are needed for matching. For example: bin/kafka-console-producer.sh --topic my-topic \ --broker-list my-cluster-kafka-bootstrap:9092 --producer.config=/tmp/config.properties --producer-property enable.idempotence=true --request-required-acks -1 List consumer groups When listing consumer groups, only the groups on which the user has the Describe permission are returned. Alternatively, if the user has the Describe permission on the Cluster:kafka-cluster , all the consumer groups are returned. bin/kafka-consumer-groups.sh --list \ --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties Display consumer group details To display a consumer group's details, the Describe permission is required on the group and the topics associated with the group. bin/kafka-consumer-groups.sh --describe --group my-group-1 \ --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties Change topic configuration To change a topic's configuration, the Describe and Alter permissions are required on the topic. bin/kafka-topics.sh --alter --topic my-topic --partitions 2 \ --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties Display Kafka broker configuration In order to use kafka-configs.sh to get a broker's configuration, the DescribeConfigs permission is required on the Cluster:kafka-cluster . bin/kafka-configs.sh --entity-type brokers --entity-name 0 --describe --all \ --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties Change Kafka broker configuration To change a Kafka broker's configuration, DescribeConfigs and AlterConfigs permissions are required on Cluster:kafka-cluster . bin/kafka-configs.sh --entity-type brokers --entity-name 0 --alter --add-config log.cleaner.threads=2 \ --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties Delete a topic To delete a topic, the Describe and Delete permissions are required on the topic. bin/kafka-topics.sh --delete --topic my-topic \ --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties Select a lead partition To run leader selection for topic partitions, the Alter permission is required on the Cluster:kafka-cluster .
bin/kafka-leader-election.sh --topic my-topic --partition 0 --election-type PREFERRED \ --bootstrap-server my-cluster-kafka-bootstrap:9092 --admin.config /tmp/config.properties Reassign partitions To generate a partition reassignment file, Describe permissions are required on the topics involved. bin/kafka-reassign-partitions.sh --topics-to-move-json-file /tmp/topics-to-move.json --broker-list "0,1" --generate \ --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config /tmp/config.properties > /tmp/partition-reassignment.json To execute the partition reassignment, Describe and Alter permissions are required on Cluster:kafka-cluster . Also, Describe permissions are required on the topics involved. bin/kafka-reassign-partitions.sh --reassignment-json-file /tmp/partition-reassignment.json --execute \ --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config /tmp/config.properties To verify partition reassignment, Describe and AlterConfigs permissions are required on Cluster:kafka-cluster , and on each of the topics involved. bin/kafka-reassign-partitions.sh --reassignment-json-file /tmp/partition-reassignment.json --verify \ --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config /tmp/config.properties 4.5.4. Trying Red Hat Single Sign-On Authorization Services This example explains how to use Red Hat Single Sign-On Authorization Services with keycloak authorization. Use Red Hat Single Sign-On Authorization Services to enforce access restrictions on Kafka clients. Red Hat Single Sign-On Authorization Services use authorization scopes, policies and permissions to define and apply access control to resources. Red Hat Single Sign-On Authorization Services REST endpoints provide a list of granted permissions on resources for authenticated users. The list of grants (permissions) is fetched from the Red Hat Single Sign-On server as the first action after an authenticated session is established by the Kafka client. The list is refreshed in the background so that changes to the grants are detected. Grants are cached and enforced locally on the Kafka broker for each user session to provide fast authorization decisions. AMQ Streams provides two example files with the deployment artifacts for setting up Red Hat Single Sign-On: kafka-ephemeral-oauth-single-keycloak-authz.yaml An example Kafka custom resource configured for OAuth 2.0 token-based authorization using Red Hat Single Sign-On. You can use the custom resource to deploy a Kafka cluster that uses keycloak authorization and token-based oauth authentication. kafka-authz-realm.json An example Red Hat Single Sign-On realm configured with sample groups, users, roles and clients. You can import the realm into a Red Hat Single Sign-On instance to set up fine-grained permissions to access Kafka. If you want to try the example with Red Hat Single Sign-On, use these files to perform the tasks outlined in this section in the order shown. Accessing the Red Hat Single Sign-On Admin Console Deploying a Kafka cluster with Red Hat Single Sign-On authorization Preparing TLS connectivity for a CLI Kafka client session Checking authorized access to Kafka using a CLI Kafka client session Authentication When you configure token-based oauth authentication, you specify a jwksEndpointUri as the URI for local JWT validation. When you configure keycloak authorization, you specify a tokenEndpointUri as the URI of the Red Hat Single Sign-On token endpoint. The hostname for both URIs must be the same.
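For example, a working configuration pairs the two URIs on the same host. The following fragment is a sketch only; the host name is illustrative, and note that authentication is set on a listener under spec.kafka.listeners while authorization is set directly under spec.kafka :

# listener authentication and cluster authorization pointing at the same host
authentication:
  type: oauth
  jwksEndpointUri: https://sso.example.com/auth/realms/kafka-authz/protocol/openid-connect/certs
authorization:
  type: keycloak
  clientId: kafka
  tokenEndpointUri: https://sso.example.com/auth/realms/kafka-authz/protocol/openid-connect/token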
Targeted permissions with group or role policies In Red Hat Single Sign-On, confidential clients with service accounts enabled can authenticate to the server in their own name using a client ID and a secret. This is convenient for microservices that typically act in their own name, and not as agents of a particular user (like a web site). Service accounts can have roles assigned like regular users. They cannot, however, have groups assigned. As a consequence, if you want to target permissions to microservices using service accounts, you cannot use group policies, and should instead use role policies. Conversely, if you want to limit certain permissions only to regular user accounts where authentication with a username and password is required, you can achieve that as a side effect of using the group policies rather than the role policies. This is what is used in this example for permissions that start with ClusterManager . Performing cluster management is usually done interactively using CLI tools. It makes sense to require the user to log in before using the resulting access token to authenticate to the Kafka broker. In this case, the access token represents the specific user, rather than the client application. 4.5.4.1. Accessing the Red Hat Single Sign-On Admin Console Set up Red Hat Single Sign-On, then connect to its Admin Console and add the preconfigured realm. Use the example kafka-authz-realm.json file to import the realm. You can check the authorization rules defined for the realm in the Admin Console. The rules grant access to the resources on the Kafka cluster configured to use the example Red Hat Single Sign-On realm. Prerequisites A running OpenShift cluster. The AMQ Streams examples/security/keycloak-authorization/kafka-authz-realm.json file that contains the preconfigured realm. Procedure Install the Red Hat Single Sign-On server using the Red Hat Single Sign-On Operator as described in Server Installation and Configuration in the Red Hat Single Sign-On documentation. Wait until the Red Hat Single Sign-On instance is running. Get the external hostname to be able to access the Admin Console. NS=sso oc get ingress keycloak -n $NS In this example, we assume the Red Hat Single Sign-On server is running in the sso namespace. Get the password for the admin user. oc get -n $NS pod keycloak-0 -o yaml | less The password is stored as a secret, so get the configuration YAML file for the Red Hat Single Sign-On instance to identify the name of the secret ( secretKeyRef.name ). Use the name of the secret to obtain the clear text password. SECRET_NAME=credential-keycloak oc get -n $NS secret $SECRET_NAME -o yaml | grep PASSWORD | awk '{print $2}' | base64 -D In this example, we assume the name of the secret is credential-keycloak . Log in to the Admin Console with the username admin and the password you obtained. Use https:// HOSTNAME to access the OpenShift ingress. You can now upload the example realm to Red Hat Single Sign-On using the Admin Console. Click Add Realm to import the example realm. Add the examples/security/keycloak-authorization/kafka-authz-realm.json file, and then click Create . You now have kafka-authz as your current realm in the Admin Console. The default view displays the Master realm. In the Red Hat Single Sign-On Admin Console, go to Clients > kafka > Authorization > Settings and check that Decision Strategy is set to Affirmative . An affirmative policy means that at least one policy must be satisfied for a client to access the Kafka cluster.
In the Red Hat Single Sign-On Admin Console, go to Groups , Users , Roles and Clients to view the realm configuration. Groups Groups are used to create user groups and set user permissions. Groups are sets of users with a name assigned. They are used to compartmentalize users into geographical, organizational or departmental units. Groups can be linked to an LDAP identity provider. You can make a user a member of a group through a custom LDAP server admin user interface, for example, to grant permissions on Kafka resources. Users Users are used to create users. For this example, alice and bob are defined. alice is a member of the ClusterManager group and bob is a member of ClusterManager-my-cluster group. Users can be stored in an LDAP identity provider. Roles Roles mark users or clients as having certain permissions. Roles are a concept analogous to groups. They are usually used to tag users with organizational roles and have the requisite permissions. Roles cannot be stored in an LDAP identity provider. If LDAP is a requirement, you can use groups instead, and add Red Hat Single Sign-On roles to the groups so that when users are assigned a group they also get a corresponding role. Clients Clients can have specific configurations. For this example, kafka , kafka-cli , team-a-client , and team-b-client clients are configured. The kafka client is used by Kafka brokers to perform the necessary OAuth 2.0 communication for access token validation. This client also contains the authorization services resource definitions, policies, and authorization scopes used to perform authorization on the Kafka brokers. The authorization configuration is defined in the kafka client from the Authorization tab, which becomes visible when Authorization Enabled is switched on from the Settings tab. The kafka-cli client is a public client that is used by the Kafka command line tools when authenticating with username and password to obtain an access token or a refresh token. The team-a-client and team-b-client clients are confidential clients representing services with partial access to certain Kafka topics. In the Red Hat Single Sign-On Admin Console, go to Authorization > Permissions to see the granted permissions that use the resources and policies defined for the realm. For example, the kafka client has the following permissions: Dev Team A The Dev Team A realm role can write to topics that start with x_ on any cluster. This combines a resource called Topic:x_* , Describe and Write scopes, and the Dev Team A policy. The Dev Team A policy matches all users that have a realm role called Dev Team A . Dev Team B The Dev Team B realm role can read from topics that start with x_ on any cluster. This combines Topic:x_* , Group:x_* resources, Describe and Read scopes, and the Dev Team B policy. The Dev Team B policy matches all users that have a realm role called Dev Team B . Matching users and clients have the ability to read from topics, and update the consumed offsets for topics and consumer groups that have names starting with x_ . 4.5.4.2. Deploying a Kafka cluster with Red Hat Single Sign-On authorization Deploy a Kafka cluster configured to connect to the Red Hat Single Sign-On server. Use the example kafka-ephemeral-oauth-single-keycloak-authz.yaml file to deploy the Kafka cluster as a Kafka custom resource. The example deploys a single-node Kafka cluster with keycloak authorization and oauth authentication. 
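The key parts of the example custom resource look roughly like the following sketch; the file shipped with AMQ Streams is authoritative, and the ${SSO_HOST} placeholder is substituted when the cluster is deployed in the next procedure:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 1                              # single-node example cluster
    listeners:
      - name: tls
        port: 9093
        type: internal
        tls: true
        authentication:
          type: oauth                        # token-based oauth authentication
          validIssuerUri: https://${SSO_HOST}/auth/realms/kafka-authz
          jwksEndpointUri: https://${SSO_HOST}/auth/realms/kafka-authz/protocol/openid-connect/certs
          userNameClaim: preferred_username
          maxSecondsWithoutReauthentication: 3600
          tlsTrustedCertificates:
            - secretName: oauth-server-cert
              certificate: sso.crt
    authorization:
      type: keycloak                         # Red Hat Single Sign-On authorization
      clientId: kafka
      tokenEndpointUri: https://${SSO_HOST}/auth/realms/kafka-authz/protocol/openid-connect/token
      tlsTrustedCertificates:
        - secretName: oauth-server-cert
          certificate: sso.crt
      delegateToKafkaAcls: false

The oauth-server-cert secret and its sso.crt key correspond to the certificate secret created from /tmp/sso.crt in the next procedure.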
Prerequisites The Red Hat Single Sign-On authorization server is deployed to your OpenShift cluster and loaded with the example realm. The Cluster Operator is deployed to your OpenShift cluster. The AMQ Streams examples/security/keycloak-authorization/kafka-ephemeral-oauth-single-keycloak-authz.yaml custom resource. Procedure Use the hostname of the Red Hat Single Sign-On instance you deployed to prepare a truststore certificate for Kafka brokers to communicate with the Red Hat Single Sign-On server. SSO_HOST= SSO-HOSTNAME SSO_HOST_PORT=$SSO_HOST:443 STOREPASS=storepass echo "Q" | openssl s_client -showcerts -connect $SSO_HOST_PORT 2>/dev/null | awk ' /BEGIN CERTIFICATE/,/END CERTIFICATE/ { print $0 } ' > /tmp/sso.crt The certificate is required as OpenShift ingress is used to make a secure (HTTPS) connection. Deploy the certificate to OpenShift as a secret. oc create secret generic oauth-server-cert --from-file=/tmp/sso.crt -n $NS Set the hostname as an environment variable SSO_HOST= SSO-HOSTNAME Create and deploy the example Kafka cluster. cat examples/security/keycloak-authorization/kafka-ephemeral-oauth-single-keycloak-authz.yaml | sed -E 's#\${SSO_HOST}'"#$SSO_HOST#" | oc create -n $NS -f - 4.5.4.3. Preparing TLS connectivity for a CLI Kafka client session Create a new pod for an interactive CLI session. Set up a truststore with a Red Hat Single Sign-On certificate for TLS connectivity. The truststore is used to connect to Red Hat Single Sign-On and the Kafka broker. Prerequisites The Red Hat Single Sign-On authorization server is deployed to your OpenShift cluster and loaded with the example realm. In the Red Hat Single Sign-On Admin Console, check the roles assigned to the clients are displayed in Clients > Service Account Roles . The Kafka cluster configured to connect with Red Hat Single Sign-On is deployed to your OpenShift cluster. Procedure Run a new interactive pod container using the AMQ Streams Kafka image to connect to a running Kafka broker. NS=sso oc run -ti --restart=Never --image=registry.redhat.io/amq7/amq-streams-kafka-28-rhel8:1.8.4 kafka-cli -n $NS -- /bin/sh Note If oc times out waiting on the image download, subsequent attempts may result in an AlreadyExists error. Attach to the pod container. oc attach -ti kafka-cli -n $NS Use the hostname of the Red Hat Single Sign-On instance to prepare a certificate for client connection using TLS. SSO_HOST= SSO-HOSTNAME SSO_HOST_PORT=$SSO_HOST:443 STOREPASS=storepass echo "Q" | openssl s_client -showcerts -connect $SSO_HOST_PORT 2>/dev/null | awk ' /BEGIN CERTIFICATE/,/END CERTIFICATE/ { print $0 } ' > /tmp/sso.crt Create a truststore for TLS connection to the Kafka brokers. keytool -keystore /tmp/truststore.p12 -storetype pkcs12 -alias sso -storepass $STOREPASS -import -file /tmp/sso.crt -noprompt Use the Kafka bootstrap address as the hostname of the Kafka broker and the tls listener port (9093) to prepare a certificate for the Kafka broker. KAFKA_HOST_PORT=my-cluster-kafka-bootstrap:9093 STOREPASS=storepass echo "Q" | openssl s_client -showcerts -connect $KAFKA_HOST_PORT 2>/dev/null | awk ' /BEGIN CERTIFICATE/,/END CERTIFICATE/ { print $0 } ' > /tmp/my-cluster-kafka.crt Add the certificate for the Kafka broker to the truststore. keytool -keystore /tmp/truststore.p12 -storetype pkcs12 -alias my-cluster-kafka -storepass $STOREPASS -import -file /tmp/my-cluster-kafka.crt -noprompt Keep the session open to check authorized access. 4.5.4.4.
Checking authorized access to Kafka using a CLI Kafka client session Check the authorization rules applied through the Red Hat Single Sign-On realm using an interactive CLI session. Apply the checks using Kafka's example producer and consumer clients to create topics with user and service accounts that have different levels of access. Use the team-a-client and team-b-client clients to check the authorization rules. Use the alice admin user to perform additional administrative tasks on Kafka. The AMQ Streams Kafka image used in this example contains Kafka producer and consumer binaries. Prerequisites ZooKeeper and Kafka are running in the OpenShift cluster to be able to send and receive messages. The interactive CLI Kafka client session is started. Apache Kafka download . Setting up client and admin user configuration Prepare a Kafka configuration file with authentication properties for the team-a-client client. SSO_HOST= SSO-HOSTNAME cat > /tmp/team-a-client.properties << EOF security.protocol=SASL_SSL ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=$STOREPASS ssl.truststore.type=PKCS12 sasl.mechanism=OAUTHBEARER sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ oauth.client.id="team-a-client" \ oauth.client.secret="team-a-client-secret" \ oauth.ssl.truststore.location="/tmp/truststore.p12" \ oauth.ssl.truststore.password="$STOREPASS" \ oauth.ssl.truststore.type="PKCS12" \ oauth.token.endpoint.uri="https://$SSO_HOST/auth/realms/kafka-authz/protocol/openid-connect/token" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler EOF The SASL OAUTHBEARER mechanism is used. This mechanism requires a client ID and client secret, which means the client first connects to the Red Hat Single Sign-On server to obtain an access token. The client then connects to the Kafka broker and uses the access token to authenticate. Prepare a Kafka configuration file with authentication properties for the team-b-client client. cat > /tmp/team-b-client.properties << EOF security.protocol=SASL_SSL ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=$STOREPASS ssl.truststore.type=PKCS12 sasl.mechanism=OAUTHBEARER sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ oauth.client.id="team-b-client" \ oauth.client.secret="team-b-client-secret" \ oauth.ssl.truststore.location="/tmp/truststore.p12" \ oauth.ssl.truststore.password="$STOREPASS" \ oauth.ssl.truststore.type="PKCS12" \ oauth.token.endpoint.uri="https://$SSO_HOST/auth/realms/kafka-authz/protocol/openid-connect/token" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler EOF Authenticate admin user alice by using curl and performing a password grant authentication to obtain a refresh token. USERNAME=alice PASSWORD=alice-password GRANT_RESPONSE=$(curl -X POST "https://$SSO_HOST/auth/realms/kafka-authz/protocol/openid-connect/token" -H 'Content-Type: application/x-www-form-urlencoded' -d "grant_type=password&username=$USERNAME&password=$PASSWORD&client_id=kafka-cli&scope=offline_access" -s -k) REFRESH_TOKEN=$(echo $GRANT_RESPONSE | awk -F "refresh_token\":\"" '{printf $2}' | awk -F "\"" '{printf $1}') The refresh token is an offline token that is long-lived and does not expire. Prepare a Kafka configuration file with authentication properties for the admin user alice .
cat > /tmp/alice.properties << EOF security.protocol=SASL_SSL ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=$STOREPASS ssl.truststore.type=PKCS12 sasl.mechanism=OAUTHBEARER sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ oauth.refresh.token="$REFRESH_TOKEN" \ oauth.client.id="kafka-cli" \ oauth.ssl.truststore.location="/tmp/truststore.p12" \ oauth.ssl.truststore.password="$STOREPASS" \ oauth.ssl.truststore.type="PKCS12" \ oauth.token.endpoint.uri="https://$SSO_HOST/auth/realms/kafka-authz/protocol/openid-connect/token" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler EOF The kafka-cli public client is used for the oauth.client.id in the sasl.jaas.config . Since it's a public client it does not require a secret. The client authenticates with the refresh token that was obtained in the previous step. The refresh token requests an access token behind the scenes, which is then sent to the Kafka broker for authentication. Producing messages with authorized access Use the team-a-client configuration to check that you can produce messages to topics that start with a_ or x_ . Write to topic my-topic . bin/kafka-console-producer.sh --broker-list my-cluster-kafka-bootstrap:9093 --topic my-topic \ --producer.config=/tmp/team-a-client.properties First message This request returns a Not authorized to access topics: [my-topic] error. team-a-client has a Dev Team A role that gives it permission to perform any supported actions on topics that start with a_ , but can only write to topics that start with x_ . The topic named my-topic matches neither of those rules. Write to topic a_messages . bin/kafka-console-producer.sh --broker-list my-cluster-kafka-bootstrap:9093 --topic a_messages \ --producer.config /tmp/team-a-client.properties First message Second message Messages are produced to Kafka successfully. Press CTRL+C to exit the CLI application. Check the Kafka container log for a debug log of Authorization GRANTED for the request. oc logs my-cluster-kafka-0 -f -n $NS Consuming messages with authorized access Use the team-a-client configuration to consume messages from topic a_messages . Fetch messages from topic a_messages . bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic a_messages \ --from-beginning --consumer.config /tmp/team-a-client.properties The request returns an error because the Dev Team A role for team-a-client only has access to consumer groups that have names starting with a_ . Update the team-a-client properties to specify the custom consumer group it is permitted to use. bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic a_messages \ --from-beginning --consumer.config /tmp/team-a-client.properties --group a_consumer_group_1 The consumer receives all the messages from the a_messages topic. Administering Kafka with authorized access The team-a-client is an account without any cluster-level access, but it can be used with some administrative operations. List topics. bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/team-a-client.properties --list The a_messages topic is returned. List consumer groups. bin/kafka-consumer-groups.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/team-a-client.properties --list The a_consumer_group_1 consumer group is returned. Fetch details on the cluster configuration.
bin/kafka-configs.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/team-a-client.properties \ --entity-type brokers --describe --entity-default The request returns an error because the operation requires cluster-level permissions that team-a-client does not have. Using clients with different permissions Use the team-b-client configuration to produce messages to topics that start with b_ . Write to topic a_messages . bin/kafka-console-producer.sh --broker-list my-cluster-kafka-bootstrap:9093 --topic a_messages \ --producer.config /tmp/team-b-client.properties Message 1 This request returns a Not authorized to access topics: [a_messages] error. Write to topic b_messages . bin/kafka-console-producer.sh --broker-list my-cluster-kafka-bootstrap:9093 --topic b_messages \ --producer.config /tmp/team-b-client.properties Message 1 Message 2 Message 3 Messages are produced to Kafka successfully. Write to topic x_messages . bin/kafka-console-producer.sh --broker-list my-cluster-kafka-bootstrap:9093 --topic x_messages \ --producer.config /tmp/team-b-client.properties Message 1 A Not authorized to access topics: [x_messages] error is returned. The team-b-client can only read from topic x_messages . Write to topic x_messages using team-a-client . bin/kafka-console-producer.sh --broker-list my-cluster-kafka-bootstrap:9093 --topic x_messages \ --producer.config /tmp/team-a-client.properties Message 1 This request returns a Not authorized to access topics: [x_messages] error. The team-a-client can write to the x_messages topic, but it does not have permission to create a topic if it does not yet exist. Before team-a-client can write to the x_messages topic, an admin power user must create it with the correct configuration, such as the number of partitions and replicas. Managing Kafka with an authorized admin user Use admin user alice to manage Kafka. alice has full access to manage everything on any Kafka cluster. Create the x_messages topic as alice . bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/alice.properties \ --topic x_messages --create --replication-factor 1 --partitions 1 The topic is created successfully. List all topics as alice . bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/alice.properties --list bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/team-a-client.properties --list bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/team-b-client.properties --list Admin user alice can list all the topics, whereas team-a-client and team-b-client can only list the topics they have access to. The Dev Team A and Dev Team B roles both have Describe permission on topics that start with x_ , but they cannot see the other team's topics because they do not have Describe permissions on them. Use the team-a-client to produce messages to the x_messages topic: bin/kafka-console-producer.sh --broker-list my-cluster-kafka-bootstrap:9093 --topic x_messages \ --producer.config /tmp/team-a-client.properties Message 1 Message 2 Message 3 As alice created the x_messages topic, messages are produced to Kafka successfully. Use the team-b-client to produce messages to the x_messages topic.
bin/kafka-console-producer.sh --broker-list my-cluster-kafka-bootstrap:9093 --topic x_messages \ --producer.config /tmp/team-b-client.properties Message 4 Message 5 This request returns a Not authorized to access topics: [x_messages] error. Use the team-b-client to consume messages from the x_messages topic: bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages \ --from-beginning --consumer.config /tmp/team-b-client.properties --group x_consumer_group_b The consumer receives all the messages from the x_messages topic. Use the team-a-client to consume messages from the x_messages topic. bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages \ --from-beginning --consumer.config /tmp/team-a-client.properties --group x_consumer_group_a This request returns a Not authorized to access topics: [x_messages] error. Use the team-a-client to consume messages from a consumer group that begins with a_ . bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages \ --from-beginning --consumer.config /tmp/team-a-client.properties --group a_consumer_group_a This request returns a Not authorized to access topics: [x_messages] error. Dev Team A has no Read access on topics that start with x_ . Use alice to consume messages from the x_messages topic. bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages \ --from-beginning --consumer.config /tmp/alice.properties The messages are consumed from Kafka successfully. alice can read from or write to any topic. Use alice to read the cluster configuration. bin/kafka-configs.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/alice.properties \ --entity-type brokers --describe --entity-default The cluster configuration for this example is empty. Additional resources Server Installation and Configuration Map Red Hat Single Sign-On Authorization Services to the Kafka authorization model | [
"listeners: - name: plain port: 9092 type: internal tls: true authentication: type: scram-sha-512 - name: tls port: 9093 type: internal tls: true authentication: type: tls - name: external port: 9094 type: loadbalancer tls: true authentication: type: tls",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # authorization: type: simple superUsers: - CN=client_1 - user_2 - CN=client_3 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: type: tls #",
"apiVersion: v1 kind: Secret metadata: name: my-user labels: strimzi.io/kind: KafkaUser strimzi.io/cluster: my-cluster type: Opaque data: ca.crt: # Public key of the client CA user.crt: # User certificate that contains the public key of the user user.key: # Private key of the user user.p12: # PKCS #12 archive file for storing certificates and keys user.password: # Password for protecting the PKCS #12 archive file",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: type: scram-sha-512 #",
"apiVersion: v1 kind: Secret metadata: name: my-user labels: strimzi.io/kind: KafkaUser strimzi.io/cluster: my-cluster type: Opaque data: password: Z2VuZXJhdGVkcGFzc3dvcmQ= 1 sasl.jaas.config: b3JnLmFwYWNoZS5rYWZrYS5jb21tb24uc2VjdXJpdHkuc2NyYW0uU2NyYW1Mb2dpbk1vZHVsZSByZXF1aXJlZCB1c2VybmFtZT0ibXktdXNlciIgcGFzc3dvcmQ9ImdlbmVyYXRlZHBhc3N3b3JkIjsK 2",
"echo \"Z2VuZXJhdGVkcGFzc3dvcmQ=\" | base64 --decode",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: # quotas: producerByteRate: 1048576 1 consumerByteRate: 2097152 2 requestPercentage: 55 3 controllerMutationRate: 10 4",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # authorization: 1 type: simple superUsers: 2 - CN=client_1 - user_2 - CN=client_3 listeners: - name: tls port: 9093 type: internal tls: true authentication: type: tls 3 # zookeeper: #",
"apply -f KAFKA-CONFIG-FILE",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: 1 type: tls authorization: type: simple 2 acls: - resource: type: topic name: my-topic patternType: literal operation: Read - resource: type: topic name: my-topic patternType: literal operation: Describe - resource: type: group name: my-group patternType: literal operation: Read",
"apply -f USER-CONFIG-FILE",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # listeners: - name: tls port: 9093 type: internal tls: true authentication: type: tls networkPolicyPeers: - podSelector: matchLabels: app: kafka-client # zookeeper: #",
"apply -f your-file",
"authentication: type: oauth # enableOauthBearer: true",
"authentication: type: oauth # enablePlain: true tokenEndpointUri: https:// OAUTH-SERVER-ADDRESS /auth/realms/external/protocol/openid-connect/token",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # listeners: - name: tls port: 9093 type: internal tls: true authentication: type: oauth #",
"listeners: - name: external port: 9094 type: loadbalancer tls: true authentication: type: oauth #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # listeners: - name: tls port: 9093 type: internal tls: true authentication: type: oauth validIssuerUri: < https://<auth-server-address>/auth/realms/tls > jwksEndpointUri: < https://<auth-server-address>/auth/realms/tls/protocol/openid-connect/certs > userNameClaim: preferred_username maxSecondsWithoutReauthentication: 3600 tlsTrustedCertificates: - secretName: oauth-server-cert certificate: ca.crt #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: listeners: - name: tls port: 9093 type: internal tls: true authentication: type: oauth clientId: kafka-broker clientSecret: secretName: my-cluster-oauth key: clientSecret validIssuerUri: < https://<auth-server-address>/auth/realms/tls > introspectionEndpointUri: < https://<auth-server-address>/auth/realms/tls/protocol/openid-connect/token/introspect > userNameClaim: preferred_username maxSecondsWithoutReauthentication: 3600 tlsTrustedCertificates: - secretName: oauth-server-cert certificate: ca.crt",
"edit kafka my-cluster",
"# - name: external port: 9094 type: loadbalancer tls: true authentication: type: oauth 1 validIssuerUri: < https://<auth-server-address>/auth/realms/external > 2 jwksEndpointUri: < https://<auth-server-address>/auth/realms/external/protocol/openid-connect/certs > 3 userNameClaim: preferred_username 4 maxSecondsWithoutReauthentication: 3600 5 tlsTrustedCertificates: 6 - secretName: oauth-server-cert certificate: ca.crt disableTlsHostnameVerification: true 7 jwksExpirySeconds: 360 8 jwksRefreshSeconds: 300 9 jwksMinRefreshPauseSeconds: 1 10",
"- name: external port: 9094 type: loadbalancer tls: true authentication: type: oauth validIssuerUri: < https://<auth-server-address>/auth/realms/external > introspectionEndpointUri: < https://<auth-server-address>/auth/realms/external/protocol/openid-connect/token/introspect > 1 clientId: kafka-broker 2 clientSecret: 3 secretName: my-cluster-oauth key: clientSecret userNameClaim: preferred_username 4 maxSecondsWithoutReauthentication: 3600 5",
"authentication: type: oauth # checkIssuer: false 1 checkAudience: true 2 fallbackUserNameClaim: client_id 3 fallbackUserNamePrefix: client-account- 4 validTokenType: bearer 5 userInfoEndpointUri: https://OAUTH-SERVER-ADDRESS/auth/realms/external/protocol/openid-connect/userinfo 6 enableOauthBearer: false 7 enablePlain: true 8 tokenEndpointUri: https://OAUTH-SERVER-ADDRESS/auth/realms/external/protocol/openid-connect/token 9 customClaimCheck: \"@.custom == 'custom-value'\" 10 clientAudience: AUDIENCE 11 clientScope: SCOPE 12",
"logs -f USD{POD_NAME} -c USD{CONTAINER_NAME} get pod -w",
"<dependency> <groupId>io.strimzi</groupId> <artifactId>kafka-oauth-client</artifactId> <version>{oauth-version}</version> </dependency>",
"System.setProperty(ClientConfig.OAUTH_TOKEN_ENDPOINT_URI, \" https://<auth-server-address>/auth/realms/master/protocol/openid-connect/token \"); 1 System.setProperty(ClientConfig.OAUTH_CLIENT_ID, \" <client-name> \"); 2 System.setProperty(ClientConfig.OAUTH_CLIENT_SECRET, \" <client-secret> \"); 3",
"props.put(\"sasl.jaas.config\", \"org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required;\"); props.put(\"security.protocol\", \"SASL_SSL\"); 1 props.put(\"sasl.mechanism\", \"OAUTHBEARER\"); props.put(\"sasl.login.callback.handler.class\", \"io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler\");",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Secret metadata: name: my-bridge-oauth type: Opaque data: clientSecret: MGQ1OTRmMzYtZTllZS00MDY2LWI5OGEtMTM5MzM2NjdlZjQw 1",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # authentication: type: oauth 1 tokenEndpointUri: https://<auth-server-address>/auth/realms/master/protocol/openid-connect/token 2 clientId: kafka-bridge clientSecret: secretName: my-bridge-oauth key: clientSecret tlsTrustedCertificates: 3 - secretName: oauth-server-cert certificate: tls.crt",
"spec: # authentication: # disableTlsHostnameVerification: true 1 checkAccessTokenType: false 2 accessTokenIsJwt: false 3 scope: any 4 audience: kafka 5",
"apply -f your-file",
"logs -f USD{POD_NAME} -c USD{CONTAINER_NAME} get pod -w",
"edit kafka my-cluster",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # authorization: type: keycloak 1 tokenEndpointUri: < https://<auth-server-address>/auth/realms/external/protocol/openid-connect/token > 2 clientId: kafka 3 delegateToKafkaAcls: false 4 disableTlsHostnameVerification: false 5 superUsers: 6 - CN=fred - sam - CN=edward tlsTrustedCertificates: 7 - secretName: oauth-server-cert certificate: ca.crt grantsRefreshPeriodSeconds: 60 8 grantsRefreshPoolSize: 5 9 #",
"logs -f USD{POD_NAME} -c kafka get pod -w",
"Topic:my-topic Topic:orders-* Group:orders-* Cluster:*",
"kafka-cluster:my-cluster,Topic:* kafka-cluster:*,Group:b_*",
"bin/kafka-topics.sh --create --topic my-topic --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties",
"bin/kafka-topics.sh --list --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties",
"bin/kafka-topics.sh --describe --topic my-topic --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties",
"bin/kafka-console-producer.sh --topic my-topic --broker-list my-cluster-kafka-bootstrap:9092 --producer.config=/tmp/config.properties",
"Topic:my-topic Group:my-group-*",
"bin/kafka-console-consumer.sh --topic my-topic --group my-group-1 --from-beginning --bootstrap-server my-cluster-kafka-bootstrap:9092 --consumer.config /tmp/config.properties",
"Topic:my-topic Cluster:kafka-cluster",
"bin/kafka-console-producer.sh --topic my-topic --broker-list my-cluster-kafka-bootstrap:9092 --producer.config=/tmp/config.properties --producer-property enable.idempotence=true --request-required-acks -1",
"bin/kafka-consumer-groups.sh --list --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties",
"bin/kafka-consumer-groups.sh --describe --group my-group-1 --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties",
"bin/kafka-topics.sh --alter --topic my-topic --partitions 2 --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties",
"bin/kafka-configs.sh --entity-type brokers --entity-name 0 --describe --all --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties",
"bin/kafka-configs --entity-type brokers --entity-name 0 --alter --add-config log.cleaner.threads=2 --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties",
"bin/kafka-topics.sh --delete --topic my-topic --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties",
"bin/kafka-leader-election.sh --topic my-topic --partition 0 --election-type PREFERRED / --bootstrap-server my-cluster-kafka-bootstrap:9092 --admin.config /tmp/config.properties",
"bin/kafka-reassign-partitions.sh --topics-to-move-json-file /tmp/topics-to-move.json --broker-list \"0,1\" --generate --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config /tmp/config.properties > /tmp/partition-reassignment.json",
"bin/kafka-reassign-partitions.sh --reassignment-json-file /tmp/partition-reassignment.json --execute --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config /tmp/config.properties",
"bin/kafka-reassign-partitions.sh --reassignment-json-file /tmp/partition-reassignment.json --verify --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config /tmp/config.properties",
"NS=sso get ingress keycloak -n USDNS",
"get -n USDNS pod keycloak-0 -o yaml | less",
"SECRET_NAME=credential-keycloak get -n USDNS secret USDSECRET_NAME -o yaml | grep PASSWORD | awk '{print USD2}' | base64 -D",
"Dev Team A can write to topics that start with x_ on any cluster Dev Team B can read from topics that start with x_ on any cluster Dev Team B can update consumer group offsets that start with x_ on any cluster ClusterManager of my-cluster Group has full access to cluster config on my-cluster ClusterManager of my-cluster Group has full access to consumer groups on my-cluster ClusterManager of my-cluster Group has full access to topics on my-cluster",
"SSO_HOST= SSO-HOSTNAME SSO_HOST_PORT=USDSSO_HOST:443 STOREPASS=storepass echo \"Q\" | openssl s_client -showcerts -connect USDSSO_HOST_PORT 2>/dev/null | awk ' /BEGIN CERTIFICATE/,/END CERTIFICATE/ { print USD0 } ' > /tmp/sso.crt",
"create secret generic oauth-server-cert --from-file=/tmp/sso.crt -n USDNS",
"SSO_HOST= SSO-HOSTNAME",
"cat examples/security/keycloak-authorization/kafka-ephemeral-oauth-single-keycloak-authz.yaml | sed -E 's#\\USD{SSO_HOST}'\"#USDSSO_HOST#\" | oc create -n USDNS -f -",
"NS=sso run -ti --restart=Never --image=registry.redhat.io/amq7/amq-streams-kafka-28-rhel8:1.8.4 kafka-cli -n USDNS -- /bin/sh",
"attach -ti kafka-cli -n USDNS",
"SSO_HOST= SSO-HOSTNAME SSO_HOST_PORT=USDSSO_HOST:443 STOREPASS=storepass echo \"Q\" | openssl s_client -showcerts -connect USDSSO_HOST_PORT 2>/dev/null | awk ' /BEGIN CERTIFICATE/,/END CERTIFICATE/ { print USD0 } ' > /tmp/sso.crt",
"keytool -keystore /tmp/truststore.p12 -storetype pkcs12 -alias sso -storepass USDSTOREPASS -import -file /tmp/sso.crt -noprompt",
"KAFKA_HOST_PORT=my-cluster-kafka-bootstrap:9093 STOREPASS=storepass echo \"Q\" | openssl s_client -showcerts -connect USDKAFKA_HOST_PORT 2>/dev/null | awk ' /BEGIN CERTIFICATE/,/END CERTIFICATE/ { print USD0 } ' > /tmp/my-cluster-kafka.crt",
"keytool -keystore /tmp/truststore.p12 -storetype pkcs12 -alias my-cluster-kafka -storepass USDSTOREPASS -import -file /tmp/my-cluster-kafka.crt -noprompt",
"SSO_HOST= SSO-HOSTNAME cat > /tmp/team-a-client.properties << EOF security.protocol=SASL_SSL ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.mechanism=OAUTHBEARER sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.client.id=\"team-a-client\" oauth.client.secret=\"team-a-client-secret\" oauth.ssl.truststore.location=\"/tmp/truststore.p12\" oauth.ssl.truststore.password=\"USDSTOREPASS\" oauth.ssl.truststore.type=\"PKCS12\" oauth.token.endpoint.uri=\"https://USDSSO_HOST/auth/realms/kafka-authz/protocol/openid-connect/token\" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler EOF",
"cat > /tmp/team-b-client.properties << EOF security.protocol=SASL_SSL ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.mechanism=OAUTHBEARER sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.client.id=\"team-b-client\" oauth.client.secret=\"team-b-client-secret\" oauth.ssl.truststore.location=\"/tmp/truststore.p12\" oauth.ssl.truststore.password=\"USDSTOREPASS\" oauth.ssl.truststore.type=\"PKCS12\" oauth.token.endpoint.uri=\"https://USDSSO_HOST/auth/realms/kafka-authz/protocol/openid-connect/token\" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler EOF",
"USERNAME=alice PASSWORD=alice-password GRANT_RESPONSE=USD(curl -X POST \"https://USDSSO_HOST/auth/realms/kafka-authz/protocol/openid-connect/token\" -H 'Content-Type: application/x-www-form-urlencoded' -d \"grant_type=password&username=USDUSERNAME&password=USDPASSWORD&client_id=kafka-cli&scope=offline_access\" -s -k) REFRESH_TOKEN=USD(echo USDGRANT_RESPONSE | awk -F \"refresh_token\\\":\\\"\" '{printf USD2}' | awk -F \"\\\"\" '{printf USD1}')",
"cat > /tmp/alice.properties << EOF security.protocol=SASL_SSL ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.mechanism=OAUTHBEARER sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.refresh.token=\"USDREFRESH_TOKEN\" oauth.client.id=\"kafka-cli\" oauth.ssl.truststore.location=\"/tmp/truststore.p12\" oauth.ssl.truststore.password=\"USDSTOREPASS\" oauth.ssl.truststore.type=\"PKCS12\" oauth.token.endpoint.uri=\"https://USDSSO_HOST/auth/realms/kafka-authz/protocol/openid-connect/token\" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler EOF",
"bin/kafka-console-producer.sh --broker-list my-cluster-kafka-bootstrap:9093 --topic my-topic --producer.config=/tmp/team-a-client.properties First message",
"bin/kafka-console-producer.sh --broker-list my-cluster-kafka-bootstrap:9093 --topic a_messages --producer.config /tmp/team-a-client.properties First message Second message",
"logs my-cluster-kafka-0 -f -n USDNS",
"bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic a_messages --from-beginning --consumer.config /tmp/team-a-client.properties",
"bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic a_messages --from-beginning --consumer.config /tmp/team-a-client.properties --group a_consumer_group_1",
"bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/team-a-client.properties --list",
"bin/kafka-consumer-groups.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/team-a-client.properties --list",
"bin/kafka-configs.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/team-a-client.properties --entity-type brokers --describe --entity-default",
"bin/kafka-console-producer.sh --broker-list my-cluster-kafka-bootstrap:9093 --topic a_messages --producer.config /tmp/team-b-client.properties Message 1",
"bin/kafka-console-producer.sh --broker-list my-cluster-kafka-bootstrap:9093 --topic b_messages --producer.config /tmp/team-b-client.properties Message 1 Message 2 Message 3",
"bin/kafka-console-producer.sh --broker-list my-cluster-kafka-bootstrap:9093 --topic x_messages --producer.config /tmp/team-b-client.properties Message 1",
"bin/kafka-console-producer.sh --broker-list my-cluster-kafka-bootstrap:9093 --topic x_messages --producer.config /tmp/team-a-client.properties Message 1",
"bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/alice.properties --topic x_messages --create --replication-factor 1 --partitions 1",
"bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/alice.properties --list bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/team-a-client.properties --list bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/team-b-client.properties --list",
"bin/kafka-console-producer.sh --broker-list my-cluster-kafka-bootstrap:9093 --topic x_messages --producer.config /tmp/team-a-client.properties Message 1 Message 2 Message 3",
"bin/kafka-console-producer.sh --broker-list my-cluster-kafka-bootstrap:9093 --topic x_messages --producer.config /tmp/team-b-client.properties Message 4 Message 5",
"bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages --from-beginning --consumer.config /tmp/team-b-client.properties --group x_consumer_group_b",
"bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages --from-beginning --consumer.config /tmp/team-a-client.properties --group x_consumer_group_a",
"bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages --from-beginning --consumer.config /tmp/team-a-client.properties --group a_consumer_group_a",
"bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages --from-beginning --consumer.config /tmp/alice.properties",
"bin/kafka-configs.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/alice.properties --entity-type brokers --describe --entity-default"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_amq_streams_on_openshift/assembly-securing-access-str |
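A note on the token flow used above: the JaasClientOauthLoginCallbackHandler performs a standard OpenID Connect refresh-token grant against the Red Hat Single Sign-On token endpoint and presents the resulting access token to the Kafka broker. As a minimal sketch — assuming the SSO_HOST and REFRESH_TOKEN variables set in the earlier steps and the public kafka-cli client — the same exchange can be run manually with curl to confirm that the refresh token is still valid; the awk extraction mirrors the pattern used earlier in this procedure and is for illustration only.
# Exchange the refresh token for a short-lived access token (refresh_token grant)
TOKEN_RESPONSE=$(curl -s -k -X POST "https://$SSO_HOST/auth/realms/kafka-authz/protocol/openid-connect/token" \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d "grant_type=refresh_token&client_id=kafka-cli&refresh_token=$REFRESH_TOKEN")
# Pull the access_token field out of the JSON response
ACCESS_TOKEN=$(echo $TOKEN_RESPONSE | awk -F "access_token\":\"" '{printf $2}' | awk -F "\"" '{printf $1}')
echo $ACCESS_TOKEN
If the command prints an empty string, re-run the password grant from the earlier step to obtain a fresh refresh token.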
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate and prioritize your feedback regarding our documentation. Provide as much detail as possible, so that your request can be quickly addressed. Prerequisites You are logged in to the Red Hat Customer Portal. Procedure To provide feedback, perform the following steps: Click the following link: Create Issue Describe the issue or enhancement in the Summary text box. Provide details about the issue or requested enhancement in the Description text box. Type your name in the Reporter text box. Click the Create button. This action creates a documentation ticket and routes it to the appropriate documentation team. Thank you for taking the time to provide feedback. | null | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/assessing_and_reporting_malware_signatures_on_rhel_systems/proc-providing-feedback-on-redhat-documentation |
Chapter 6. Reference | Chapter 6. Reference 6.1. Artifact Repository Mirrors A repository in Maven holds build artifacts and dependencies of various types (all the project jars, library jar, plugins or any other project specific artifacts). It also specifies locations from where to download artifacts from, while performing the S2I build. Besides using central repositories, it is a common practice for organizations to deploy a local custom repository (mirror). Benefits of using a mirror are: Availability of a synchronized mirror, which is geographically closer and faster. Ability to have greater control over the repository content. Possibility to share artifacts across different teams (developers, CI), without the need to rely on public servers and repositories. Improved build times. Often, a repository manager can serve as local cache to a mirror. Assuming that the repository manager is already deployed and reachable externally at http://10.0.0.1:8080/repository/internal/ , the S2I build can then use this manager by supplying the MAVEN_MIRROR_URL environment variable to the build configuration of the application as follows: Identify the name of the build configuration to apply MAVEN_MIRROR_URL variable against: USD oc get bc -o name buildconfig/sso Update build configuration of sso with a MAVEN_MIRROR_URL environment variable USD oc set env bc/sso \ -e MAVEN_MIRROR_URL="http://10.0.0.1:8080/repository/internal/" buildconfig "sso" updated Verify the setting USD oc set env bc/sso --list # buildconfigs sso MAVEN_MIRROR_URL=http://10.0.0.1:8080/repository/internal/ Schedule new build of the application Note During application build, you will notice that Maven dependencies are pulled from the repository manager, instead of the default public repositories. Also, after the build is finished, you will see that the mirror is filled with all the dependencies that were retrieved and used during the build. 6.2. Environment Variables 6.2.1. Information Environment Variables The following information environment variables are designed to convey information about the image and should not be modified by the user: Table 6.1. Information Environment Variables Variable Name Description Example Value AB_JOLOKIA_AUTH_OPENSHIFT - true AB_JOLOKIA_HTTPS - true AB_JOLOKIA_PASSWORD_RANDOM - true JBOSS_IMAGE_NAME Image name, same as "name" label. rh-sso-7/sso74-openshift-rhel8 JBOSS_IMAGE_VERSION Image version, same as "version" label. 7.4 JBOSS_MODULES_SYSTEM_PKGS - org.jboss.logmanager,jdk.nashorn.api 6.2.2. Configuration Environment Variables Configuration environment variables are designed to conveniently adjust the image without requiring a rebuild, and should be set by the user as desired. Table 6.2. Configuration Environment Variables Variable Name Description Example Value AB_JOLOKIA_AUTH_OPENSHIFT Switch on client authentication for OpenShift TLS communication. The value of this parameter can be a relative distinguished name which must be contained in a presented client's certificate. Enabling this parameter will automatically switch Jolokia into https communication mode. The default CA cert is set to /var/run/secrets/kubernetes.io/serviceaccount/ca.crt . true AB_JOLOKIA_CONFIG If set uses this file (including path) as Jolokia JVM agent properties (as described in Jolokia's reference manual ). If not set, the /opt/jolokia/etc/jolokia.properties file will be created using the settings as defined in this document, otherwise the rest of the settings in this document are ignored. 
/opt/jolokia/custom.properties AB_JOLOKIA_DISCOVERY_ENABLED Enable Jolokia discovery. Defaults to false . true AB_JOLOKIA_HOST Host address to bind to. Defaults to 0.0.0.0 . 127.0.0.1 AB_JOLOKIA_HTTPS Switch on secure communication with https. By default self-signed server certificates are generated if no serverCert configuration is given in AB_JOLOKIA_OPTS . NOTE: If the values is set to an empty string, https is turned off . If the value is set to a non empty string, https is turned on . true AB_JOLOKIA_ID Agent ID to use (USDHOSTNAME by default, which is the container id). openjdk-app-1-xqlsj AB_JOLOKIA_OFF If set disables activation of Jolokia (i.e. echos an empty value). By default, Jolokia is enabled. NOTE: If the values is set to an empty string, https is turned off . If the value is set to a non empty string, https is turned on . true AB_JOLOKIA_OPTS Additional options to be appended to the agent configuration. They should be given in the format "key=value, key=value, ...<200b> " backlog=20 AB_JOLOKIA_PASSWORD Password for basic authentication. By default authentication is switched off. mypassword AB_JOLOKIA_PASSWORD_RANDOM If set, a random value is generated for AB_JOLOKIA_PASSWORD , and it is saved in the /opt/jolokia/etc/jolokia.pw file. true AB_JOLOKIA_PORT Port to use (Default: 8778 ). 5432 AB_JOLOKIA_USER User for basic authentication. Defaults to jolokia . myusername CONTAINER_CORE_LIMIT A calculated core limit as described in CFS Bandwidth Control. 2 GC_ADAPTIVE_SIZE_POLICY_WEIGHT The weighting given to the current Garbage Collection (GC) time versus GC times. 90 GC_MAX_HEAP_FREE_RATIO Maximum percentage of heap free after GC to avoid shrinking. 40 GC_MAX_METASPACE_SIZE The maximum metaspace size. 100 GC_TIME_RATIO_MIN_HEAP_FREE_RATIO Minimum percentage of heap free after GC to avoid expansion. 20 GC_TIME_RATIO Specifies the ratio of the time spent outside the garbage collection (for example, the time spent for application execution) to the time spent in the garbage collection. 4 JAVA_DIAGNOSTICS Set this to get some diagnostics information to standard out when things are happening. true JAVA_INITIAL_MEM_RATIO This is used to calculate a default initial heap memory based the maximal heap memory. The default is 100 which means 100% of the maximal heap is used for the initial heap size. You can skip this mechanism by setting this value to 0 in which case no -Xms option is added. 100 JAVA_MAX_MEM_RATIO It is used to calculate a default maximal heap memory based on a containers restriction. If used in a Docker container without any memory constraints for the container then this option has no effect. If there is a memory constraint then -Xmx is set to a ratio of the container available memory as set here. The default is 50 which means 50% of the available memory is used as an upper boundary. You can skip this mechanism by setting this value to 0 in which case no -Xmx option is added. 40 JAVA_OPTS_APPEND Server startup options. -Dkeycloak.migration.action=export -Dkeycloak.migration.provider=dir -Dkeycloak.migration.dir=/tmp MQ_SIMPLE_DEFAULT_PHYSICAL_DESTINATION For backwards compatability, set to true to use MyQueue and MyTopic as physical destination name defaults instead of queue/MyQueue and topic/MyTopic . false OPENSHIFT_KUBE_PING_LABELS Clustering labels selector. app=sso-app OPENSHIFT_KUBE_PING_NAMESPACE Clustering project namespace. 
myproject SCRIPT_DEBUG If set to true , ensurses that the bash scripts are executed with the -x option, printing the commands and their arguments as they are executed. true SSO_ADMIN_PASSWORD Password of the administrator account for the master realm of the Red Hat Single Sign-On server. Required. If no value is specified, it is auto generated and displayed as an OpenShift Instructional message when the template is instantiated. adm-password SSO_ADMIN_USERNAME Username of the administrator account for the master realm of the Red Hat Single Sign-On server. Required. If no value is specified, it is auto generated and displayed as an OpenShift Instructional message when the template is instantiated. admin SSO_HOSTNAME Custom hostname for the Red Hat Single Sign-On server. Not set by default . If not set, the request hostname SPI provider, which uses the request headers to determine the hostname of the Red Hat Single Sign-On server is used. If set, the fixed hostname SPI provider, with the hostname of the Red Hat Single Sign-On server set to the provided variable value, is used. See dedicated Customizing Hostname for the Red Hat Single Sign-On Server section for additional steps to be performed, when SSO_HOSTNAME variable is set. rh-sso-server.openshift.example.com SSO_REALM Name of the realm to be created in the Red Hat Single Sign-On server if this environment variable is provided. demo SSO_SERVICE_PASSWORD The password for the Red Hat Single Sign-On service user. mgmt-password SSO_SERVICE_USERNAME The username used to access the Red Hat Single Sign-On service. This is used by clients to create the application client(s) within the specified Red Hat Single Sign-On realm. This user is created if this environment variable is provided. sso-mgmtuser SSO_TRUSTSTORE The name of the truststore file within the secret. truststore.jks SSO_TRUSTSTORE_DIR Truststore directory. /etc/sso-secret-volume SSO_TRUSTSTORE_PASSWORD The password for the truststore and certificate. mykeystorepass SSO_TRUSTSTORE_SECRET The name of the secret containing the truststore file. Used for sso-truststore-volume volume. truststore-secret Available application templates for Red Hat Single Sign-On for OpenShift can combine the aforementioned configuration variables with common OpenShift variables (for example APPLICATION_NAME or SOURCE_REPOSITORY_URL ), product specific variables (e.g. HORNETQ_CLUSTER_PASSWORD ), or configuration variables typical to database images (e.g. POSTGRESQL_MAX_CONNECTIONS ) yet. All of these different types of configuration variables can be adjusted as desired to achieve the deployed Red Hat Single Sign-On-enabled application will align with the intended use case as much as possible. The list of configuration variables, available for each category of application templates for Red Hat Single Sign-On-enabled applications, is described below. 6.2.3. Template variables for all Red Hat Single Sign-On images Table 6.3. Configuration Variables Available For All Red Hat Single Sign-On Images Variable Description APPLICATION_NAME The name for the application. DB_MAX_POOL_SIZE Sets xa-pool/max-pool-size for the configured datasource. DB_TX_ISOLATION Sets transaction-isolation for the configured datasource. DB_USERNAME Database user name. HOSTNAME_HTTP Custom hostname for http service route. Leave blank for default hostname, e.g.: <application-name>.<project>.<default-domain-suffix> . HOSTNAME_HTTPS Custom hostname for https service route. 
Leave blank for default hostname, e.g.: <application-name>.<project>.<default-domain-suffix> . HTTPS_KEYSTORE The name of the keystore file within the secret. If defined along with HTTPS_PASSWORD and HTTPS_NAME , enable HTTPS and set the SSL certificate key file to a relative path under USDJBOSS_HOME/standalone/configuration . HTTPS_KEYSTORE_TYPE The type of the keystore file (JKS or JCEKS). HTTPS_NAME The name associated with the server certificate (e.g. jboss ). If defined along with HTTPS_PASSWORD and HTTPS_KEYSTORE , enable HTTPS and set the SSL name. HTTPS_PASSWORD The password for the keystore and certificate (e.g. mykeystorepass ). If defined along with HTTPS_NAME and HTTPS_KEYSTORE , enable HTTPS and set the SSL key password. HTTPS_SECRET The name of the secret containing the keystore file. IMAGE_STREAM_NAMESPACE Namespace in which the ImageStreams for Red Hat Middleware images are installed. These ImageStreams are normally installed in the openshift namespace. You should only need to modify this if you've installed the ImageStreams in a different namespace/project. JGROUPS_CLUSTER_PASSWORD JGroups cluster password. JGROUPS_ENCRYPT_KEYSTORE The name of the keystore file within the secret. JGROUPS_ENCRYPT_NAME The name associated with the server certificate (e.g. secret-key ). JGROUPS_ENCRYPT_PASSWORD The password for the keystore and certificate (e.g. password ). JGROUPS_ENCRYPT_SECRET The name of the secret containing the keystore file. SSO_ADMIN_USERNAME Username of the administrator account for the master realm of the Red Hat Single Sign-On server. Required. If no value is specified, it is auto generated and displayed as an OpenShift instructional message when the template is instantiated. SSO_ADMIN_PASSWORD Password of the administrator account for the master realm of the Red Hat Single Sign-On server. Required. If no value is specified, it is auto generated and displayed as an OpenShift instructional message when the template is instantiated. SSO_REALM Name of the realm to be created in the Red Hat Single Sign-On server if this environment variable is provided. SSO_SERVICE_USERNAME The username used to access the Red Hat Single Sign-On service. This is used by clients to create the application client(s) within the specified Red Hat Single Sign-On realm. This user is created if this environment variable is provided. SSO_SERVICE_PASSWORD The password for the Red Hat Single Sign-On service user. SSO_TRUSTSTORE The name of the truststore file within the secret. SSO_TRUSTSTORE_SECRET The name of the secret containing the truststore file. Used for sso-truststore-volume volume. SSO_TRUSTSTORE_PASSWORD The password for the truststore and certificate. 6.2.4. Template variables specific to sso74-postgresql , sso74-postgresql-persistent , and sso74-x509-postgresql-persistent Table 6.4. Configuration Variables Specific To Red Hat Single Sign-On-enabled PostgreSQL Applications With Ephemeral Or Persistent Storage Variable Description DB_USERNAME Database user name. DB_PASSWORD Database user password. DB_JNDI Database JNDI name used by application to resolve the datasource, e.g. java:/jboss/datasources/postgresql POSTGRESQL_MAX_CONNECTIONS The maximum number of client connections allowed. This also sets the maximum number of prepared transactions. POSTGRESQL_SHARED_BUFFERS Configures how much memory is dedicated to PostgreSQL for caching data. 6.2.5. Template variables for general eap64 and eap71 S2I images Table 6.5. 
Configuration Variables For EAP 6.4 and EAP 7 Applications Built Via S2I Variable Description APPLICATION_NAME The name for the application. ARTIFACT_DIR Artifacts directory. AUTO_DEPLOY_EXPLODED Controls whether exploded deployment content should be automatically deployed. CONTEXT_DIR Path within Git project to build; empty for root project directory. GENERIC_WEBHOOK_SECRET Generic build trigger secret. GITHUB_WEBHOOK_SECRET GitHub trigger secret. HORNETQ_CLUSTER_PASSWORD HornetQ cluster administrator password. HORNETQ_QUEUES Queue names. HORNETQ_TOPICS Topic names. HOSTNAME_HTTP Custom host name for http service route. Leave blank for default host name, e.g.: <application-name>.<project>.<default-domain-suffix> . HOSTNAME_HTTPS Custom host name for https service route. Leave blank for default host name, e.g.: <application-name>.<project>.<default-domain-suffix> . HTTPS_KEYSTORE_TYPE The type of the keystore file (JKS or JCEKS). HTTPS_KEYSTORE The name of the keystore file within the secret. If defined along with HTTPS_PASSWORD and HTTPS_NAME , enable HTTPS and set the SSL certificate key file to a relative path under USDJBOSS_HOME/standalone/configuration . HTTPS_NAME The name associated with the server certificate (e.g. jboss ). If defined along with HTTPS_PASSWORD and HTTPS_KEYSTORE , enable HTTPS and set the SSL name. HTTPS_PASSWORD The password for the keystore and certificate (e.g. mykeystorepass ). If defined along with HTTPS_NAME and HTTPS_KEYSTORE , enable HTTPS and set the SSL key password. HTTPS_SECRET The name of the secret containing the keystore file. IMAGE_STREAM_NAMESPACE Namespace in which the ImageStreams for Red Hat Middleware images are installed. These ImageStreams are normally installed in the openshift namespace. You should only need to modify this if you've installed the ImageStreams in a different namespace/project. JGROUPS_CLUSTER_PASSWORD JGroups cluster password. JGROUPS_ENCRYPT_KEYSTORE The name of the keystore file within the secret. JGROUPS_ENCRYPT_NAME The name associated with the server certificate (e.g. secret-key ). JGROUPS_ENCRYPT_PASSWORD The password for the keystore and certificate (e.g. password ). JGROUPS_ENCRYPT_SECRET The name of the secret containing the keystore file. SOURCE_REPOSITORY_REF Git branch/tag reference. SOURCE_REPOSITORY_URL Git source URI for application. 6.2.6. Template variables specific to eap64-sso-s2i and eap71-sso-s2i for automatic client registration Table 6.6. Configuration Variables For EAP 6.4 and EAP 7 Red Hat Single Sign-On-enabled Applications Built Via S2I Variable Description SSO_URL Red Hat Single Sign-On server location. SSO_REALM Name of the realm to be created in the Red Hat Single Sign-On server if this environment variable is provided. SSO_USERNAME The username used to access the Red Hat Single Sign-On service. This is used to create the application client(s) within the specified Red Hat Single Sign-On realm. This should match the SSO_SERVICE_USERNAME specified through one of the sso74- templates. SSO_PASSWORD The password for the Red Hat Single Sign-On service user. SSO_PUBLIC_KEY Red Hat Single Sign-On public key. Public key is recommended to be passed into the template to avoid man-in-the-middle security attacks. SSO_SECRET The Red Hat Single Sign-On client secret for confidential access. SSO_SERVICE_URL Red Hat Single Sign-On service location. SSO_TRUSTSTORE_SECRET The name of the secret containing the truststore file. Used for sso-truststore-volume volume. 
SSO_TRUSTSTORE The name of the truststore file within the secret. SSO_TRUSTSTORE_PASSWORD The password for the truststore and certificate. SSO_BEARER_ONLY Red Hat Single Sign-On client access type. SSO_DISABLE_SSL_CERTIFICATE_VALIDATION If true SSL communication between EAP and the Red Hat Single Sign-On Server is insecure (i.e. certificate validation is disabled with curl) SSO_ENABLE_CORS Enable CORS for Red Hat Single Sign-On applications. 6.2.7. Template variables specific to eap64-sso-s2i and eap71-sso-s2i for automatic client registration with SAML clients Table 6.7. Configuration Variables For EAP 6.4 and EAP 7 Red Hat Single Sign-On-enabled Applications Built Via S2I Using SAML Protocol Variable Description SSO_SAML_CERTIFICATE_NAME The name associated with the server certificate. SSO_SAML_KEYSTORE_PASSWORD The password for the keystore and certificate. SSO_SAML_KEYSTORE The name of the keystore file within the secret. SSO_SAML_KEYSTORE_SECRET The name of the secret containing the keystore file. SSO_SAML_LOGOUT_PAGE Red Hat Single Sign-On logout page for SAML applications. 6.3. Exposed Ports Port Number Description 8443 HTTPS 8778 Jolokia monitoring | [
"oc get bc -o name buildconfig/sso",
"oc set env bc/sso -e MAVEN_MIRROR_URL=\"http://10.0.0.1:8080/repository/internal/\" buildconfig \"sso\" updated",
"oc set env bc/sso --list buildconfigs sso MAVEN_MIRROR_URL=http://10.0.0.1:8080/repository/internal/"
] | https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.4/html/red_hat_single_sign-on_for_openshift_on_openjdk/reference |
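The configuration environment variables listed above can be adjusted in much the same way as the MAVEN_MIRROR_URL example, except that they are set on the server's deployment configuration rather than on the build configuration. A minimal sketch, assuming the deployment configuration created by the template is named sso and that the example values suit your environment:
# Inspect the variables currently set on the server deployment
oc set env dc/sso --list
# Adjust JVM sizing and enable verbose startup script output; this triggers a new deployment
oc set env dc/sso \
  -e JAVA_MAX_MEM_RATIO=40 \
  -e GC_MAX_METASPACE_SIZE=256 \
  -e SCRIPT_DEBUG=true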
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/configuring_openshift_data_foundation_disaster_recovery_for_openshift_workloads/making-open-source-more-inclusive |
8.6. Receive-Side Scaling (RSS) | 8.6. Receive-Side Scaling (RSS) Receive-Side Scaling (RSS), also known as multi-queue receive, distributes network receive processing across several hardware-based receive queues, allowing inbound network traffic to be processed by multiple CPUs. RSS can be used to relieve bottlenecks in receive interrupt processing caused by overloading a single CPU, and to reduce network latency. To determine whether your network interface card supports RSS, check whether multiple interrupt request queues are associated with the interface in /proc/interrupts . For example, if you are interested in the p1p1 interface: The preceding output shows that the NIC driver created 6 receive queues for the p1p1 interface ( p1p1-0 through p1p1-5 ). It also shows how many interrupts were processed by each queue, and which CPU serviced the interrupt. In this case, there are 6 queues because by default, this particular NIC driver creates one queue per CPU, and this system has 6 CPUs. This is a fairly common pattern amongst NIC drivers. Alternatively, you can check the output of ls -1 /sys/devices/*/*/ device_pci_address /msi_irqs after the network driver is loaded. For example, if you are interested in a device with a PCI address of 0000:01:00.0 , you can list the interrupt request queues of that device with the following command: RSS is enabled by default. The number of queues (or the CPUs that should process network activity) for RSS are configured in the appropriate network device driver. For the bnx2x driver, it is configured in num_queues . For the sfc driver, it is configured in the rss_cpus parameter. Regardless, it is typically configured in /sys/class/net/ device /queues/ rx-queue / , where device is the name of the network device (such as eth1 ) and rx-queue is the name of the appropriate receive queue. When configuring RSS, Red Hat recommends limiting the number of queues to one per physical CPU core. Hyper-threads are often represented as separate cores in analysis tools, but configuring queues for all cores including logical cores such as hyper-threads has not proven beneficial to network performance. When enabled, RSS distributes network processing equally between available CPUs based on the amount of processing each CPU has queued. However, you can use the ethtool --show-rxfh-indir and --set-rxfh-indir parameters to modify how network activity is distributed, and weight certain types of network activity as more important than others. The irqbalance daemon can be used in conjunction with RSS to reduce the likelihood of cross-node memory transfers and cache line bouncing. This lowers the latency of processing network packets. If both irqbalance and RSS are in use, lowest latency is achieved by ensuring that irqbalance directs interrupts associated with a network device to the appropriate RSS queue. | [
"egrep 'CPU|p1p1' /proc/interrupts CPU0 CPU1 CPU2 CPU3 CPU4 CPU5 89: 40187 0 0 0 0 0 IR-PCI-MSI-edge p1p1-0 90: 0 790 0 0 0 0 IR-PCI-MSI-edge p1p1-1 91: 0 0 959 0 0 0 IR-PCI-MSI-edge p1p1-2 92: 0 0 0 3310 0 0 IR-PCI-MSI-edge p1p1-3 93: 0 0 0 0 622 0 IR-PCI-MSI-edge p1p1-4 94: 0 0 0 0 0 2475 IR-PCI-MSI-edge p1p1-5",
"ls -1 /sys/devices/*/*/0000:01:00.0/msi_irqs 101 102 103 104 105 106 107 108 109"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/performance_tuning_guide/network-rss |
25.2. Prerequisites for Using Vaults | 25.2. Prerequisites for Using Vaults To enable vaults, install the Key Recovery Authority (KRA) Certificate System component on one or more of the servers in your IdM domain: Note To make the Vault service highly available, install the KRA on two IdM servers or more. | [
"ipa-kra-install"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/vault-prereqs |
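After the KRA is installed, the vault commands become available in the IdM command line. As a quick check — run from an enrolled client with a valid admin Kerberos ticket; the vault name my_vault is only an example:
# Confirm that the vault service configuration is available
ipa vaultconfig-show
# Create a first standard user vault and list it
ipa vault-add my_vault --type standard
ipa vault-find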
Chapter 2. Uploading images to GCP with RHEL image builder | Chapter 2. Uploading images to GCP with RHEL image builder With RHEL image builder, you can build a gce image, provide credentials for your user or GCP service account, and then upload the gce image directly to the GCP environment. 2.1. Configuring and uploading a gce image to GCP by using the CLI Set up a configuration file with credentials to upload your gce image to GCP by using the RHEL image builder CLI. Warning You cannot manually import gce image to GCP, because the image will not boot. You must use either gcloud or RHEL image builder to upload it. Prerequisites You have a valid Google account and credentials to upload your image to GCP. The credentials can be from a user account or a service account. The account associated with the credentials must have at least the following IAM roles assigned: roles/storage.admin - to create and delete storage objects roles/compute.storageAdmin - to import a VM image to Compute Engine. You have an existing GCP bucket. Procedure Use a text editor to create a gcp-config.toml configuration file with the following content: GCP_BUCKET points to an existing bucket. It is used to store the intermediate storage object of the image which is being uploaded. GCP_STORAGE_REGION is both a regular Google storage region and a dual or multi region. OBJECT_KEY is the name of an intermediate storage object. It must not exist before the upload, and it is deleted when the upload process is done. If the object name does not end with .tar.gz , the extension is automatically added to the object name. GCP_CREDENTIALS is a Base64 -encoded scheme of the credentials JSON file downloaded from GCP. The credentials determine which project the GCP uploads the image to. Note Specifying GCP_CREDENTIALS in the gcp-config.toml file is optional if you use a different mechanism to authenticate with GCP. For other authentication methods, see Authenticating with GCP . Retrieve the GCP_CREDENTIALS from the JSON file downloaded from GCP. Create a compose with an additional image name and cloud provider profile: The image build, upload, and cloud registration processes can take up to ten minutes to complete. Verification Verify that the image status is FINISHED: Additional resources Identity and Access Management Create storage buckets 2.2. How RHEL image builder sorts the authentication order of different GCP credentials You can use several different types of credentials with RHEL image builder to authenticate with GCP. If RHEL image builder configuration is set to authenticate with GCP using multiple sets of credentials, it uses the credentials in the following order of preference: Credentials specified with the composer-cli command in the configuration file. Credentials configured in the osbuild-composer worker configuration. Application Default Credentials from the Google GCP SDK library, which tries to automatically find a way to authenticate by using the following options: If the GOOGLE_APPLICATION_CREDENTIALS environment variable is set, Application Default Credentials tries to load and use credentials from the file pointed to by the variable. Application Default Credentials tries to authenticate by using the service account attached to the resource that is running the code. For example, Google Compute Engine VM. Note You must use the GCP credentials to determine which GCP project to upload the image to. 
Therefore, unless you want to upload all of your images to the same GCP project, you always must specify the credentials in the gcp-config.toml configuration file with the composer-cli command. 2.2.1. Specifying GCP credentials with the composer-cli command You can specify GCP authentication credentials in the upload target configuration gcp-config.toml file. Use a Base64 -encoded scheme of the Google account credentials JSON file to save time. Procedure Get the encoded content of the Google account credentials file with the path stored in GOOGLE_APPLICATION_CREDENTIALS environment variable, by running the following command: In the upload target configuration gcp-config.toml file, set the credentials: 2.2.2. Specifying credentials in the osbuild-composer worker configuration You can configure GCP authentication credentials to be used for GCP globally for all image builds. This way, if you want to import images to the same GCP project, you can use the same credentials for all image uploads to GCP. Procedure In the /etc/osbuild-worker/osbuild-worker.toml worker configuration, set the following credential value: | [
"provider = \"gcp\" [settings] bucket = \"GCP_BUCKET\" region = \"GCP_STORAGE_REGION\" object = \"OBJECT_KEY\" credentials = \"GCP_CREDENTIALS\"",
"sudo base64 -w 0 cee-gcp-nasa-476a1fa485b7.json",
"sudo composer-cli compose start BLUEPRINT-NAME gce IMAGE_KEY gcp-config.toml",
"sudo composer-cli compose status",
"base64 -w 0 \"USD{GOOGLE_APPLICATION_CREDENTIALS}\"",
"provider = \"gcp\" [settings] provider = \"gcp\" [settings] credentials = \"GCP_CREDENTIALS\"",
"[gcp] credentials = \" PATH_TO_GCP_ACCOUNT_CREDENTIALS \""
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/deploying_rhel_9_on_google_cloud_platform/assembly_uploading-images-to-gcp-with-image-builder_cloud-content-gcp |
Chapter 5. Customer privacy | Chapter 5. Customer privacy Various Microsoft products have a feature that reports usage statistics, analytics, and various other metrics to Microsoft over the network. Microsoft calls this Telemetry. Red Hat is disabling telemetry because we do not recommend sending customer data to anyone without explicit permission. | null | https://docs.redhat.com/en/documentation/net/9.0/html/release_notes_for_.net_9.0_rpm_packages/customer-privacy_release-notes-for-dotnet-rpms |
Chapter 3. Installing a cluster quickly on GCP | Chapter 3. Installing a cluster quickly on GCP In OpenShift Container Platform version 4.14, you can install a cluster on Google Cloud Platform (GCP) that uses the default configuration options. 3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured a GCP project to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. 3.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 3.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. 
Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 3.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. 
However, you must have an active subscription to access this page. 3.5. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Remove any existing GCP credentials that do not use the service account key for the GCP account that you configured for your cluster and that are stored in the following locations: The GOOGLE_CREDENTIALS , GOOGLE_CLOUD_KEYFILE_JSON , or GCLOUD_KEYFILE_JSON environment variables The ~/.gcp/osServiceAccount.json file The gcloud cli default credentials Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. 2 To view different installation details, specify warn , debug , or error instead of info . When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Provide values at the prompts: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select gcp as the platform to target. If you have not configured the service account key for your GCP account on your host, you must obtain it from GCP and paste the contents of the file or enter the absolute path to the file. Select the project ID to provision the cluster in. The default value is specified by the service account that you configured. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. If you provide a name that is longer than 6 characters, only the first 6 characters will be used in the infrastructure ID that is generated from the cluster name. Paste the pull secret from Red Hat OpenShift Cluster Manager . Optional: You can reduce the number of permissions for the service account that you used to install the cluster. If you assigned the Owner role to your service account, you can remove that role and replace it with the Viewer role. If you included the Service Account Key Admin role, you can remove it. 
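The credential cleanup and installer invocation described above can be summarized as follows; the installation directory name is only an example, and each environment variable needs unsetting only if it is actually set:

# Remove GCP credentials that do not belong to the service account configured for this cluster.
unset GOOGLE_CREDENTIALS GOOGLE_CLOUD_KEYFILE_JSON GCLOUD_KEYFILE_JSON
rm -f ~/.gcp/osServiceAccount.json
# If the gcloud CLI holds default credentials for another account, revoke or switch them as well.

# Initialize the cluster deployment from an empty directory that has the execute permission.
mkdir -p ./ocp-gcp-install
./openshift-install create cluster --dir ./ocp-gcp-install --log-level=info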
Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.6. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . 
To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 3.7. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 3.8. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 3.9. steps Customize your cluster . If necessary, you can opt out of remote health reporting . | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_gcp/installing-gcp-default |
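If you hit the expired-certificate recovery case described in the installation chapter above and need to approve the pending node-bootstrapper certificate signing requests manually, a minimal sketch (the CSR name is a placeholder):

# List certificate signing requests and look for entries in the Pending state.
oc get csr

# Approve an individual pending request by name.
oc adm certificate approve <csr_name>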
Chapter 145. KafkaNodePoolSpec schema reference | Chapter 145. KafkaNodePoolSpec schema reference Used in: KafkaNodePool Property Description replicas The number of pods in the pool. integer storage Storage configuration (disk). Cannot be updated. The type depends on the value of the storage.type property within the given object, which must be one of [ephemeral, persistent-claim, jbod]. EphemeralStorage , PersistentClaimStorage , JbodStorage roles The roles that the nodes in this pool will have when KRaft mode is enabled. Supported values are 'broker' and 'controller'. This field is required. When KRaft mode is disabled, the only allowed value is broker . string (one or more of [controller, broker]) array resources CPU and memory resources to reserve. For more information, see the external documentation for core/v1 resourcerequirements . ResourceRequirements jvmOptions JVM Options for pods. JvmOptions template Template for pool resources. The template allows users to specify how the resources belonging to this pool are generated. KafkaNodePoolTemplate | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-KafkaNodePoolSpec-reference
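To make the schema above concrete, a minimal KafkaNodePool sketch follows. The apiVersion, the strimzi.io/cluster label, and the storage and resource sizes are illustrative assumptions, not values taken from this reference:

# Apply a hypothetical node pool for an existing Kafka cluster named "my-cluster".
oc apply -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1beta2   # assumed API version
kind: KafkaNodePool
metadata:
  name: pool-a
  labels:
    strimzi.io/cluster: my-cluster     # assumed label linking the pool to its Kafka cluster
spec:
  replicas: 3
  roles:
    - broker
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
  resources:
    requests:
      cpu: "1"
      memory: 4Gi
    limits:
      cpu: "2"
      memory: 4Gi
EOF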
Chapter 4. Exporting inventory data | Chapter 4. Exporting inventory data You can use the export service for inventory to export a list of systems and their data from your Insights inventory. You can specify CSV or JSON as the output format. The export process takes place asynchronously, so it runs in the background. The service is available in both the Insights UI and through the export service API. The exported content includes the following information about each system in your inventory: host_id fqdn (Fully Qualified Domain Name) display_name group_id group_name state os_release updated subscription_manager_id satellite_id tags host_type Note The export service currently exports information about all systems in your inventory. Support for filters will be available in a future release. The Inventory export service works differently from the export function in other services, such as Advisor. Some of the differences are: Inventory export operates asynchronously Exports the entire inventory to one continuous file (no pagination in the export file) Retains generated files for 7 days Uses token-based service accounts for authorization if using the export service API Important Your RBAC permissions affect the system information you can export. You must have inventory:hosts:read permission for a system to export system information. 4.1. Inventory data files The inventory export process creates and downloads a zip file. The zip file contains the following files: id .suffix - the export data file, with the file name format of id .json for JSON files, or id .csv for CSV files. For example: f26a57ac-1efc-4831-9c26-c818b6060ddf.json README.md - the export manifest for the JSON/CSV file, which lists the downloaded files, any errors, and instructions for obtaining help meta.json - describes the export operation - requestor, date, Organization ID, and file metadata (such as the filename of the JSON/CSV file) 4.2. Exporting system inventory from the Insights UI You can export inventory data from the Insights UI. The inventory data export service works differently from the export service for other Insights services, such as Advisor. Prerequisites RBAC permissions for the systems you want to view and export Inventory:hosts:read (inventory:hosts:read * for all systems in inventory) A User Access role for workspaces. For more information about User Access roles, see User access to workspaces . Procedure Navigate to Inventory > Systems. The list of systems displays. Click the Export icon to the options icon (...). The drop-down menu displays. Select CSV or JSON as the export format. A status message displays: Preparing export. Once complete, your download will start automatically. When the download completes, a browser window automatically opens to display the results. If you remain on the Systems page after requesting the download, status messages from Insights appear with updates on the progress of the export operation. 4.3. Exporting system inventory using the export API You can use the Export API to export your inventory data. Use the REST API entry point: console.redhat.com/api/export/v1 . The Export Service API supports the GET, POST, and DELETE HTTP methods. The API offers the following services: POST /exports GET /exports GET /exports/ id DELETE /exports/ id GET /exports/ id /status The API works asynchronously. You can submit the POST /exports request for export from the Export API and receive a reply with an ID for that export. 
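A rough curl sketch of that exchange follows; the bearer-token header is an assumption about how a token-based service account presents its credentials, and jq is used only to extract the id field from the reply:

EXPORT_API=https://console.redhat.com/api/export/v1
TOKEN=<service_account_access_token>   # assumed: an access token issued for your service account

# Submit the export request and capture the id field from the response.
EXPORT_ID=$(curl -s -X POST "$EXPORT_API/exports" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"name":"Inventory Export","format":"json","sources":[{"application":"urn:redhat:application:inventory","resource":"urn:redhat:application:inventory:export:systems"}]}' \
  | jq -r '.id')

# Check progress of the asynchronous export (GET /exports/{id}/status).
curl -s -H "Authorization: Bearer $TOKEN" "$EXPORT_API/exports/$EXPORT_ID/status"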
You can then use that ID to monitor the progress of the export operation with the GET /exports/ id /status request. When the generated export is complete, you can download it (GET /exports/ id ) or delete it (DELETE /exports/ id ). Successful requests return the following responses: 200 - Success 202 - Successfully deleted (for the DELETE method) For more information about the operations, schemas, and objects, see Consoledot Export Service . 4.3.1. Requesting the system inventory export Before you can request the exported data file, you need to obtain a unique ID for the download. To obtain the ID, issue a POST request. The server returns a response that includes the ID. Use the ID in any request that requires the id parameter, such as GET /exports/ id . Prerequisites Token-based service account with the appropriate permissions for your systems RBAC permissions for the systems you want to view and export Inventory:hosts:read (inventory:hosts:read * for all systems in inventory) A User Access role for workspaces. For more information about User Access roles, see User access to workspaces . Procedure Create a request for the export service, or use this sample request code: { "name": "Inventory Export", "format": "json", "sources": [ { "application": "urn:redhat:application:inventory", "resource": "urn:redhat:application:inventory:export:systems" } ] } Note You can request CSV or JSON as your export format. In the Hybrid Cloud Console, navigate to the API documentation: https://console.redhat.com/docs/api/export . Note You can use the API documentation to experiment and run queries against the API before writing your own custom client and/or use the APIs in your automation. Select POST /export. Remove the existing sample code in the Request Body window and paste the request code into the window. Click Execute . This request initiates the export process. The curl request and server response appear, along with the result codes for the POST operation. Look for the id field in the server response. Copy and save the string value for id . Use this value for id in your requests. Optional. Issue the GET /exports request. The server returns the curl request, request URL, and response codes. Optional. To request the status of the export request, issue the GET /exports/ id /status request. When the export has completed, issue the GET /exports/ id request, with the ID string that you copied in place of id . The server returns a link to download the export file (the payload). Click Download File . When the download completes, a notification message appears in your browser. Click the browser notification to locate the downloaded zip file. Note The server retains export files for 7 days. 4.3.2. Deleting export files To delete exported files, issue the DELETE /exports/ id request. Additional resources Knowledge Base article about inventory export: Ability to export a list of registered inventory systems Export service API for multiple sources: https://developers.redhat.com/api-catalog/api/export-service Export service API doc within the console: https://console.redhat.com/docs/api/export For the latest OpenAPI specifications, see https://swagger.io/specification/ 4.3.3. Automating inventory export using Ansible playbooks You can use an Ansible playbook to automate the inventory export process. The playbook is a generic playbook for the export service that uses token-based service accounts for authentication. Procedure Navigate to https://github.com/jeromemarc/insights-inventory-export . 
Download the inventory-export.yml playbook. Run the playbook. The playbook does everything from requesting the export id , to requesting download status, to requesting the downloaded payload. Additional resources For more information about service accounts, refer to the KB article: Transition of Red Hat Hybrid Cloud Console APIs from basic authentication to token-based authentication via service accounts . 4.3.4. Using the inventory export service for multiple Insights services You can use the inventory export service for multiple services, such as inventory and notifications. To request multiple services, include source information for each service that you want to request in your POST /exports request. For example: { "name": "Inventory Export multiple sources", "format": "json", "sources": [ { "application": "urn:redhat:application:inventory", "resource": "urn:redhat:application:inventory:export:systems", "filters": {} }, { "application": "urn:redhat:application:notifications", "resource": "urn:redhat:application:notifications:export:events", "filters": {} } ] } The POST /exports request returns a unique id for each export. The GET /exports request returns a zip file that includes multiple JSON or CSV files, one for each service that you request. | [
"{ \"name\": \"Inventory Export\", \"format\": \"json\", \"sources\": [ { \"application\": \"urn:redhat:application:inventory\", \"resource\": \"urn:redhat:application:inventory:export:systems\" } ] }",
"{ \"name\": \"Inventory Export multiple sources\", \"format\": \"json\", \"sources\": [ { \"application\": \"urn:redhat:application:inventory\", \"resource\": \"urn:redhat:application:inventory:export:systems\", \"filters\": {} }, { \"application\": \"urn:redhat:application:notifications\", \"resource\": \"urn:redhat:application:notifications:export:events\", \"filters\": {} } ] }"
] | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/viewing_and_managing_system_inventory_with_fedramp/assembly-exporting-inventory-data_user-access |
4.3.5. Tracking Most Frequently Used System Calls | 4.3.5. Tracking Most Frequently Used System Calls timeout.stp from Section 4.3.4, "Monitoring Polling Applications" helps you identify which applications are polling by pointing out which ones used the following system calls most frequently: poll select epoll itimer futex nanosleep signal However, in some systems, a different system call might be responsible for excessive polling. If you suspect that a polling application is using a different system call to poll, you first need to identify the top system calls used by the system. To do this, use topsys.stp . topsys.stp topsys.stp lists the top 20 system calls used by the system per 5-second interval. It also lists how many times each system call was used during that period. Refer to Example 4.15, "topsys.stp Sample Output" for a sample output. Example 4.15. topsys.stp Sample Output | [
"#! /usr/bin/env stap # This script continuously lists the top 20 systemcalls in the interval 5 seconds # global syscalls_count probe syscall.* { syscalls_count[name]++ } function print_systop () { printf (\"%25s %10s\\n\", \"SYSCALL\", \"COUNT\") foreach (syscall in syscalls_count- limit 20) { printf(\"%25s %10d\\n\", syscall, syscalls_count[syscall]) } delete syscalls_count } probe timer.s(5) { print_systop () printf(\"--------------------------------------------------------------\\n\") }",
"-------------------------------------------------------------- SYSCALL COUNT gettimeofday 1857 read 1821 ioctl 1568 poll 1033 close 638 open 503 select 455 write 391 writev 335 futex 303 recvmsg 251 socket 137 clock_gettime 124 rt_sigprocmask 121 sendto 120 setitimer 106 stat 90 time 81 sigreturn 72 fstat 66 --------------------------------------------------------------"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_beginners_guide/topsyssect |
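To try the script above, save the listing to a file and run it with the stap command. This assumes the SystemTap runtime and the matching kernel debuginfo packages are installed and that you run it as root (or as a user in the stapdev or stapusr groups):

# Save the listing above as topsys.stp, then run it; a report is printed every 5 seconds.
stap -v topsys.stp
# Press Ctrl+C to stop collecting.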
5.4. Configuring IPv4 Settings | 5.4. Configuring IPv4 Settings Configuring IPv4 Settings with control-center Procedure Press the Super key to enter the Activities Overview, type Settings and then press Enter . Then, select the Network tab on the left-hand side, and the Network settings tool appears. Proceed to the section called "Configuring New Connections with control-center" . Select the connection that you want to edit and click on the gear wheel icon. The Editing dialog appears. Click the IPv4 menu entry. The IPv4 menu entry allows you to configure the method used to connect to a network, to enter IP address, DNS and route information as required. The IPv4 menu entry is available when you create and modify one of the following connection types: wired, wireless, mobile broadband, VPN or DSL. If you are using DHCP to obtain a dynamic IP address from a DHCP server, you can simply set Addresses to Automatic (DHCP) . If you need to configure static routes, see Section 4.3, "Configuring Static Routes with GUI" . Setting the Method for IPV4 Using nm-connection-editor You can use the nm-connection-editor to edit and configure connection settings. This procedure describes how you can configure the IPv4 settings: Procedure Enter nm-connection-editor in a terminal. For an existing connection type, click the gear wheel icon. Figure 5.2. Editing a connection Click IPv4 Settings . Figure 5.3. Configuring IPv4 Settings Available IPv4 Methods by Connection Type When you click the Method drop-down menu, depending on the type of connection you are configuring, you are able to select one of the following IPv4 connection methods. All of the methods are listed here according to which connection type, or types, they are associated with: Wired, Wireless and DSL Connection Methods Automatic (DHCP) - Choose this option if the network you are connecting to uses a DHCP server to assign IP addresses. You do not need to fill in the DHCP client ID field. Automatic (DHCP) addresses only - Choose this option if the network you are connecting to uses a DHCP server to assign IP addresses but you want to assign DNS servers manually. Manual - Choose this option if you want to assign IP addresses manually. Link-Local Only - Choose this option if the network you are connecting to does not have a DHCP server and you do not want to assign IP addresses manually. Random addresses will be assigned as per RFC 3927 with prefix 169.254/16 . Shared to other computers - Choose this option if the interface you are configuring is for sharing an Internet or WAN connection. The interface is assigned an address in the 10.42.x.1/24 range, a DHCP server and DNS server are started, and the interface is connected to the default network connection on the system with network address translation ( NAT ). Disabled - IPv4 is disabled for this connection. Mobile Broadband Connection Methods Automatic (PPP) - Choose this option if the network you are connecting to assigns your IP address and DNS servers automatically. Automatic (PPP) addresses only - Choose this option if the network you are connecting to assigns your IP address automatically, but you want to manually specify DNS servers. VPN Connection Methods Automatic (VPN) - Choose this option if the network you are connecting to assigns your IP address and DNS servers automatically. Automatic (VPN) addresses only - Choose this option if the network you are connecting to assigns your IP address automatically, but you want to manually specify DNS servers. 
DSL Connection Methods Automatic (PPPoE) - Choose this option if the network you are connecting to assigns your IP address and DNS servers automatically. Automatic (PPPoE) addresses only - Choose this option if the network you are connecting to assigns your IP address automatically, but you want to manually specify DNS servers. If you are using DHCP to obtain a dynamic IP address from a DHCP server, you can simply set Method to Automatic (DHCP) . If you need to configure static routes, click the Routes button and for more details on configuration options, see Section 4.3, "Configuring Static Routes with GUI" . | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-configuring_ipv4_settings |
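The same IPv4 methods can also be applied from the command line with the nmcli utility rather than the GUI tools covered in this section; a brief sketch of a manual (static) configuration, with the connection name and addresses as placeholders:

# Switch an existing connection profile to a static IPv4 configuration.
nmcli connection modify "System eth0" ipv4.method manual \
    ipv4.addresses 192.0.2.10/24 ipv4.gateway 192.0.2.1 ipv4.dns 192.0.2.53
# Re-activate the profile so the new settings take effect.
nmcli connection up "System eth0"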
7.137. nfs-utils-lib | 7.137. nfs-utils-lib 7.137.1. RHBA-2015:1312 - nfs-utils-lib bug fix update Updated nfs-utils-lib packages that fix one bug are now available for Red Hat Enterprise Linux 6. The nfs-utils-lib packages contain support libraries required by the programs in the nfs-utils packages. Bug Fixes BZ# 1129792 Prior to this update, the libnfsidmap library used "nobody@DEFAULTDOMAIN" when performing name lookup, but this did not match the behavior of the rpc.idmapd daemon. As a consequence, the nfsidmap utility did not properly handle situations when "nobody@DEFAULTDOMAIN" did not directly map to any user or group on the system. With this update, libnfsidmap uses the "Nobody-User" and "Nobody-Group" values in the /etc/idmapd.conf file when the default "nobody" user and group are set, and the described problem no longer occurs. BZ# 1223465 The nss_getpwnam() function previously failed to find the intended password entry when the DNS domain name contained both upper-case and lower-case characters. This update ensures that character case is ignored when comparing domain names, and nss_getpwnam() is able to retrieve passwords as expected. Users of nfs-utils-lib are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-nfs-utils-lib |
2.22. RHEA-2011:0626 - new package: osutil | 2.22. RHEA-2011:0626 - new package: osutil A new osutil package is now available. The Operating System Utilities Java Native Interface (JNI) package supplies various native operating system operations to Java programs. This new package adds JNI features that allow Red Hat Enterprise Linux 6 users to use the operating system utility libraries that are made available to java programs using JNI. Red Hat IPA and the Certificate System CA depend on JNI for their interface with the operating system. (BZ# 643543 ) Users are advised to upgrade to this updated package, which resolves this issue. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.1_technical_notes/osutil_new |
Chapter 8. Reference materials | Chapter 8. Reference materials To learn more about the compliance service, see the following resources: Assessing and Monitoring Security Policy Compliance of RHEL Systems with FedRAMP Red Hat Insights for Red Hat Enterprise Linux Documentation Red Hat Insights for Red Hat Enterprise Linux Product Support page | null | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/generating_compliance_service_reports_with_fedramp/assembly-compl-reference-materials |
Chapter 3. SELinux Contexts | Chapter 3. SELinux Contexts Processes and files are labeled with an SELinux context that contains additional information, such as an SELinux user, role, type, and, optionally, a level. When running SELinux, all of this information is used to make access control decisions. In Red Hat Enterprise Linux, SELinux provides a combination of Role-Based Access Control (RBAC), Type Enforcement (TE), and, optionally, Multi-Level Security (MLS). The following is an example showing SELinux context. SELinux contexts are used on processes, Linux users, and files, on Linux operating systems that run SELinux. Use the ls -Z command to view the SELinux context of files and directories: SELinux contexts follow the SELinux user:role:type:level syntax. The fields are as follows: SELinux user The SELinux user identity is an identity known to the policy that is authorized for a specific set of roles, and for a specific MLS/MCS range. Each Linux user is mapped to an SELinux user via SELinux policy. This allows Linux users to inherit the restrictions placed on SELinux users. The mapped SELinux user identity is used in the SELinux context for processes in that session, in order to define what roles and levels they can enter. Run the semanage login -l command as the Linux root user to view a list of mappings between SELinux and Linux user accounts (you need to have the policycoreutils-python package installed): Output may differ slightly from system to system. The Login Name column lists Linux users, and the SELinux User column lists which SELinux user the Linux user is mapped to. For processes, the SELinux user limits which roles and levels are accessible. The last column, MLS/MCS Range , is the level used by Multi-Level Security (MLS) and Multi-Category Security (MCS). role Part of SELinux is the Role-Based Access Control (RBAC) security model. The role is an attribute of RBAC. SELinux users are authorized for roles, and roles are authorized for domains. The role serves as an intermediary between domains and SELinux users. The roles that can be entered determine which domains can be entered; ultimately, this controls which object types can be accessed. This helps reduce vulnerability to privilege escalation attacks. type The type is an attribute of Type Enforcement. The type defines a domain for processes, and a type for files. SELinux policy rules define how types can access each other, whether it be a domain accessing a type, or a domain accessing another domain. Access is only allowed if a specific SELinux policy rule exists that allows it. level The level is an attribute of MLS and MCS. An MLS range is a pair of levels, written as lowlevel-highlevel if the levels differ, or lowlevel if the levels are identical ( s0-s0 is the same as s0 ). Each level is a sensitivity-category pair, with categories being optional. If there are categories, the level is written as sensitivity:category-set . If there are no categories, it is written as sensitivity . If the category set is a contiguous series, it can be abbreviated. For example, c0.c3 is the same as c0,c1,c2,c3 . The /etc/selinux/targeted/setrans.conf file maps levels ( s0:c0 ) to human-readable form (that is CompanyConfidential ). Do not edit setrans.conf with a text editor: use the semanage command to make changes. Refer to the semanage (8) manual page for further information. In Red Hat Enterprise Linux, targeted policy enforces MCS, and in MCS, there is just one sensitivity, s0 . 
MCS in Red Hat Enterprise Linux supports 1024 different categories: c0 through to c1023 . s0-s0:c0.c1023 is sensitivity s0 and authorized for all categories. MLS enforces the Bell-La Padula Mandatory Access Model, and is used in Labeled Security Protection Profile (LSPP) environments. To use MLS restrictions, install the selinux-policy-mls package, and configure MLS to be the default SELinux policy. The MLS policy shipped with Red Hat Enterprise Linux omits many program domains that were not part of the evaluated configuration, and therefore, MLS on a desktop workstation is unusable (no support for the X Window System); however, an MLS policy from the upstream SELinux Reference Policy can be built that includes all program domains. For more information on MLS configuration, refer to Section 5.11, "Multi-Level Security (MLS)" . 3.1. Domain Transitions A process in one domain transitions to another domain by executing an application that has the entrypoint type for the new domain. The entrypoint permission is used in SELinux policy, and controls which applications can be used to enter a domain. The following example demonstrates a domain transition: A user wants to change their password. To do this, they run the passwd application. The /usr/bin/passwd executable is labeled with the passwd_exec_t type: The passwd application accesses /etc/shadow , which is labeled with the shadow_t type: An SELinux policy rule states that processes running in the passwd_t domain are allowed to read and write to files labeled with the shadow_t type. The shadow_t type is only applied to files that are required for a password change. This includes /etc/gshadow , /etc/shadow , and their backup files. An SELinux policy rule states that the passwd_t domain has entrypoint permission to the passwd_exec_t type. When a user runs the passwd application, the user's shell process transitions to the passwd_t domain. With SELinux, since the default action is to deny, and a rule exists that allows (among other things) applications running in the passwd_t domain to access files labeled with the shadow_t type, the passwd application is allowed to access /etc/shadow , and update the user's password. This example is not exhaustive, and is used as a basic example to explain domain transition. Although there is an actual rule that allows subjects running in the passwd_t domain to access objects labeled with the shadow_t file type, other SELinux policy rules must be met before the subject can transition to a new domain. In this example, Type Enforcement ensures: The passwd_t domain can only be entered by executing an application labeled with the passwd_exec_t type; can only execute from authorized shared libraries, such as the lib_t type; and cannot execute any other applications. Only authorized domains, such as passwd_t , can write to files labeled with the shadow_t type. Even if other processes are running with superuser privileges, those processes cannot write to files labeled with the shadow_t type, as they are not running in the passwd_t domain. Only authorized domains can transition to the passwd_t domain. For example, the sendmail process running in the sendmail_t domain does not have a legitimate reason to execute passwd ; therefore, it can never transition to the passwd_t domain. Processes running in the passwd_t domain can only read and write to authorized types, such as files labeled with the etc_t or shadow_t types. This prevents the passwd application from being tricked into reading or writing arbitrary files. | [
"~]USD ls -Z file1 -rwxrw-r-- user1 group1 unconfined_u:object_r:user_home_t:s0 file1",
"~]# semanage login -l Login Name SELinux User MLS/MCS Range __default__ unconfined_u s0-s0:c0.c1023 root unconfined_u s0-s0:c0.c1023 system_u system_u s0-s0:c0.c1023",
"~]USD ls -Z /usr/bin/passwd -rwsr-xr-x root root system_u:object_r:passwd_exec_t:s0 /usr/bin/passwd",
"~]USD ls -Z /etc/shadow -r--------. root root system_u:object_r:shadow_t:s0 /etc/shadow"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security-enhanced_linux/chap-Security-Enhanced_Linux-SELinux_Contexts |
probe::nfsd.rename | probe::nfsd.rename Name probe::nfsd.rename - NFS server renaming a file for client Synopsis nfsd.rename Values tlen length of new file name fh file handler of old path flen length of old file name client_ip the ip address of client filename old file name tname new file name tfh file handler of new path | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-nfsd-rename |
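As a small illustration of this probe point, the one-liner below prints the old and new names for each rename handled by the NFS server. It assumes SystemTap is installed on the NFS server host and that the variable types match the descriptions above (file names as strings, lengths as integers):

# Trace file renames performed by nfsd on behalf of clients.
stap -e 'probe nfsd.rename {
  printf("nfsd rename: %s -> %s (name lengths %d -> %d)\n", filename, tname, flen, tlen)
}'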
Chapter 7. Configuring maximum memory usage for addresses | Chapter 7. Configuring maximum memory usage for addresses AMQ Broker transparently supports huge queues containing millions of messages, even if the machine that is hosting the broker is running with limited memory. In these situations, it might be not possible to store all of the queues in memory at any one time. To protect against excess memory consumption, you can configure the maximum memory usage that is allowed for each address on the broker. In addition, you can specify what action the broker takes when this limit is reached for a given address. In particular, when memory usage for an address reaches the configured limit, you can configure the broker to take one of the following actions: Page messages Silently drop messages Drop messages and notify the sending clients Block clients from sending messages The sections that follow show how to configure maximum memory usage for addresses and the corresponding actions that the broker can take when the limit for an address is reached. Important When you use transactions, the broker might allocate extra memory to ensure transactional consistency. In this case, the memory usage reported by the broker might not reflect the total number of bytes being used in memory. Therefore, if you configure the broker to page, drop, or block messages based on a specified maximum memory usage, you should not also use transactions. 7.1. Configuring message paging For any address that has a maximum memory usage limit specified, you can also specify what action the broker takes when that usage limit is reached. One of the options that you can configure is paging . If you configure the paging option, when the maximum size of an address is reached, the broker starts to store messages for that address on disk, in files known as page files . Each page file has a maximum size that you can configure. Each address that you configure in this way has a dedicated folder in your file system to store paged messages. Both queue browsers and consumers can navigate through page files when inspecting messages in a queue. However, a consumer that is using a very specific filter might not be able to consume a message that is stored in a page file until existing messages in the queue have been consumed first. For example, suppose that a consumer filter includes a string expression such as "color='red'" . If a message that meets this condition follows one million messages with the property "color='blue'" , the consumer cannot consume the message until those with "color='blue'" have been consumed first. The broker transfers (that is, depages ) messages from disk into memory when clients are ready to consume them. The broker removes a page file from disk when all messages in that file have been acknowledged. The procedures that follow show how to configure message paging. 7.1.1. Specifying a paging directory The following procedure shows how to specify the location of the paging directory. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Within the core element, add the paging-directory element. Specify a location for the paging directory in your file system. <configuration ...> <core ...> ... <paging-directory> /path/to/paging-directory </paging-directory> ... </core> </configuration> For each address that you subsequently configure for paging, the broker adds a dedicated directory within the paging directory that you have specified. 7.1.2. 
Configuring an address for paging The following procedure shows how to configure an address for paging. Prerequisites You should be familiar with how to configure addresses and address settings. For more information, see Chapter 4, Configuring addresses and queues . Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. For an address-setting element that you have configured for a matching address or set of addresses, add configuration elements to specify maximum memory usage and define paging behavior. For example: <address-settings> <address-setting match="my.paged.address"> ... <max-size-bytes>104857600</max-size-bytes> <page-size-bytes>10485760</page-size-bytes> <address-full-policy>PAGE</address-full-policy> ... </address-setting> </address-settings> max-size-bytes Maximum size, in bytes, of the memory allowed for the address before the broker executes the policy specified for address-full-policy . The default value is -1 , which means that there is no limit. The value that you specify also supports byte notation such as "K", "MB", and "GB". page-size-bytes Size, in bytes, of each page file used on the paging system. The default value is 10485760 (that is, 10 MiB). The value that you specify also supports byte notation such as "K", "MB", and "GB". address-full-policy Action that the broker takes when then the maximum size for an address has been reached. The default value is PAGE . Valid values are: PAGE The broker pages any further messages to disk. DROP The broker silently drops any further messages. FAIL The broker drops any further messages and issues exceptions to client message producers. BLOCK Client message producers block when they try to send further messages. Additional paging configuration elements that are not shown in the preceding example are described below. page-max-cache-size Number of page files that the broker keeps in memory to optimize IO during paging navigation. The default value is 5 . page-sync-timeout Time, in nanoseconds, between periodic page synchronizations. If you are using an asynchronous IO journal (that is, journal-type is set to ASYNCIO in the broker.xml configuration file), the default value is 3333333 . If you are using a standard Java NIO journal (that is, journal-type is set to NIO ), the default value is the configured value of the journal-buffer-timeout parameter. In the preceding example , when messages sent to the address my.paged.address exceed 104857600 bytes in memory, the broker begins paging. Note If you specify max-size-bytes in an address-setting element, the value applies to each matching address. Specifying this value does not mean that the total size of all matching addresses is limited to the value of max-size-bytes . 7.1.3. Configuring a global paging size Sometimes, configuring a memory limit per address is not practical, for example, when a broker manages many addresses that have different usage patterns. In these situations, you can specify a global memory limit. The global limit is the total amount of memory that the broker can use for all addresses. When this memory limit is reached, the broker executes the policy specified for address-full-policy for the address associated with a new incoming message. The following procedure shows how to configure a global paging size. Prerequisites You should be familiar with how to configure an address for paging. For more information, see Section 7.1.2, "Configuring an address for paging" . Procedure Stop the broker. 
On Linux: On Windows: Open the <broker_instance_dir> /etc/broker.xml configuration file. Within the core element, add the global-max-size element and specify a value. For example: <configuration> <core> ... <global-max-size>1GB</global-max-size> ... </core> </configuration> global-max-size Total amount of memory, in bytes, that the broker can use for all addresses. When this limit is reached, for the address associated with an incoming message, the broker executes the policy that is specified as a value for address-full-policy . The default value of global-max-size is half of the maximum memory available to the Java virtual machine (JVM) that is hosting the broker. The value for global-max-size is in bytes, but also supports byte notation (for example, "K", "Mb", "GB"). In the preceding example, the broker is configured to use a maximum of one gigabyte of available memory when processing messages. Start the broker. On Linux: On Windows: 7.1.4. Limiting disk usage during paging You can limit the amount of physical disk space that the broker can use before it blocks incoming messages rather than paging them. The following procedure shows how to set a limit for disk usage during paging. Procedure Stop the broker. On Linux: On Windows: Open the <broker_instance_dir> /etc/broker.xml configuration file. Within the core element add the max-disk-usage configuration element and specify a value. For example: <configuration> <core> ... <max-disk-usage>50</max-disk-usage> ... </core> </configuration> max-disk-usage Maximum percentage of the available disk space that the broker can use when paging messages. When this limit is reached, the broker blocks incoming messages rather than paging them. The default value is 90 . In the preceding example, the broker is limited to using fifty percent of disk space when paging messages. Start the broker. On Linux: On Windows: 7.2. Configuring message dropping Section 7.1.2, "Configuring an address for paging" shows how to configure an address for paging. As part of that procedure, you set the value of address-full-policy to PAGE . To drop messages (rather than paging them) when an address reaches its specified maximum size, set the value of the address-full-policy to one of the following: DROP When the maximum size of a given address has been reached, the broker silently drops any further messages. FAIL When the maximum size of a given address has been reached, the broker drops any further messages and issues exceptions to producers. 7.3. Configuring message blocking The following procedures show how to configure message blocking when a given address reaches the maximum size limit that you have specified. Note You can configure message blocking only for the Core, OpenWire, and AMQP protocols. 7.3.1. Blocking Core and OpenWire producers The following procedure shows how to configure message blocking for Core and OpenWire message producers when a given address reaches the maximum size limit that you have specified. Prerequisites You should be familiar with how to configure addresses and address settings. For more information, see Chapter 4, Configuring addresses and queues . Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. For an address-setting element that you have configured for a matching address or set of addresses, add configuration elements to define message blocking behavior. For example: <address-settings> <address-setting match="my.blocking.address"> ... 
<max-size-bytes>300000</max-size-bytes> <address-full-policy>BLOCK</address-full-policy> ... </address-setting> </address-settings> max-size-bytes Maximum size, in bytes, of the memory allowed for the address before the broker executes the policy specified for address-full-policy . The value that you specify also supports byte notation such as "K", "MB", and "GB". Note If you specify max-size-bytes in an address-setting element, the value applies to each matching address. Specifying this value does not mean that the total size of all matching addresses is limited to the value of max-size-bytes . address-full-policy Action that the broker takes when then the maximum size for an address has been reached. In the preceding example, when messages sent to the address my.blocking.address exceed 300000 bytes in memory, the broker begins blocking further messages from Core or OpenWire message producers. 7.3.2. Blocking AMQP producers Protocols such as Core and OpenWire use a window-size flow control system. In this system, credits represent bytes and are allocated to producers. If a producer wants to send a message, the producer must wait until it has sufficient credits for the size of the message. By contrast, AMQP flow control credits do not represent bytes. Instead, AMQP credits represent the number of messages a producer is permitted to send, regardless of message size. Therefore, it is possible, in some situations, for AMQP producers to significantly exceed the max-size-bytes value of an address. Therefore, to block AMQP producers, you must use a different configuration element, max-size-bytes-reject-threshold . For a matching address or set of addresses, this element specifies the maximum size, in bytes, of all AMQP messages in memory. When the total size of all messages in memory reaches the specified limit, the broker blocks AMQP producers from sending further messages. The following procedure shows how to configure message blocking for AMQP message producers. Prerequisites You should be familiar with how to configure addresses and address settings. For more information, see Chapter 4, Configuring addresses and queues . Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. For an address-setting element that you have configured for a matching address or set of addresses, specify the maximum size of all AMQP messages in memory. For example: <address-settings> <address-setting match="my.amqp.blocking.address"> ... <max-size-bytes-reject-threshold>300000</max-size-bytes-reject-threshold> ... </address-setting> </address-settings> max-size-bytes-reject-threshold Maximum size, in bytes, of the memory allowed for the address before the broker blocks further AMQP messages. The value that you specify also supports byte notation such as "K", "MB", and "GB". By default, max-size-bytes-reject-threshold is set to -1 , which means that there is no maximum size. Note If you specify max-size-bytes-reject-threshold in an address-setting element, the value applies to each matching address. Specifying this value does not mean that the total size of all matching addresses is limited to the value of max-size-bytes-reject-threshold . In the preceding example, when messages sent to the address my.amqp.blocking.address exceed 300000 bytes in memory, the broker begins blocking further messages from AMQP producers. 7.4. 
Understanding memory usage on multicast addresses When a message is routed to an address that has multicast queues bound to it, there is only one copy of the message in memory. Each queue has only a reference to the message. Because of this, the associated memory is released only after all queues referencing the message have delivered it. In this type of situation, if you have a slow consumer, the entire address might experience a negative performance impact. For example, consider this scenario: An address has ten queues that use the multicast routing type. Due to a slow consumer, one of the queues does not deliver its messages. The other nine queues continue to deliver messages and are empty. Messages continue to arrive to the address. The queue with the slow consumer continues to accumulate references to the messages, causing the broker to keep the messages in memory. When the maximum size of the address is reached, the broker starts to page messages. In this scenario because of a single slow consumer, consumers on all queues are forced to consume messages from the page system, requiring additional IO. Additional resources To learn how to configure flow control to regulate the flow of data between the broker and producers and consumers, see Flow control in the AMQ Core Protocol JMS documentation. | [
"<configuration ...> <core ...> <paging-directory> /path/to/paging-directory </paging-directory> </core> </configuration>",
"<address-settings> <address-setting match=\"my.paged.address\"> <max-size-bytes>104857600</max-size-bytes> <page-size-bytes>10485760</page-size-bytes> <address-full-policy>PAGE</address-full-policy> </address-setting> </address-settings>",
"<broker_instance_dir> /bin/artemis stop",
"<broker_instance_dir> \\bin\\artemis-service.exe stop",
"<configuration> <core> <global-max-size>1GB</global-max-size> </core> </configuration>",
"<broker_instance_dir> /bin/artemis run",
"<broker_instance_dir> \\bin\\artemis-service.exe start",
"<broker_instance_dir> /bin/artemis stop",
"<broker_instance_dir> \\bin\\artemis-service.exe stop",
"<configuration> <core> <max-disk-usage>50</max-disk-usage> </core> </configuration>",
"<broker_instance_dir> /bin/artemis run",
"<broker_instance_dir> \\bin\\artemis-service.exe start",
"<address-settings> <address-setting match=\"my.blocking.address\"> <max-size-bytes>300000</max-size-bytes> <address-full-policy>BLOCK</address-full-policy> </address-setting> </address-settings>",
"<address-settings> <address-setting match=\"my.amqp.blocking.address\"> <max-size-bytes-reject-threshold>300000</max-size-bytes-reject-threshold> </address-setting> </address-settings>"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/configuring_amq_broker/assembly-br-configuring-maximum-memory-usage-for-addresses_configuring |
Chapter 3. Metro-DR solution for OpenShift Data Foundation | Chapter 3. Metro-DR solution for OpenShift Data Foundation This section of the guide provides details of the Metro Disaster Recovery (Metro DR) steps and commands necessary to be able to failover an application from one OpenShift Container Platform cluster to another and then failback the same application to the original primary cluster. In this case the OpenShift Container Platform clusters will be created or imported using Red Hat Advanced Cluster Management (RHACM) and have distance limitations between the OpenShift Container Platform clusters of less than 10ms RTT latency. The persistent storage for applications is provided by an external Red Hat Ceph Storage (RHCS) cluster stretched between the two locations with the OpenShift Container Platform instances connected to this storage cluster. An arbiter node with a storage monitor service is required at a third location (different location than where OpenShift Container Platform instances are deployed) to establish quorum for the RHCS cluster in the case of a site outage. This third location can be in the range of ~100ms RTT from the storage cluster connected to the OpenShift Container Platform instances. This is a general overview of the Metro DR steps required to configure and execute OpenShift Disaster Recovery (ODR) capabilities using OpenShift Data Foundation and RHACM across two distinct OpenShift Container Platform clusters separated by distance. In addition to these two clusters called managed clusters, a third OpenShift Container Platform cluster is required that will be the Red Hat Advanced Cluster Management (RHACM) hub cluster. Important You can now easily set up Metropolitan disaster recovery solutions for workloads based on OpenShift virtualization technology using OpenShift Data Foundation. For more information, see the knowledgebase article . 3.1. Components of Metro-DR solution Metro-DR is composed of Red Hat Advanced Cluster Management for Kubernetes, Red Hat Ceph Storage and OpenShift Data Foundation components to provide application and data mobility across OpenShift Container Platform clusters. Red Hat Advanced Cluster Management for Kubernetes Red Hat Advanced Cluster Management (RHACM) provides the ability to manage multiple clusters and application lifecycles. Hence, it serves as a control plane in a multi-cluster environment. RHACM is split into two parts: RHACM Hub: components that run on the multi-cluster control plane. Managed clusters: components that run on the clusters that are managed. For more information about this product, see RHACM documentation and the RHACM "Manage Applications" documentation . Red Hat Ceph Storage Red Hat Ceph Storage is a massively scalable, open, software-defined storage platform that combines the most stable version of the Ceph storage system with a Ceph management platform, deployment utilities, and support services. It significantly lowers the cost of storing enterprise data and helps organizations manage exponential data growth. The software is a robust and modern petabyte-scale storage platform for public or private cloud deployments. For more product information, see Red Hat Ceph Storage . OpenShift Data Foundation OpenShift Data Foundation provides the ability to provision and manage storage for stateful applications in an OpenShift Container Platform cluster. 
It is backed by Ceph as the storage provider, whose lifecycle is managed by Rook in the OpenShift Data Foundation component stack and Ceph-CSI provides the provisioning and management of Persistent Volumes for stateful applications. OpenShift DR OpenShift DR is a disaster recovery orchestrator for stateful applications across a set of peer OpenShift clusters which are deployed and managed using RHACM and provides cloud-native interfaces to orchestrate the life-cycle of an application's state on Persistent Volumes. These include: Protecting an application and its state relationship across OpenShift clusters Failing over an application and its state to a peer cluster Relocate an application and its state to the previously deployed cluster OpenShift DR is split into three components: ODF Multicluster Orchestrator : Installed on the multi-cluster control plane (RHACM Hub), it orchestrates configuration and peering of OpenShift Data Foundation clusters for Metro and Regional DR relationships. OpenShift DR Hub Operator : Automatically installed as part of ODF Multicluster Orchestrator installation on the hub cluster to orchestrate failover or relocation of DR enabled applications. OpenShift DR Cluster Operator : Automatically installed on each managed cluster that is part of a Metro and Regional DR relationship to manage the lifecycle of all PVCs of an application. 3.2. Metro-DR deployment workflow This section provides an overview of the steps required to configure and deploy Metro-DR capabilities using the latest versions of Red Hat OpenShift Data Foundation, Red Hat Ceph Storage (RHCS) and Red Hat Advanced Cluster Management for Kubernetes (RHACM) version 2.10 or later, across two distinct OpenShift Container Platform clusters. In addition to two managed clusters, a third OpenShift Container Platform cluster will be required to deploy the Advanced Cluster Management. To configure your infrastructure, perform the below steps in the order given: Ensure requirements across the Hub, Primary and Secondary Openshift Container Platform clusters that are part of the DR solution are met. See Requirements for enabling Metro-DR . Ensure you meet the requirements for deploying Red Hat Ceph Storage stretch cluster with arbiter. See Requirements for deploying Red Hat Ceph Storage . Deploy and configure Red Hat Ceph Storage stretch mode. For instructions on enabling Ceph cluster on two different data centers using stretched mode functionality, see Deploying Red Hat Ceph Storage . Install OpenShift Data Foundation operator and create a storage system on Primary and Secondary managed clusters. See Installing OpenShift Data Foundation on managed clusters . Install the ODF Multicluster Orchestrator on the Hub cluster. See Installing ODF Multicluster Orchestrator on Hub cluster . Configure SSL access between the Hub, Primary and Secondary clusters. See Configuring SSL access across clusters . Create a DRPolicy resource for use with applications requiring DR protection across the Primary and Secondary clusters. See Creating Disaster Recovery Policy on Hub cluster . Note The Metro-DR solution can only have one DRpolicy. Testing your disaster recovery solution with: Subscription-based application: Create sample applications. See Creating sample application . Test failover and relocate operations using the sample application between managed clusters. See Subscription-based application failover and relocating subscription-based application . ApplicationSet-based application: Create sample applications. 
See Creating ApplicationSet-based applications . Test failover and relocate operations using the sample application between managed clusters. See ApplicationSet-based application failover and relocating ApplicationSet-based application . 3.3. Requirements for enabling Metro-DR The prerequisites to installing a disaster recovery solution supported by Red Hat OpenShift Data Foundation are as follows: You must have the following OpenShift clusters that have network reachability between them: Hub cluster where Red Hat Advanced Cluster Management (RHACM) for Kubernetes operator are installed. Primary managed cluster where OpenShift Data Foundation is running. Secondary managed cluster where OpenShift Data Foundation is running. Note For configuring hub recovery setup, you need a 4th cluster which acts as the passive hub. The primary managed cluster (Site-1) can be co-situated with the active RHACM hub cluster while the passive hub cluster is situated along with the secondary managed cluster (Site-2). Alternatively, the active RHACM hub cluster can be placed in a neutral site (Site-3) that is not impacted by the failures of either of the primary managed cluster at Site-1 or the secondary cluster at Site-2. In this situation, if a passive hub cluster is used it can be placed with the secondary cluster at Site-2. For more information, see Configuring passive hub cluster for hub recovery . Hub recovery is a Technology Preview feature and is subject to Technology Preview support limitations. Ensure that RHACM operator and MultiClusterHub is installed on the Hub cluster. See RHACM installation guide for instructions. After the operator is successfully installed, a popover with a message that the Web console update is available appears on the user interface. Click Refresh web console from this popover for the console changes to reflect. Important Ensure that application traffic routing and redirection are configured appropriately. On the Hub cluster Navigate to All Clusters Infrastructure Clusters . Import or create the Primary managed cluster and the Secondary managed cluster using the RHACM console. Choose the appropriate options for your environment. After the managed clusters are successfully created or imported, you can see the list of clusters that were imported or created on the console. For instructions, see Creating a cluster and Importing a target managed cluster to the hub cluster . Warning The Openshift Container Platform managed clusters and the Red Hat Ceph Storage (RHCS) nodes have distance limitations. The network latency between the sites must be below 10 milliseconds round-trip time (RTT). 3.4. Requirements for deploying Red Hat Ceph Storage stretch cluster with arbiter Red Hat Ceph Storage is an open-source enterprise platform that provides unified software-defined storage on standard, economical servers and disks. With block, object, and file storage combined into one platform, Red Hat Ceph Storage efficiently and automatically manages all your data, so you can focus on the applications and workloads that use it. This section provides a basic overview of the Red Hat Ceph Storage deployment. For more complex deployment, refer to the official documentation guide for Red Hat Ceph Storage 7 . Note Only Flash media is supported since it runs with min_size=1 when degraded. Use stretch mode only with all-flash OSDs. Using all-flash OSDs minimizes the time needed to recover once connectivity is restored, thus minimizing the potential for data loss. 
Important Erasure coded pools cannot be used with stretch mode. 3.4.1. Hardware requirements For information on minimum hardware requirements for deploying Red Hat Ceph Storage, see Minimum hardware recommendations for containerized Ceph . Table 3.1. Physical server locations and Ceph component layout for Red Hat Ceph Storage cluster deployment: Node name Datacenter Ceph components ceph1 DC1 OSD+MON+MGR ceph2 DC1 OSD+MON ceph3 DC1 OSD+MDS+RGW ceph4 DC2 OSD+MON+MGR ceph5 DC2 OSD+MON ceph6 DC2 OSD+MDS+RGW ceph7 DC3 MON 3.4.2. Software requirements Use the latest software version of Red Hat Ceph Storage 7 . For more information on the supported Operating System versions for Red Hat Ceph Storage, see knowledgebase article on Red Hat Ceph Storage: Supported configurations . 3.4.3. Network configuration requirements The recommended Red Hat Ceph Storage configuration is as follows: You must have two separate networks, one public network and one private network. You must have three different datacenters that support VLANS and subnets for Cephs private and public network for all datacenters. Note You can use different subnets for each of the datacenters. The latencies between the two datacenters running the Red Hat Ceph Storage Object Storage Devices (OSDs) cannot exceed 10 ms RTT. For the arbiter datacenter, this was tested with values as high up to 100 ms RTT to the other two OSD datacenters. Here is an example of a basic network configuration that we have used in this guide: DC1: Ceph public/private network: 10.0.40.0/24 DC2: Ceph public/private network: 10.0.40.0/24 DC3: Ceph public/private network: 10.0.40.0/24 For more information on the required network environment, see Ceph network configuration . 3.5. Deploying Red Hat Ceph Storage 3.5.1. Node pre-deployment steps Before installing the Red Hat Ceph Storage Ceph cluster, perform the following steps to fulfill all the requirements needed. Register all the nodes to the Red Hat Network or Red Hat Satellite and subscribe to a valid pool: subscription-manager register subscription-manager subscribe --pool=8a8XXXXXX9e0 Enable access for all the nodes in the Ceph cluster for the following repositories: rhel9-for-x86_64-baseos-rpms rhel9-for-x86_64-appstream-rpms subscription-manager repos --disable="*" --enable="rhel9-for-x86_64-baseos-rpms" --enable="rhel9-for-x86_64-appstream-rpms" Update the operating system RPMs to the latest version and reboot if needed: dnf update -y reboot Select a node from the cluster to be your bootstrap node. ceph1 is our bootstrap node in this example going forward. Only on the bootstrap node ceph1 , enable the ansible-2.9-for-rhel-9-x86_64-rpms and rhceph-6-tools-for-rhel-9-x86_64-rpms repositories: subscription-manager repos --enable="ansible-2.9-for-rhel-9-x86_64-rpms" --enable="rhceph-6-tools-for-rhel-9-x86_64-rpms" Configure the hostname using the bare/short hostname in all the hosts. hostnamectl set-hostname <short_name> Verify the hostname configuration for deploying Red Hat Ceph Storage with cephadm. USD hostname Example output: Modify /etc/hosts file and add the fqdn entry to the 127.0.0.1 IP by setting the DOMAIN variable with our DNS domain name. Check the long hostname with the fqdn using the hostname -f option. USD hostname -f Example output: Note To know more about why these changes are required, see Fully Qualified Domain Names vs Bare Host Names . Run the following steps on the bootstrap node. In our example, the bootstrap node is ceph1 . 
Install the cephadm-ansible RPM package: USD sudo dnf install -y cephadm-ansible Important To run the ansible playbooks, you must have ssh passwordless access to all the nodes that are configured to the Red Hat Ceph Storage cluster. Ensure that the configured user (for example, deployment-user ) has root privileges to invoke the sudo command without needing a password. To use a custom key, configure the selected user (for example, deployment-user ) ssh config file to specify the id/key that will be used for connecting to the nodes via ssh: cat <<EOF > ~/.ssh/config Host ceph* User deployment-user IdentityFile ~/.ssh/ceph.pem EOF Build the ansible inventory cat <<EOF > /usr/share/cephadm-ansible/inventory ceph1 ceph2 ceph3 ceph4 ceph5 ceph6 ceph7 [admin] ceph1 ceph4 EOF Note Here, the Hosts ( Ceph1 and Ceph4 ) belonging to two different data centers are configured as part of the [admin] group on the inventory file and are tagged as _admin by cephadm . Each of these admin nodes receive the admin ceph keyring during the bootstrap process so that when one data center is down, we can check using the other available admin node. Verify that ansible can access all nodes using the ping module before running the pre-flight playbook. USD ansible -i /usr/share/cephadm-ansible/inventory -m ping all -b Example output: Navigate to the /usr/share/cephadm-ansible directory. Run ansible-playbook with relative file paths. USD ansible-playbook -i /usr/share/cephadm-ansible/inventory /usr/share/cephadm-ansible/cephadm-preflight.yml --extra-vars "ceph_origin=rhcs" The preflight playbook Ansible playbook configures the RHCS dnf repository and prepares the storage cluster for bootstrapping. It also installs podman, lvm2, chronyd, and cephadm. The default location for cephadm-ansible and cephadm-preflight.yml is /usr/share/cephadm-ansible . For additional information, see Running the preflight playbook 3.5.2. Cluster bootstrapping and service deployment with cephadm utility The cephadm utility installs and starts a single Ceph Monitor daemon and a Ceph Manager daemon for a new Red Hat Ceph Storage cluster on the local node where the cephadm bootstrap command is run. In this guide we are going to bootstrap the cluster and deploy all the needed Red Hat Ceph Storage services in one step using a cluster specification yaml file. If you find issues during the deployment, it may be easier to troubleshoot the errors by dividing the deployment into two steps: Bootstrap Service deployment Note For additional information on the bootstrapping process, see Bootstrapping a new storage cluster . Procedure Create json file to authenticate against the container registry using a json file as follows: USD cat <<EOF > /root/registry.json { "url":"registry.redhat.io", "username":"User", "password":"Pass" } EOF Create a cluster-spec.yaml that adds the nodes to the Red Hat Ceph Storage cluster and also sets specific labels for where the services should run following table 3.1. 
cat <<EOF > /root/cluster-spec.yaml service_type: host addr: 10.0.40.78 ## <XXX.XXX.XXX.XXX> hostname: ceph1 ## <ceph-hostname-1> location: root: default datacenter: DC1 labels: - osd - mon - mgr --- service_type: host addr: 10.0.40.35 hostname: ceph2 location: datacenter: DC1 labels: - osd - mon --- service_type: host addr: 10.0.40.24 hostname: ceph3 location: datacenter: DC1 labels: - osd - mds - rgw --- service_type: host addr: 10.0.40.185 hostname: ceph4 location: root: default datacenter: DC2 labels: - osd - mon - mgr --- service_type: host addr: 10.0.40.88 hostname: ceph5 location: datacenter: DC2 labels: - osd - mon --- service_type: host addr: 10.0.40.66 hostname: ceph6 location: datacenter: DC2 labels: - osd - mds - rgw --- service_type: host addr: 10.0.40.221 hostname: ceph7 labels: - mon --- service_type: mon placement: label: "mon" --- service_type: mds service_id: cephfs placement: label: "mds" --- service_type: mgr service_name: mgr placement: label: "mgr" --- service_type: osd service_id: all-available-devices service_name: osd.all-available-devices placement: label: "osd" spec: data_devices: all: true --- service_type: rgw service_id: objectgw service_name: rgw.objectgw placement: count: 2 label: "rgw" spec: rgw_frontend_port: 8080 EOF Retrieve the IP for the NIC with the Red Hat Ceph Storage public network configured from the bootstrap node. After substituting 10.0.40.0 with the subnet that you have defined in your ceph public network, execute the following command. USD ip a | grep 10.0.40 Example output: Run the cephadm bootstrap command as the root user on the node that will be the initial Monitor node in the cluster. The IP_ADDRESS option is the node's IP address that you are using to run the cephadm bootstrap command. Note If you have configured a different user instead of root for passwordless SSH access, then use the --ssh-user= flag with the cepadm bootstrap command. If you are using non default/id_rsa ssh key names, then use --ssh-private-key and --ssh-public-key options with cephadm command. USD cephadm bootstrap --ssh-user=deployment-user --mon-ip 10.0.40.78 --apply-spec /root/cluster-spec.yaml --registry-json /root/registry.json Important If the local node uses fully-qualified domain names (FQDN), then add the --allow-fqdn-hostname option to cephadm bootstrap on the command line. Once the bootstrap finishes, you will see the following output from the cephadm bootstrap command: You can access the Ceph CLI with: sudo /usr/sbin/cephadm shell --fsid dd77f050-9afe-11ec-a56c-029f8148ea14 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring Consider enabling telemetry to help improve Ceph: ceph telemetry on For more information see: https://docs.ceph.com/docs/pacific/mgr/telemetry/ Verify the status of Red Hat Ceph Storage cluster deployment using the Ceph CLI client from ceph1: USD ceph -s Example output: Note It may take several minutes for all the services to start. It is normal to get a global recovery event while you do not have any OSDs configured. You can use ceph orch ps and ceph orch ls to further check the status of the services. Verify if all the nodes are part of the cephadm cluster. USD ceph orch host ls Example output: Note You can run Ceph commands directly from the host because ceph1 was configured in the cephadm-ansible inventory as part of the [admin] group. The Ceph admin keys were copied to the host during the cephadm bootstrap process. Check the current placement of the Ceph monitor services on the datacenters. 
USD ceph orch ps | grep mon | awk '{print USD1 " " USD2}' Example output: Check the current placement of the Ceph manager services on the datacenters. Example output: Check the ceph osd crush map layout to ensure that each host has one OSD configured and its status is UP . Also, double-check that each node is under the right datacenter bucket as specified in table 3.1 USD ceph osd tree Example output: Create and enable a new RDB block pool. Note The number 32 at the end of the command is the number of PGs assigned to this pool. The number of PGs can vary depending on several factors like the number of OSDs in the cluster, expected % used of the pool, etc. You can use the following calculator to determine the number of PGs needed: Ceph Placement Groups (PGs) per Pool Calculator . Verify that the RBD pool has been created. Example output: Verify that MDS services are active and have located one service on each datacenter. Example output: Create the CephFS volume. USD ceph fs volume create cephfs Note The ceph fs volume create command also creates the needed data and meta CephFS pools. For more information, see Configuring and Mounting Ceph File Systems . Check the Ceph status to verify how the MDS daemons have been deployed. Ensure that the state is active where ceph6 is the primary MDS for this filesystem and ceph3 is the secondary MDS. USD ceph fs status Example output: Verify that RGW services are active. USD ceph orch ps | grep rgw Example output: 3.5.3. Configuring Red Hat Ceph Storage stretch mode Once the Red Hat Ceph Storage cluster is fully deployed using cephadm , use the following procedure to configure the stretch cluster mode. The new stretch mode is designed to handle the 2-site case. Procedure Check the current election strategy being used by the monitors with the ceph mon dump command. By default in a ceph cluster, the connectivity is set to classic. ceph mon dump | grep election_strategy Example output: Change the monitor election to connectivity. ceph mon set election_strategy connectivity Run the ceph mon dump command again to verify the election_strategy value. USD ceph mon dump | grep election_strategy Example output: To know more about the different election strategies, see Configuring monitor election strategy . Set the location for all our Ceph monitors: ceph mon set_location ceph1 datacenter=DC1 ceph mon set_location ceph2 datacenter=DC1 ceph mon set_location ceph4 datacenter=DC2 ceph mon set_location ceph5 datacenter=DC2 ceph mon set_location ceph7 datacenter=DC3 Verify that each monitor has its appropriate location. USD ceph mon dump Example output: Create a CRUSH rule that makes use of this OSD crush topology by installing the ceph-base RPM package in order to use the crushtool command: USD dnf -y install ceph-base To know more about CRUSH ruleset, see Ceph CRUSH ruleset . Get the compiled CRUSH map from the cluster: USD ceph osd getcrushmap > /etc/ceph/crushmap.bin Decompile the CRUSH map and convert it to a text file in order to be able to edit it: USD crushtool -d /etc/ceph/crushmap.bin -o /etc/ceph/crushmap.txt Add the following rule to the CRUSH map by editing the text file /etc/ceph/crushmap.txt at the end of the file. USD vim /etc/ceph/crushmap.txt This example is applicable for active applications in both OpenShift Container Platform clusters. Note The rule id has to be unique. In the example, we only have one more crush rule with id 0 hence we are using id 1. If your deployment has more rules created, then use the free id. 
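As a reference, here is what the stretch_rule definition assembled from the field descriptions below looks like; treat it as a reconstruction and verify it against your own decompiled /etc/ceph/crushmap.txt before compiling:

rule stretch_rule {
        id 1
        type replicated
        min_size 1
        max_size 10
        step take default
        step choose firstn 0 type datacenter
        step chooseleaf firstn 2 type host
        step emit
}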
The CRUSH rule declared contains the following information: Rule name Description: A unique whole name for identifying the rule. Value: stretch_rule id Description: A unique whole number for identifying the rule. Value: 1 type Description: Describes a rule for either a storage drive replicated or erasure-coded. Value: replicated min_size Description: If a pool makes fewer replicas than this number, CRUSH will not select this rule. Value: 1 max_size Description: If a pool makes more replicas than this number, CRUSH will not select this rule. Value: 10 step take default Description: Takes the root bucket called default , and begins iterating down the tree. step choose firstn 0 type datacenter Description: Selects the datacenter bucket, and goes into its subtrees. step chooseleaf firstn 2 type host Description: Selects the number of buckets of the given type. In this case, it is two different hosts located in the datacenter it entered at the level. step emit Description: Outputs the current value and empties the stack. Typically used at the end of a rule, but may also be used to pick from different trees in the same rule. Compile the new CRUSH map from the file /etc/ceph/crushmap.txt and convert it to a binary file called /etc/ceph/crushmap2.bin : USD crushtool -c /etc/ceph/crushmap.txt -o /etc/ceph/crushmap2.bin Inject the new crushmap we created back into the cluster: USD ceph osd setcrushmap -i /etc/ceph/crushmap2.bin Example output: Note The number 17 is a counter and it will increase (18,19, and so on) depending on the changes you make to the crush map. Verify that the stretched rule created is now available for use. ceph osd crush rule ls Example output: Enable the stretch cluster mode. USD ceph mon enable_stretch_mode ceph7 stretch_rule datacenter In this example, ceph7 is the arbiter node, stretch_rule is the crush rule we created in the step and datacenter is the dividing bucket. Verify all our pools are using the stretch_rule CRUSH rule we have created in our Ceph cluster: USD for pool in USD(rados lspools);do echo -n "Pool: USD{pool}; ";ceph osd pool get USD{pool} crush_rule;done Example output: This indicates that a working Red Hat Ceph Storage stretched cluster with arbiter mode is now available. 3.6. Installing OpenShift Data Foundation on managed clusters To configure storage replication between the two OpenShift Container Platform clusters, OpenShift Data Foundation operator must be installed first on each managed cluster. Prerequisites Ensure that you have met the hardware requirements for OpenShift Data Foundation external deployments. For a detailed description of the hardware requirements, see External mode requirements . Procedure Install and configure the latest OpenShift Data Foundation cluster on each of the managed clusters. After installing the operator, create a StorageSystem using the option Full deployment type and Connect with external storage platform where your Backing storage type is Red Hat Ceph Storage . For detailed instructions, refer to Deploying OpenShift Data Foundation in external mode . Use the following flags with the ceph-external-cluster-details-exporter.py script. At a minimum, you must use the following three flags with the ceph-external-cluster-details-exporter.py script : --rbd-data-pool-name With the name of the RBD pool that was created during RHCS deployment for OpenShift Container Platform. For example, the pool can be called rbdpool . --rgw-endpoint Provide the endpoint in the format <ip_address>:<port> . 
It is the RGW IP of the RGW daemon running on the same site as the OpenShift Container Platform cluster that you are configuring. --run-as-user With a different client name for each site. The following flags are optional if default values were used during the RHCS deployment: --cephfs-filesystem-name With the name of the CephFS filesystem we created during RHCS deployment for OpenShift Container Platform, the default filesystem name is cephfs . --cephfs-data-pool-name With the name of the CephFS data pool we created during RHCS deployment for OpenShift Container Platform, the default pool is called cephfs.data . --cephfs-metadata-pool-name With the name of the CephFS metadata pool we created during RHCS deployment for OpenShift Container Platform, the default pool is called cephfs.meta . Run the following command on the bootstrap node ceph1 , to get the IP for the RGW endpoints in datacenter1 and datacenter2: Example output: Example output: Run the ceph-external-cluster-details-exporter.py with the parameters that are configured for the first OpenShift Container Platform managed cluster cluster1 on bootstrapped node ceph1 . Note Modify the <rgw-endpoint> XXX.XXX.XXX.XXX according to your environment. Run the ceph-external-cluster-details-exporter.py with the parameters that are configured for the first OpenShift Container Platform managed cluster cluster2 on bootstrapped node ceph1 . Note Modify the <rgw-endpoint> XXX.XXX.XXX.XXX according to your environment. Save the two files generated in the bootstrap cluster (ceph1) ocp-cluster1.json and ocp-cluster2.json to your local machine. Use the contents of file ocp-cluster1.json on the OpenShift Container Platform console on cluster1 where external OpenShift Data Foundation is being deployed. Use the contents of file ocp-cluster2.json on the OpenShift Container Platform console on cluster2 where external OpenShift Data Foundation is being deployed. Review the settings and then select Create StorageSystem . Validate the successful deployment of OpenShift Data Foundation on each managed cluster with the following command: For the Multicloud Gateway (MCG): Wait for the status result to be Ready for both queries on the Primary managed cluster and the Secondary managed cluster . On the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-external-storagecluster-storagesystem Resources . Verify that the Status of StorageCluster is Ready and has a green tick mark to it. 3.7. Installing OpenShift Data Foundation Multicluster Orchestrator operator OpenShift Data Foundation Multicluster Orchestrator is a controller that is installed from OpenShift Container Platform's OperatorHub on the Hub cluster. Procedure On the Hub cluster , navigate to OperatorHub and use the keyword filter to search for ODF Multicluster Orchestrator . Click ODF Multicluster Orchestrator tile. Keep all default settings and click Install . Ensure that the operator resources are installed in openshift-operators project and available to all namespaces. Note The ODF Multicluster Orchestrator also installs the Openshift DR Hub Operator on the RHACM hub cluster as a dependency. Verify that the operator Pods are in a Running state. The OpenShift DR Hub operator is also installed at the same time in openshift-operators namespace. Example output: 3.8. 
Configuring SSL access across clusters Configure network (SSL) access between the primary and secondary clusters so that metadata can be stored on the alternate cluster in a Multicloud Gateway (MCG) object bucket using a secure transport protocol and in the Hub cluster for verifying access to the object buckets. Note If all of your OpenShift clusters are deployed using a signed and valid set of certificates for your environment then this section can be skipped. Procedure Extract the ingress certificate for the Primary managed cluster and save the output to primary.crt . Extract the ingress certificate for the Secondary managed cluster and save the output to secondary.crt . Create a new ConfigMap file to hold the remote cluster's certificate bundle with filename cm-clusters-crt.yaml . Note There could be more or less than three certificates for each cluster as shown in this example file. Also, ensure that the certificate contents are correctly indented after you copy and paste from the primary.crt and secondary.crt files that were created before. Create the ConfigMap on the Primary managed cluster , Secondary managed cluster , and the Hub cluster . Example output: Patch default proxy resource on the Primary managed cluster , Secondary managed cluster , and the Hub cluster . Example output: 3.9. Creating Disaster Recovery Policy on Hub cluster Openshift Disaster Recovery Policy (DRPolicy) resource specifies OpenShift Container Platform clusters participating in the disaster recovery solution and the desired replication interval. DRPolicy is a cluster scoped resource that users can apply to applications that require Disaster Recovery solution. The ODF MultiCluster Orchestrator Operator facilitates the creation of each DRPolicy and the corresponding DRClusters through the Multicluster Web console . Prerequisites Ensure that there is a minimum set of two managed clusters. Procedure On the OpenShift console , navigate to All Clusters Data Services Data policies . Click Create DRPolicy . Enter Policy name . Ensure that each DRPolicy has a unique name (for example: ocp4perf1-ocp4perf2 ). Select two clusters from the list of managed clusters to which this new policy will be associated with. Replication policy is automatically set to sync based on the OpenShift clusters selected. Click Create . Verify that the DRPolicy is created successfully. Run this command on the Hub cluster for each of the DRPolicy resources created, where <drpolicy_name> is replaced with your unique name. Example output: When a DRPolicy is created, along with it, two DRCluster resources are also created. It could take up to 10 minutes for all three resources to be validated and for the status to show as Succeeded . Note Editing of SchedulingInterval , ReplicationClassSelector , VolumeSnapshotClassSelector and DRClusters field values are not supported in the DRPolicy. Verify the object bucket access from the Hub cluster to both the Primary managed cluster and the Secondary managed cluster . Get the names of the DRClusters on the Hub cluster. Example output: Check S3 access to each bucket created on each managed cluster. Use the DRCluster validation command, where <drcluster_name> is replaced with your unique name. Note Editing of Region and S3ProfileName field values are non supported in DRClusters. Example output: Note Make sure to run commands for both DRClusters on the Hub cluster . Verify that the OpenShift DR Cluster operator installation was successful on the Primary managed cluster and the Secondary managed cluster . 
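A typical check, assuming the OpenShift DR Cluster Operator is installed in its default openshift-dr-system namespace, is to run the following on both the Primary and Secondary managed clusters and confirm that the ClusterServiceVersion phase is Succeeded and the operator pod is Running:

oc get csv,pod -n openshift-dr-system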
Example output: You can also verify that OpenShift DR Cluster Operator is installed successfully on the OperatorHub of each managed cluster. Verify that the secret is propagated correctly on the Primary managed cluster and the Secondary managed cluster. Match the output with the s3SecretRef from the Hub cluster: 3.10. Configure DRClusters for fencing automation This configuration is required for enabling fencing prior to application failover. In order to prevent writes to the persistent volume from the cluster which is hit by a disaster, OpenShift DR instructs Red Hat Ceph Storage (RHCS) to fence the nodes of the cluster from the RHCS external storage. This section guides you on how to add the IPs or the IP Ranges for the nodes of the DRCluster. 3.10.1. Add node IP addresses to DRClusters Find the IP addresses for all of the OpenShift nodes in the managed clusters by running this command in the Primary managed cluster and the Secondary managed cluster . Example output: Once you have the IP addresses then the DRCluster resources can be modified for each managed cluster. Find the DRCluster names on the Hub Cluster. Example output: Edit each DRCluster to add your unique IP addresses after replacing <drcluster_name> with your unique name. Example output: Note There could be more than six IP addresses. Modify this DRCluster configuration also for IP addresses on the Secondary managed clusters in the peer DRCluster resource (e.g., ocp4perf2). 3.10.2. Add fencing annotations to DRClusters Add the following annotations to all the DRCluster resources. These annotations include details needed for the NetworkFence resource created later in these instructions (prior to testing application failover). Note Replace <drcluster_name> with your unique name. Example output: Make sure to add these annotations for both DRCluster resources (for example: ocp4perf1 and ocp4perf2 ). 3.11. Create sample application for testing disaster recovery solution OpenShift Data Foundation disaster recovery (DR) solution supports disaster recovery for Subscription-based and ApplicationSet-based applications that are managed by RHACM. For more details, see Subscriptions and ApplicationSet documentation. The following sections detail how to create an application and apply a DRPolicy to an application. Subscription-based applications OpenShift users that do not have cluster-admin permissions, see the knowledge article on how to assign necessary permissions to an application user for executing disaster recovery actions. ApplicationSet-based applications OpenShift users that do not have cluster-admin permissions cannot create ApplicationSet-based applications. 3.11.1. Subscription-based applications 3.11.1.1. Creating a sample Subscription-based application In order to test failover from the Primary managed cluster to the Secondary managed cluster and relocate , we need a sample application. Prerequisites When creating an application for general consumption, ensure that the application is deployed to ONLY one cluster. Use the sample application called busybox as an example. Ensure all external routes of the application are configured using either Global Traffic Manager (GTM) or Global Server Load Balancing (GLSB) service for traffic redirection when the application fails over or is relocated. As a best practice, group Red Hat Advanced Cluster Management (RHACM) subscriptions that belong together, refer to a single Placement Rule to DR protect them as a group. 
Further create them as a single application for a logical grouping of the subscriptions for future DR actions like failover and relocate. Note If unrelated subscriptions refer to the same Placement Rule for placement actions, they are also DR protected as the DR workflow controls all subscriptions that references the Placement Rule. Procedure On the Hub cluster, navigate to Applications and click Create application . Select type as Subscription . Enter your application Name (for example, busybox ) and Namespace (for example, busybox-sample ). In the Repository location for resources section, select Repository type Git . Enter the Git repository URL for the sample application, the github Branch and Path where the resources busybox Pod and PVC will be created. Use the sample application repository as https://github.com/red-hat-storage/ocm-ramen-samples where the Branch is release-4.15 and Path is busybox-odr-metro . Scroll down in the form until you see Deploy application resources on clusters with all specified labels . Select the global Cluster sets or the one that includes the correct managed clusters for your environment. Add a label <name> with its value set to the managed cluster name. Click Create which is at the top right hand corner. On the follow-on screen go to the Topology tab. You should see that there are all Green checkmarks on the application topology. Note To get more information, click on any of the topology elements and a window will appear on the right of the topology view. Validating the sample application deployment. Now that the busybox application has been deployed to your preferred Cluster, the deployment can be validated. Log in to your managed cluster where busybox was deployed by RHACM. Example output: 3.11.1.2. Apply Data policy to sample application Prerequisites Ensure that both managed clusters referenced in the Data policy are reachable. If not, the application will not be protected for disaster recovery until both clusters are online. Procedure On the Hub cluster, navigate to All Clusters Applications . Click the Actions menu at the end of application to view the list of available actions. Click Manage data policy Assign data policy . Select Policy and click . Select an Application resource and then use PVC label selector to select PVC label for the selected application resource. Note You can select more than one PVC label for the selected application resources. You can also use the Add application resource option to add multiple resources. After adding all the application resources, click . Review the Policy configuration details and click Assign . The newly assigned Data policy is displayed on the Manage data policy modal list view. Verify that you can view the assigned policy details on the Applications page. On the Applications page, navigate to the Data policy column and click the policy link to expand the view. Verify that you can see the number of policies assigned along with failover and relocate status. Click View more details to view the status of ongoing activities with the policy in use with the application. After you apply DRPolicy to the applications, confirm whether the ClusterDataProtected is set to True in the drpc yaml output. 3.11.2. ApplicationSet-based applications 3.11.2.1. Creating ApplicationSet-based applications Prerequisite Ensure that the Red Hat OpenShift GitOps operator is installed on the Hub cluster. For instructions, see RHACM documentation . Ensure that both Primary and Secondary managed clusters are registered to GitOps. 
For registration instructions, see Registering managed clusters to GitOps . Then check if the Placement used by GitOpsCluster resource to register both managed clusters, has the tolerations to deal with cluster unavailability. You can verify if the following tolerations are added to the Placement using the command oc get placement <placement-name> -n openshift-gitops -o yaml . In case the tolerations are not added, see Configuring application placement tolerations for Red Hat Advanced Cluster Management and OpenShift GitOps . Procedure On the Hub cluster, navigate to All Clusters Applications and click Create application . Choose application type as Argo CD ApplicationSet - Push model In General step 1, enter your Application set name . Select Argo server openshift-gitops and Requeue time as 180 seconds. Click . In the Repository location for resources section, select Repository type Git . Enter the Git repository URL for the sample application, the github Branch and Path where the resources busybox Pod and PVC will be created. Use the sample application repository as https://github.com/red-hat-storage/ocm-ramen-samples Select Revision as release-4.15 Choose Path as busybox-odr-metro . Enter Remote namespace value. (example, busybox-sample) and click . Select Sync policy settings and click . You can choose one or more options. Add a label <name> with its value set to the managed cluster name. Click . Review the setting details and click Submit . 3.11.2.2. Apply Data policy to sample ApplicationSet-based application Prerequisites Ensure that both managed clusters referenced in the Data policy are reachable. If not, the application will not be protected for disaster recovery until both clusters are online. Procedure On the Hub cluster, navigate to All Clusters Applications . Click the Actions menu at the end of application to view the list of available actions. Click Manage data policy Assign data policy . Select Policy and click . Select an Application resource and then use PVC label selector to select PVC label for the selected application resource. Note You can select more than one PVC label for the selected application resources. After adding all the application resources, click . Review the Policy configuration details and click Assign . The newly assigned Data policy is displayed on the Manage data policy modal list view. Verify that you can view the assigned policy details on the Applications page. On the Applications page, navigate to the Data policy column and click the policy link to expand the view. Verify that you can see the number of policies assigned along with failover and relocate status. After you apply DRPolicy to the applications, confirm whether the ClusterDataProtected is set to True in the drpc yaml output. 3.11.3. Deleting sample application This section provides instructions for deleting the sample application busybox using the RHACM console. Important When deleting a DR protected application, access to both clusters that belong to the DRPolicy is required. This is to ensure that all protected API resources and resources in the respective S3 stores are cleaned up as part of removing the DR protection. If access to one of the clusters is not healthy, deleting the DRPlacementControl resource for the application, on the hub, would remain in the Deleting state. 
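One quick way to spot this condition, assuming the drpc short name that the DR (Ramen) CustomResourceDefinitions provide for DRPlacementControl, is to list the resources on the hub and look for entries that carry a deletion timestamp but never disappear:

oc get drpc -A
oc get drpc -A -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{" "}{.metadata.deletionTimestamp}{"\n"}{end}'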
Prerequisites These instructions to delete the sample application should not be executed until the failover and relocate testing is completed and the application is ready to be removed from RHACM and the managed clusters. Procedure On the RHACM console, navigate to Applications . Search for the sample application to be deleted (for example, busybox ). Click the Action Menu (...) to the application you want to delete. Click Delete application . When the Delete application is selected a new screen will appear asking if the application related resources should also be deleted. Select Remove application related resources checkbox to delete the Subscription and PlacementRule. Click Delete . This will delete the busybox application on the Primary managed cluster (or whatever cluster the application was running on). In addition to the resources deleted using the RHACM console, delete the DRPlacementControl if it is not auto-deleted after deleting the busybox application. Log in to the OpenShift Web console for the Hub cluster and navigate to Installed Operators for the project busybox-sample . For ApplicationSet applications, select the project as openshift-gitops . Click OpenShift DR Hub Operator and then click the DRPlacementControl tab. Click the Action Menu (...) to the busybox application DRPlacementControl that you want to delete. Click Delete DRPlacementControl . Click Delete . Note This process can be used to delete any application with a DRPlacementControl resource. 3.12. Subscription-based application failover between managed clusters Perform a failover when a managed cluster becomes unavailable, due to any reason. This failover method is application-based. Prerequisites If your setup has active and passive RHACM hub clusters, see Hub recovery using Red Hat Advanced Cluster Management . When the primary cluster is in a state other than Ready , check the actual status of the cluster as it might take some time to update. Navigate to the RHACM console Infrastructure Clusters Cluster list tab. Check the status of both the managed clusters individually before performing failover operation. However, failover operation can still be performed when the cluster you are failing over to is in a Ready state. Procedure Enable fencing on the Hub cluster . Open CLI terminal and edit the DRCluster resource , where <drcluster_name> is your unique name. Caution Once the managed cluster is fenced, all communication from applications to the OpenShift Data Foundation external storage cluster will fail and some Pods will be in an unhealthy state (for example: CreateContainerError , CrashLoopBackOff ) on the cluster that is now fenced. Example output: Verify the fencing status on the Hub cluster for the Primary managed cluster , replacing <drcluster_name> is your unique identifier. Example output: Verify that the IPs that belong to the OpenShift Container Platform cluster nodes are now in the blocklist. Example output On the Hub cluster, navigate to Applications . Click the Actions menu at the end of application row to view the list of available actions. Click Failover application . After the Failover application modal is shown, select policy and target cluster to which the associated application will failover in case of a disaster. Click the Select subscription group dropdown to verify the default selection or modify this setting. By default, the subscription group that replicates for the application resources is selected. Check the status of the Failover readiness . 
If the status is Ready with a green tick, it indicates that the target cluster is ready for failover to start. Proceed to step 7. If the status is Unknown or Not ready , then wait until the status changes to Ready . Click Initiate . The busybox application is now failing over to the Secondary-managed cluster . Close the modal window and track the status using the Data policy column on the Applications page. Verify that the activity status shows as FailedOver for the application. Navigate to the Applications Overview tab. In the Data policy column, click the policy link for the application you applied the policy to. On the Data policy popover, click the View more details link. 3.13. ApplicationSet-based application failover between managed clusters Perform a failover when a managed cluster becomes unavailable, due to any reason. This failover method is application-based. Prerequisites If your setup has active and passive RHACM hub clusters, see Hub recovery using Red Hat Advanced Cluster Management . When the primary cluster is in a state other than Ready , check the actual status of the cluster as it might take some time to update. Navigate to the RHACM console Infrastructure Clusters Cluster list tab. Check the status of both the managed clusters individually before performing failover operation. However, failover operation can still be performed when the cluster you are failing over to is in a Ready state. Procedure Enable fencing on the Hub cluster . Open CLI terminal and edit the DRCluster resource , where <drcluster_name> is your unique name. Caution Once the managed cluster is fenced, all communication from applications to the OpenShift Data Foundation external storage cluster will fail and some Pods will be in an unhealthy state (for example: CreateContainerError , CrashLoopBackOff ) on the cluster that is now fenced. Example output: Verify the fencing status on the Hub cluster for the Primary managed cluster , replacing <drcluster_name> is your unique identifier. Example output: Verify that the IPs that belong to the OpenShift Container Platform cluster nodes are now in the blocklist. Example output On the Hub cluster, navigate to Applications . Click the Actions menu at the end of application row to view the list of available actions. Click Failover application . When the Failover application modal is shown, verify the details presented are correct and check the status of the Failover readiness . If the status is Ready with a green tick, it indicates that the target cluster is ready for failover to start. Click Initiate . The busybox resources are now created on the target cluster. Close the modal window and track the status using the Data policy column on the Applications page. Verify that the activity status shows as FailedOver for the application. Navigate to the Applications Overview tab. In the Data policy column, click the policy link for the application you applied the policy to. On the Data policy popover, verify that you can see one or more policy names and the ongoing activities associated with the policy in use with the application. 3.14. Relocating Subscription-based application between managed clusters Relocate an application to its preferred location when all managed clusters are available. Prerequisite If your setup has active and passive RHACM hub clusters, see Hub recovery using Red Hat Advanced Cluster Management . When the primary cluster is in a state other than Ready , check the actual status of the cluster as it might take some time to update. 
Relocate can only be performed when both primary and preferred clusters are up and running. Navigate to RHACM console Infrastructure Clusters Cluster list tab. Check the status of both the managed clusters individually before performing relocate operation. Verify that applications were cleaned up from the cluster before unfencing it. Procedure Disable fencing on the Hub cluster. Edit the DRCluster resource for this cluster, replacing <drcluster_name> with a unique name. Example output: Gracefully reboot OpenShift Container Platform nodes that were Fenced . A reboot is required to resume the I/O operations after unfencing to avoid any further recovery orchestration failures. Reboot all nodes of the cluster by following the steps in the procedure, Rebooting a node gracefully . Note Make sure that all the nodes are initially cordoned and drained before you reboot and perform uncordon operations on the nodes. After all OpenShift nodes are rebooted and are in a Ready status, verify that all Pods are in a healthy state by running this command on the Primary managed cluster (or whatever cluster has been Unfenced). Example output: The output for this query should be zero Pods before proceeding to the step. Important If there are Pods still in an unhealthy status because of severed storage communication, troubleshoot and resolve before continuing. Because the storage cluster is external to OpenShift, it also has to be properly recovered after a site outage for OpenShift applications to be healthy. Alternatively, you can use the OpenShift Web Console dashboards and Overview tab to assess the health of applications and the external ODF storage cluster. The detailed OpenShift Data Foundation dashboard is found by navigating to Storage Data Foundation . Verify that the Unfenced cluster is in a healthy state. Validate the fencing status in the Hub cluster for the Primary-managed cluster, replacing <drcluster_name> with a unique name. Example output: Verify that the IPs that belong to the OpenShift Container Platform cluster nodes are NOT in the blocklist. Ensure that you do not see the IPs added during fencing. On the Hub cluster, navigate to Applications . Click the Actions menu at the end of application row to view the list of available actions. Click Relocate application . When the Relocate application modal is shown, select policy and target cluster to which the associated application will relocate to in case of a disaster. By default, the subscription group that will deploy the application resources is selected. Click the Select subscription group dropdown to verify the default selection or modify this setting. Check the status of the Relocation readiness . If the status is Ready with a green tick, it indicates that the target cluster is ready for relocation to start. Proceed to step 7. If the status is Unknown or Not ready , then wait until the status changes to Ready . Click Initiate . The busybox resources are now created on the target cluster. Close the modal window and track the status using the Data policy column on the Applications page. Verify that the activity status shows as Relocated for the application. Navigate to the Applications Overview tab. In the Data policy column, click the policy link for the application you applied the policy to. On the Data policy popover, click the View more details link. 3.15. Relocating an ApplicationSet-based application between managed clusters Relocate an application to its preferred location when all managed clusters are available. 
Prerequisite If your setup has active and passive RHACM hub clusters, see Hub recovery using Red Hat Advanced Cluster Management . When the primary cluster is in a state other than Ready , check the actual status of the cluster as it might take some time to update. Relocate can only be performed when both primary and preferred clusters are up and running. Navigate to RHACM console Infrastructure Clusters Cluster list tab. Check the status of both the managed clusters individually before performing relocate operation. Verify that applications were cleaned up from the cluster before unfencing it. Procedure Disable fencing on the Hub cluster. Edit the DRCluster resource for this cluster, replacing <drcluster_name> with a unique name. Example output: Gracefully reboot OpenShift Container Platform nodes that were Fenced . A reboot is required to resume the I/O operations after unfencing to avoid any further recovery orchestration failures. Reboot all nodes of the cluster by following the steps in the procedure, Rebooting a node gracefully . Note Make sure that all the nodes are initially cordoned and drained before you reboot and perform uncordon operations on the nodes. After all OpenShift nodes are rebooted and are in a Ready status, verify that all Pods are in a healthy state by running this command on the Primary managed cluster (or whatever cluster has been Unfenced). Example output: The output for this query should be zero Pods before proceeding to the step. Important If there are Pods still in an unhealthy status because of severed storage communication, troubleshoot and resolve before continuing. Because the storage cluster is external to OpenShift, it also has to be properly recovered after a site outage for OpenShift applications to be healthy. Alternatively, you can use the OpenShift Web Console dashboards and Overview tab to assess the health of applications and the external ODF storage cluster. The detailed OpenShift Data Foundation dashboard is found by navigating to Storage Data Foundation . Verify that the Unfenced cluster is in a healthy state. Validate the fencing status in the Hub cluster for the Primary-managed cluster, replacing <drcluster_name> with a unique name. Example output: Verify that the IPs that belong to the OpenShift Container Platform cluster nodes are NOT in the blocklist. Ensure that you do not see the IPs added during fencing. On the Hub cluster, navigate to Applications . Click the Actions menu at the end of application row to view the list of available actions. Click Relocate application . When the Relocate application modal is shown, select policy and target cluster to which the associated application will relocate to in case of a disaster. Click Initiate . The busybox resources are now created on the target cluster. Close the modal window and track the status using the Data policy column on the Applications page. Verify that the activity status shows as Relocated for the application. Navigate to the Applications Overview tab. In the Data policy column, click the policy link for the application you applied the policy to. On the Data policy popover, verify that you can see one or more policy names and the relocation status associated with the policy in use with the application. 3.16. Recovering to a replacement cluster with Metro-DR When there is a failure with the primary cluster, you get the options to either repair, wait for the recovery of the existing cluster, or replace the cluster entirely if the cluster is irredeemable. 
This solution guides you when replacing a failed primary cluster with a new cluster and enables failback (relocate) to this new cluster. In these instructions, we are assuming that a RHACM managed cluster must be replaced after the applications have been installed and protected. For purposes of this section, the RHACM managed cluster is the replacement cluster , while the cluster that is not replaced is the surviving cluster and the new cluster is the recovery cluster . Prerequisite Ensure that the Metro-DR environment has been configured with applications installed using Red Hat Advance Cluster Management (RHACM). Ensure that the applications are assigned a Data policy which protects them against cluster failure. Procedure Perform the following steps on the Hub cluster : Fence the replacement cluster by using the CLI terminal to edit the DRCluster resource, where <drcluster_name> is the replacement cluster name. Using the RHACM console, navigate to Applications and failover all protected applications from the failed cluster to the surviving cluster. Verify and ensure that all protected applications are now running on the surviving cluster. Note The PROGRESSION state for each application DRPlacementControl will show as Cleaning Up . This is expected if the replacement cluster is offline or down. Unfence the replacement cluster. Using the CLI terminal, edit the DRCluster resource, where <drcluster_name> is the replacement cluster name. Delete the DRCluster for the replacement cluster. Note Use --wait=false since the DRCluster will not be deleted until a later step. Disable disaster recovery on the Hub cluster for each protected application on the surviving cluster. For each application, edit the Placement and ensure that the surviving cluster is selected. Note For Subscription-based applications the associated Placement can be found in the same namespace on the hub cluster similar to the managed clusters. For ApplicationSets-based applications the associated Placement can be found in the openshift-gitops namespace on the hub cluster. Verify that the s3Profile is removed for the replacement cluster by running the following command on the surviving cluster for each protected application's VolumeReplicationGroup. After the protected application Placement resources are all configured to use the surviving cluster and replacement cluster s3Profile(s) removed from protected applications, all DRPlacementControl resources must be deleted from the Hub cluster . Note For Subscription-based applications the associated DRPlacementControl can be found in the same namespace as the managed clusters on the hub cluster. For ApplicationSets-based applications the associated DRPlacementControl can be found in the openshift-gitops namespace on the hub cluster. Verify that all DRPlacementControl resources are deleted before proceeding to the step. This command is a query across all namespaces. There should be no resources found. The last step is to edit each applications Placement and remove the annotation cluster.open-cluster-management.io/experimental-scheduling-disable: "true" . Repeat the process detailed in the last step and the sub-steps for every protected application on the surviving cluster. Disabling DR for protected applications is now completed. On the Hub cluster, run the following script to remove all disaster recovery configurations from the surviving cluster and the hub cluster . 
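As an outline only, based on the note that follows and the resources created earlier in this guide, the cleanup removes the remaining DRPolicy and DRCluster resources on the hub and deletes the openshift-operators project that hosts the DR operators. A minimal, illustrative sketch (not the supported cleanup script) would be:

# On the hub cluster: remove the remaining DR policy and cluster resources
oc delete drpolicy --all
oc delete drcluster --all --wait=false
# Remove the DR operators installed in openshift-operators (see the note below)
oc delete project openshift-operators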
Note This script uses the command oc delete project openshift-operators to remove the Disaster Recovery (DR) operators in this namespace on the hub cluster. If there are other non-DR operators in this namespace, you must install them again from OperatorHub. After the namespace openshift-operators is automatically created again, add the monitoring label back for collecting the disaster recovery metrics. On the surviving cluster, ensure that the object bucket created during the DR installation is deleted. Delete the object bucket if it was not removed by the script. The name of the object bucket used for DR starts with odrbucket . On the RHACM console, navigate to Infrastructure Clusters view . Detach the replacement cluster. Create a new OpenShift cluster (recovery cluster) and import the new cluster into the RHACM console. For instructions, see Creating a cluster and Importing a target managed cluster to the hub cluster . Install the OpenShift Data Foundation operator on the recovery cluster and connect it to the same external Ceph storage system as the surviving cluster. For detailed instructions, refer to Deploying OpenShift Data Foundation in external mode . Note Ensure that the OpenShift Data Foundation version is 4.15 (or greater) and that the same version of OpenShift Data Foundation is installed on the surviving cluster. On the hub cluster, install the ODF Multicluster Orchestrator operator from OperatorHub. For instructions, see the chapter on Installing OpenShift Data Foundation Multicluster Orchestrator operator . Using the RHACM console, navigate to Data Services Data policies . Select Create DRPolicy and name your policy. Select the recovery cluster and the surviving cluster . Create the policy. For instructions, see the chapter on Creating Disaster Recovery Policy on Hub cluster . Proceed to the next step only after the status of the DRPolicy changes to Validated . Apply the DRPolicy to the applications on the surviving cluster that were originally protected before the replacement cluster failed. Relocate the newly protected applications on the surviving cluster back to the new recovery (primary) cluster. Using the RHACM console, navigate to the Applications menu to perform the relocation. 3.17. Hub recovery using Red Hat Advanced Cluster Management [Technology preview] When your setup has active and passive Red Hat Advanced Cluster Management for Kubernetes (RHACM) hub clusters and the active hub is down, you can use the passive hub to fail over or relocate the disaster recovery protected workloads. Important Hub recovery is a Technology Preview feature and is subject to Technology Preview support limitations. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Technology Preview Features Support Scope . 3.17.1. Configuring passive hub cluster To perform hub recovery in case the active hub is down or unreachable, follow the procedure in this section to configure the passive hub cluster and then fail over or relocate the disaster recovery protected workloads. Procedure Ensure that the RHACM operator and MultiClusterHub are installed on the passive hub cluster. See the RHACM installation guide for instructions.
After the operator is successfully installed, a popover appears on the user interface with a message that a web console update is available. Click Refresh web console from this popover for the console changes to take effect. Before hub recovery, configure backup and restore. See the Backup and restore topics of the RHACM Business continuity guide. Install the multicluster orchestrator (MCO) operator along with the Red Hat OpenShift GitOps operator on the passive RHACM hub prior to the restore. For instructions to restore your RHACM hub, see Installing OpenShift Data Foundation Multicluster Orchestrator operator . Ensure that .spec.cleanupBeforeRestore is set to None for the Restore.cluster.open-cluster-management.io resource. For details, see the Restoring passive resources while checking for backups chapter of the RHACM documentation. If SSL access across clusters was configured manually during setup, then re-configure SSL access across clusters. For instructions, see the Configuring SSL access across clusters chapter. On the passive hub, add the monitoring label for collecting the disaster recovery metrics. For alert details, see the Disaster recovery alerts chapter. 3.17.2. Switching to passive hub cluster Use this procedure when the active hub is down or unreachable. Procedure Restore the backups on the passive hub cluster. For information, see Restoring a hub cluster from backup. Important Recovering a failed hub to its passive instance only restores applications and their DR protected state to the last scheduled backup. Any application that was DR protected after the last scheduled backup needs to be protected again on the new hub. Verify that the Primary and Secondary managed clusters are successfully imported into the RHACM console and that they are accessible. If any of the managed clusters are down or unreachable, they will not be successfully imported. Wait until DRPolicy validation succeeds. Verify that the DRPolicy is created successfully. Run this command on the Hub cluster for each of the DRPolicy resources created, where <drpolicy_name> is replaced with a unique name. Example output: Refresh the RHACM console to make the DR monitoring dashboard tab accessible if it was enabled on the Active hub cluster. If only the active hub cluster is down, restore the hub by performing hub recovery and restoring the backups on the passive hub. If the managed clusters are still accessible, no further action is required. If the primary managed cluster is down along with the active hub cluster, you need to fail over the workloads from the primary managed cluster to the secondary managed cluster. For failover instructions, based on your workload type, see Subscription-based applications or ApplicationSet-based applications . Verify that the failover is successful. When the Primary managed cluster is down, the PROGRESSION status for the workload remains in the Cleaning Up phase until the down managed cluster is back online and successfully imported into the RHACM console. On the passive hub cluster, run the following command to check the PROGRESSION status. Example output: | [
"subscription-manager register subscription-manager subscribe --pool=8a8XXXXXX9e0",
"subscription-manager repos --disable=\"*\" --enable=\"rhel9-for-x86_64-baseos-rpms\" --enable=\"rhel9-for-x86_64-appstream-rpms\"",
"dnf update -y reboot",
"subscription-manager repos --enable=\"ansible-2.9-for-rhel-9-x86_64-rpms\" --enable=\"rhceph-6-tools-for-rhel-9-x86_64-rpms\"",
"hostnamectl set-hostname <short_name>",
"hostname",
"ceph1",
"DOMAIN=\"example.domain.com\" cat <<EOF >/etc/hosts 127.0.0.1 USD(hostname).USD{DOMAIN} USD(hostname) localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 USD(hostname).USD{DOMAIN} USD(hostname) localhost6 localhost6.localdomain6 EOF",
"hostname -f",
"ceph1.example.domain.com",
"sudo dnf install -y cephadm-ansible",
"cat <<EOF > ~/.ssh/config Host ceph* User deployment-user IdentityFile ~/.ssh/ceph.pem EOF",
"cat <<EOF > /usr/share/cephadm-ansible/inventory ceph1 ceph2 ceph3 ceph4 ceph5 ceph6 ceph7 [admin] ceph1 ceph4 EOF",
"ansible -i /usr/share/cephadm-ansible/inventory -m ping all -b",
"ceph6 | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/libexec/platform-python\" }, \"changed\": false, \"ping\": \"pong\" } ceph4 | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/libexec/platform-python\" }, \"changed\": false, \"ping\": \"pong\" } ceph3 | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/libexec/platform-python\" }, \"changed\": false, \"ping\": \"pong\" } ceph2 | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/libexec/platform-python\" }, \"changed\": false, \"ping\": \"pong\" } ceph5 | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/libexec/platform-python\" }, \"changed\": false, \"ping\": \"pong\" } ceph1 | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/libexec/platform-python\" }, \"changed\": false, \"ping\": \"pong\" } ceph7 | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/libexec/platform-python\" }, \"changed\": false, \"ping\": \"pong\" }",
"ansible-playbook -i /usr/share/cephadm-ansible/inventory /usr/share/cephadm-ansible/cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\"",
"cat <<EOF > /root/registry.json { \"url\":\"registry.redhat.io\", \"username\":\"User\", \"password\":\"Pass\" } EOF",
"cat <<EOF > /root/cluster-spec.yaml service_type: host addr: 10.0.40.78 ## <XXX.XXX.XXX.XXX> hostname: ceph1 ## <ceph-hostname-1> location: root: default datacenter: DC1 labels: - osd - mon - mgr --- service_type: host addr: 10.0.40.35 hostname: ceph2 location: datacenter: DC1 labels: - osd - mon --- service_type: host addr: 10.0.40.24 hostname: ceph3 location: datacenter: DC1 labels: - osd - mds - rgw --- service_type: host addr: 10.0.40.185 hostname: ceph4 location: root: default datacenter: DC2 labels: - osd - mon - mgr --- service_type: host addr: 10.0.40.88 hostname: ceph5 location: datacenter: DC2 labels: - osd - mon --- service_type: host addr: 10.0.40.66 hostname: ceph6 location: datacenter: DC2 labels: - osd - mds - rgw --- service_type: host addr: 10.0.40.221 hostname: ceph7 labels: - mon --- service_type: mon placement: label: \"mon\" --- service_type: mds service_id: cephfs placement: label: \"mds\" --- service_type: mgr service_name: mgr placement: label: \"mgr\" --- service_type: osd service_id: all-available-devices service_name: osd.all-available-devices placement: label: \"osd\" spec: data_devices: all: true --- service_type: rgw service_id: objectgw service_name: rgw.objectgw placement: count: 2 label: \"rgw\" spec: rgw_frontend_port: 8080 EOF",
"ip a | grep 10.0.40",
"10.0.40.78",
"cephadm bootstrap --ssh-user=deployment-user --mon-ip 10.0.40.78 --apply-spec /root/cluster-spec.yaml --registry-json /root/registry.json",
"You can access the Ceph CLI with: sudo /usr/sbin/cephadm shell --fsid dd77f050-9afe-11ec-a56c-029f8148ea14 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring Consider enabling telemetry to help improve Ceph: ceph telemetry on For more information see: https://docs.ceph.com/docs/pacific/mgr/telemetry/",
"ceph -s",
"cluster: id: 3a801754-e01f-11ec-b7ab-005056838602 health: HEALTH_OK services: mon: 5 daemons, quorum ceph1,ceph2,ceph4,ceph5,ceph7 (age 4m) mgr: ceph1.khuuot(active, since 5m), standbys: ceph4.zotfsp osd: 12 osds: 12 up (since 3m), 12 in (since 4m) rgw: 2 daemons active (2 hosts, 1 zones) data: pools: 5 pools, 107 pgs objects: 191 objects, 5.3 KiB usage: 105 MiB used, 600 GiB / 600 GiB avail 105 active+clean",
"ceph orch host ls",
"HOST ADDR LABELS STATUS ceph1 10.0.40.78 _admin osd mon mgr ceph2 10.0.40.35 osd mon ceph3 10.0.40.24 osd mds rgw ceph4 10.0.40.185 osd mon mgr ceph5 10.0.40.88 osd mon ceph6 10.0.40.66 osd mds rgw ceph7 10.0.40.221 mon",
"ceph orch ps | grep mon | awk '{print USD1 \" \" USD2}'",
"mon.ceph1 ceph1 mon.ceph2 ceph2 mon.ceph4 ceph4 mon.ceph5 ceph5 mon.ceph7 ceph7",
"ceph orch ps | grep mgr | awk '{print USD1 \" \" USD2}'",
"mgr.ceph2.ycgwyz ceph2 mgr.ceph5.kremtt ceph5",
"ceph osd tree",
"ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1 0.87900 root default -16 0.43950 datacenter DC1 -11 0.14650 host ceph1 2 ssd 0.14650 osd.2 up 1.00000 1.00000 -3 0.14650 host ceph2 3 ssd 0.14650 osd.3 up 1.00000 1.00000 -13 0.14650 host ceph3 4 ssd 0.14650 osd.4 up 1.00000 1.00000 -17 0.43950 datacenter DC2 -5 0.14650 host ceph4 0 ssd 0.14650 osd.0 up 1.00000 1.00000 -9 0.14650 host ceph5 1 ssd 0.14650 osd.1 up 1.00000 1.00000 -7 0.14650 host ceph6 5 ssd 0.14650 osd.5 up 1.00000 1.00000",
"ceph osd pool create 32 32 ceph osd pool application enable rbdpool rbd",
"ceph osd lspools | grep rbdpool",
"3 rbdpool",
"ceph orch ps | grep mds",
"mds.cephfs.ceph3.cjpbqo ceph3 running (17m) 117s ago 17m 16.1M - 16.2.9 mds.cephfs.ceph6.lqmgqt ceph6 running (17m) 117s ago 17m 16.1M - 16.2.9",
"ceph fs volume create cephfs",
"ceph fs status",
"cephfs - 0 clients ====== RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS 0 active cephfs.ceph6.ggjywj Reqs: 0 /s 10 13 12 0 POOL TYPE USED AVAIL cephfs.cephfs.meta metadata 96.0k 284G cephfs.cephfs.data data 0 284G STANDBY MDS cephfs.ceph3.ogcqkl",
"ceph orch ps | grep rgw",
"rgw.objectgw.ceph3.kkmxgb ceph3 *:8080 running (7m) 3m ago 7m 52.7M - 16.2.9 rgw.objectgw.ceph6.xmnpah ceph6 *:8080 running (7m) 3m ago 7m 53.3M - 16.2.9",
"ceph mon dump | grep election_strategy",
"dumped monmap epoch 9 election_strategy: 1",
"ceph mon set election_strategy connectivity",
"ceph mon dump | grep election_strategy",
"dumped monmap epoch 10 election_strategy: 3",
"ceph mon set_location ceph1 datacenter=DC1 ceph mon set_location ceph2 datacenter=DC1 ceph mon set_location ceph4 datacenter=DC2 ceph mon set_location ceph5 datacenter=DC2 ceph mon set_location ceph7 datacenter=DC3",
"ceph mon dump",
"epoch 17 fsid dd77f050-9afe-11ec-a56c-029f8148ea14 last_changed 2022-03-04T07:17:26.913330+0000 created 2022-03-03T14:33:22.957190+0000 min_mon_release 16 (pacific) election_strategy: 3 0: [v2:10.0.143.78:3300/0,v1:10.0.143.78:6789/0] mon.ceph1; crush_location {datacenter=DC1} 1: [v2:10.0.155.185:3300/0,v1:10.0.155.185:6789/0] mon.ceph4; crush_location {datacenter=DC2} 2: [v2:10.0.139.88:3300/0,v1:10.0.139.88:6789/0] mon.ceph5; crush_location {datacenter=DC2} 3: [v2:10.0.150.221:3300/0,v1:10.0.150.221:6789/0] mon.ceph7; crush_location {datacenter=DC3} 4: [v2:10.0.155.35:3300/0,v1:10.0.155.35:6789/0] mon.ceph2; crush_location {datacenter=DC1}",
"dnf -y install ceph-base",
"ceph osd getcrushmap > /etc/ceph/crushmap.bin",
"crushtool -d /etc/ceph/crushmap.bin -o /etc/ceph/crushmap.txt",
"vim /etc/ceph/crushmap.txt",
"rule stretch_rule { id 1 type replicated min_size 1 max_size 10 step take default step choose firstn 0 type datacenter step chooseleaf firstn 2 type host step emit } end crush map",
"crushtool -c /etc/ceph/crushmap.txt -o /etc/ceph/crushmap2.bin",
"ceph osd setcrushmap -i /etc/ceph/crushmap2.bin",
"17",
"ceph osd crush rule ls",
"replicated_rule stretch_rule",
"ceph mon enable_stretch_mode ceph7 stretch_rule datacenter",
"for pool in USD(rados lspools);do echo -n \"Pool: USD{pool}; \";ceph osd pool get USD{pool} crush_rule;done",
"Pool: device_health_metrics; crush_rule: stretch_rule Pool: cephfs.cephfs.meta; crush_rule: stretch_rule Pool: cephfs.cephfs.data; crush_rule: stretch_rule Pool: .rgw.root; crush_rule: stretch_rule Pool: default.rgw.log; crush_rule: stretch_rule Pool: default.rgw.control; crush_rule: stretch_rule Pool: default.rgw.meta; crush_rule: stretch_rule Pool: rbdpool; crush_rule: stretch_rule",
"ceph orch ps | grep rgw.objectgw",
"rgw.objectgw.ceph3.mecpzm ceph3 *:8080 running (5d) 31s ago 7w 204M - 16.2.7-112.el8cp rgw.objectgw.ceph6.mecpzm ceph6 *:8080 running (5d) 31s ago 7w 204M - 16.2.7-112.el8cp",
"host ceph3.example.com host ceph6.example.com",
"ceph3.example.com has address 10.0.40.24 ceph6.example.com has address 10.0.40.66",
"python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name rbdpool --cephfs-filesystem-name cephfs --cephfs-data-pool-name cephfs.cephfs.data --cephfs-metadata-pool-name cephfs.cephfs.meta --<rgw-endpoint> XXX.XXX.XXX.XXX:8080 --run-as-user client.odf.cluster1 > ocp-cluster1.json",
"python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name rbdpool --cephfs-filesystem-name cephfs --cephfs-data-pool-name cephfs.cephfs.data --cephfs-metadata-pool-name cephfs.cephfs.meta --rgw-endpoint XXX.XXX.XXX.XXX:8080 --run-as-user client.odf.cluster2 > ocp-cluster2.json",
"oc get storagecluster -n openshift-storage ocs-external-storagecluster -o jsonpath='{.status.phase}{\"\\n\"}'",
"oc get noobaa -n openshift-storage noobaa -o jsonpath='{.status.phase}{\"\\n\"}'",
"oc get pods -n openshift-operators",
"NAME READY STATUS RESTARTS AGE odf-multicluster-console-6845b795b9-blxrn 1/1 Running 0 4d20h odfmo-controller-manager-f9d9dfb59-jbrsd 1/1 Running 0 4d20h ramen-hub-operator-6fb887f885-fss4w 2/2 Running 0 4d20h",
"oc get cm default-ingress-cert -n openshift-config-managed -o jsonpath=\"{['data']['ca-bundle\\.crt']}\" > primary.crt",
"oc get cm default-ingress-cert -n openshift-config-managed -o jsonpath=\"{['data']['ca-bundle\\.crt']}\" > secondary.crt",
"apiVersion: v1 data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- <copy contents of cert1 from primary.crt here> -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- <copy contents of cert2 from primary.crt here> -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- <copy contents of cert3 primary.crt here> -----END CERTIFICATE---- -----BEGIN CERTIFICATE----- <copy contents of cert1 from secondary.crt here> -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- <copy contents of cert2 from secondary.crt here> -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- <copy contents of cert3 from secondary.crt here> -----END CERTIFICATE----- kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config",
"oc create -f cm-clusters-crt.yaml",
"configmap/user-ca-bundle created",
"oc patch proxy cluster --type=merge --patch='{\"spec\":{\"trustedCA\":{\"name\":\"user-ca-bundle\"}}}'",
"proxy.config.openshift.io/cluster patched",
"oc get drpolicy <drpolicy_name> -o jsonpath='{.status.conditions[].reason}{\"\\n\"}'",
"Succeeded",
"oc get drclusters",
"oc get drcluster <drcluster_name> -o jsonpath='{.status.conditions[2].reason}{\"\\n\"}'",
"Succeeded",
"oc get csv,pod -n openshift-dr-system",
"NAME DISPLAY VERSION REPLACES PHASE clusterserviceversion.operators.coreos.com/odr-cluster-operator.v4.15.0 Openshift DR Cluster Operator 4.15.0 Succeeded clusterserviceversion.operators.coreos.com/volsync-product.v0.8.0 VolSync 0.8.0 Succeeded NAME READY STATUS RESTARTS AGE pod/ramen-dr-cluster-operator-6467cf5d4c-cc8kz 2/2 Running 0 3d12h",
"get secrets -n openshift-dr-system | grep Opaque",
"get cm -n openshift-operators ramen-hub-operator-config -oyaml",
"oc get nodes -o jsonpath='{range .items[*]}{.status.addresses[?(@.type==\"ExternalIP\")].address}{\"\\n\"}{end}'",
"10.70.56.118 10.70.56.193 10.70.56.154 10.70.56.242 10.70.56.136 10.70.56.99",
"oc get drcluster",
"NAME AGE ocp4perf1 5m35s ocp4perf2 5m35s",
"oc edit drcluster <drcluster_name>",
"apiVersion: ramendr.openshift.io/v1alpha1 kind: DRCluster metadata: [...] spec: s3ProfileName: s3profile-<drcluster_name>-ocs-external-storagecluster ## Add this section cidrs: - <IP_Address1>/32 - <IP_Address2>/32 - <IP_Address3>/32 - <IP_Address4>/32 - <IP_Address5>/32 - <IP_Address6>/32 [...]",
"drcluster.ramendr.openshift.io/ocp4perf1 edited",
"oc edit drcluster <drcluster_name>",
"apiVersion: ramendr.openshift.io/v1alpha1 kind: DRCluster metadata: ## Add this section annotations: drcluster.ramendr.openshift.io/storage-clusterid: openshift-storage drcluster.ramendr.openshift.io/storage-driver: openshift-storage.rbd.csi.ceph.com drcluster.ramendr.openshift.io/storage-secret-name: rook-csi-rbd-provisioner drcluster.ramendr.openshift.io/storage-secret-namespace: openshift-storage [...]",
"drcluster.ramendr.openshift.io/ocp4perf1 edited",
"oc get pods,pvc -n busybox-sample",
"NAME READY STATUS RESTARTS AGE pod/busybox-67bf494b9-zl5tr 1/1 Running 0 77s NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE persistentvolumeclaim/busybox-pvc Bound pvc-c732e5fe-daaf-4c4d-99dd-462e04c18412 5Gi RWO ocs-storagecluster-ceph-rbd 77s",
"tolerations: - key: cluster.open-cluster-management.io/unreachable operator: Exists - key: cluster.open-cluster-management.io/unavailable operator: Exists",
"oc edit drcluster <drcluster_name>",
"apiVersion: ramendr.openshift.io/v1alpha1 kind: DRCluster metadata: [...] spec: ## Add this line clusterFence: Fenced cidrs: [...] [...]",
"drcluster.ramendr.openshift.io/ocp4perf1 edited",
"oc get drcluster.ramendr.openshift.io <drcluster_name> -o jsonpath='{.status.phase}{\"\\n\"}'",
"Fenced",
"ceph osd blocklist ls",
"cidr:10.1.161.1:0/32 2028-10-30T22:30:03.585634+0000 cidr:10.1.161.14:0/32 2028-10-30T22:30:02.483561+0000 cidr:10.1.161.51:0/32 2028-10-30T22:30:01.272267+0000 cidr:10.1.161.63:0/32 2028-10-30T22:30:05.099655+0000 cidr:10.1.161.129:0/32 2028-10-30T22:29:58.335390+0000 cidr:10.1.161.130:0/32 2028-10-30T22:29:59.861518+0000",
"oc edit drcluster <drcluster_name>",
"apiVersion: ramendr.openshift.io/v1alpha1 kind: DRCluster metadata: [...] spec: ## Add this line clusterFence: Fenced cidrs: [...] [...]",
"drcluster.ramendr.openshift.io/ocp4perf1 edited",
"oc get drcluster.ramendr.openshift.io <drcluster_name> -o jsonpath='{.status.phase}{\"\\n\"}'",
"Fenced",
"ceph osd blocklist ls",
"cidr:10.1.161.1:0/32 2028-10-30T22:30:03.585634+0000 cidr:10.1.161.14:0/32 2028-10-30T22:30:02.483561+0000 cidr:10.1.161.51:0/32 2028-10-30T22:30:01.272267+0000 cidr:10.1.161.63:0/32 2028-10-30T22:30:05.099655+0000 cidr:10.1.161.129:0/32 2028-10-30T22:29:58.335390+0000 cidr:10.1.161.130:0/32 2028-10-30T22:29:59.861518+0000",
"oc edit drcluster <drcluster_name>",
"apiVersion: ramendr.openshift.io/v1alpha1 kind: DRCluster metadata: [...] spec: cidrs: [...] ## Modify this line clusterFence: Unfenced [...] [...]",
"drcluster.ramendr.openshift.io/ocp4perf1 edited",
"get pods -A | egrep -v 'Running|Completed'",
"NAMESPACE NAME READY STATUS RESTARTS AGE",
"oc get drcluster.ramendr.openshift.io <drcluster_name> -o jsonpath='{.status.phase}{\"\\n\"}'",
"Unfenced",
"ceph osd blocklist ls",
"oc edit drcluster <drcluster_name>",
"apiVersion: ramendr.openshift.io/v1alpha1 kind: DRCluster metadata: [...] spec: cidrs: [...] ## Modify this line clusterFence: Unfenced [...] [...]",
"drcluster.ramendr.openshift.io/ocp4perf1 edited",
"get pods -A | egrep -v 'Running|Completed'",
"NAMESPACE NAME READY STATUS RESTARTS AGE",
"oc get drcluster.ramendr.openshift.io <drcluster_name> -o jsonpath='{.status.phase}{\"\\n\"}'",
"Unfenced",
"ceph osd blocklist ls",
"edit drcluster <drcluster_name>",
"apiVersion: ramendr.openshift.io/v1alpha1 kind: DRCluster metadata: [...] spec: ## Add or modify this line clusterFence: Fenced cidrs: [...] [...]",
"oc edit drcluster <drcluster_name>",
"apiVersion: ramendr.openshift.io/v1alpha1 kind: DRCluster metadata: [...] spec: ## Modify this line clusterFence: Unfenced cidrs: [...] [...]",
"oc delete drcluster <drcluster_name> --wait=false",
"oc edit placement <placement_name> -n <namespace>",
"apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: annotations: cluster.open-cluster-management.io/experimental-scheduling-disable: \"true\" [...] spec: clusterSets: - submariner predicates: - requiredClusterSelector: claimSelector: {} labelSelector: matchExpressions: - key: name operator: In values: - cluster1 <-- Modify to be surviving cluster name [...]",
"oc get vrg -n <application_namespace> -o jsonpath='{.items[0].spec.s3Profiles}' | jq",
"oc delete drpc <drpc_name> -n <namespace>",
"oc get drpc -A",
"oc edit placement <placement_name> -n <namespace>",
"apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: annotations: ## Remove this annotation cluster.open-cluster-management.io/experimental-scheduling-disable: \"true\" [...]",
"#!/bin/bash secrets=USD(oc get secrets -n openshift-operators | grep Opaque | cut -d\" \" -f1) echo USDsecrets for secret in USDsecrets do oc patch -n openshift-operators secret/USDsecret -p '{\"metadata\":{\"finalizers\":null}}' --type=merge done mirrorpeers=USD(oc get mirrorpeer -o name) echo USDmirrorpeers for mp in USDmirrorpeers do oc patch USDmp -p '{\"metadata\":{\"finalizers\":null}}' --type=merge oc delete USDmp done drpolicies=USD(oc get drpolicy -o name) echo USDdrpolicies for drp in USDdrpolicies do oc patch USDdrp -p '{\"metadata\":{\"finalizers\":null}}' --type=merge oc delete USDdrp done drclusters=USD(oc get drcluster -o name) echo USDdrclusters for drp in USDdrclusters do oc patch USDdrp -p '{\"metadata\":{\"finalizers\":null}}' --type=merge oc delete USDdrp done delete project openshift-operators managedclusters=USD(oc get managedclusters -o name | cut -d\"/\" -f2) echo USDmanagedclusters for mc in USDmanagedclusters do secrets=USD(oc get secrets -n USDmc | grep multicluster.odf.openshift.io/secret-type | cut -d\" \" -f1) echo USDsecrets for secret in USDsecrets do set -x oc patch -n USDmc secret/USDsecret -p '{\"metadata\":{\"finalizers\":null}}' --type=merge oc delete -n USDmc secret/USDsecret done done delete clusterrolebinding spoke-clusterrole-bindings",
"oc label namespace openshift-operators openshift.io/cluster-monitoring='true'",
"oc get obc -n openshift-storage",
"oc label namespace openshift-operators openshift.io/cluster-monitoring='true'",
"oc get drpolicy <drpolicy_name> -o jsonpath='{.status.conditions[].reason}{\"\\n\"}'",
"Succeeded",
"oc get drpc -o wide -A",
"NAMESPACE NAME AGE PREFERREDCLUSTER FAILOVERCLUSTER DESIREDSTATE CURRENTSTATE PROGRESSION START TIME DURATION PEER READY [...] busybox cephfs-busybox-placement-1-drpc 103m cluster-1 cluster-2 Failover FailedOver Cleaning Up 2024-04-15T09:12:23Z False busybox cephfs-busybox-placement-1-drpc 102m cluster-1 Deployed Completed 2024-04-15T07:40:09Z 37.200569819s True [...]"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/configuring_openshift_data_foundation_disaster_recovery_for_openshift_workloads/metro-dr-solution |
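A convenience sketch for the verification steps in the Metro-DR procedures above: the following short shell loop prints the fencing phase reported by every DRCluster and the wide DRPlacementControl status in one pass on the Hub cluster. It is not part of the documented procedure; it only combines the oc queries and status fields (.status.phase, and the CURRENTSTATE and PROGRESSION columns) already shown in the commands above.

#!/bin/bash
# Print the fencing phase (Fenced/Unfenced) reported by each DRCluster,
# then the wide DRPlacementControl status across all namespaces.
for dc in $(oc get drcluster -o name); do
  echo -n "${dc}: "
  oc get "${dc}" -o jsonpath='{.status.phase}{"\n"}'
done
oc get drpc -o wide -A

Run it with access to the Hub cluster; the output mirrors the individual example outputs listed above and can be used as a quick spot-check before and after fencing, failover, or relocate operations.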
Index | Index Symbols @Converter, Implement an annotated converter class A accessing, Accessing message headers , Wrapping the exchange accessors annotating the implementation, Implement an annotated converter class AsyncCallback, Asynchronous processing asynchronous, Asynchronous producer asynchronous producer implementing, How to implement an asynchronous producer AsyncProcessor, Asynchronous processing auto-discovery configuration, Configuring auto-discovery B bean properties, Define bean properties on your component class C Component createEndpoint(), URI parsing definition, The Component interface methods, Component methods component prefix, Component components, Component bean properties, Define bean properties on your component class configuring, Installing and configuring the component implementation steps, Implementation steps installing, Installing and configuring the component interfaces to implement, Which interfaces do you need to implement? parameter injection, Parameter injection Spring configuration, Configure the component in Spring configuration, Configuring auto-discovery configuring, Installing and configuring the component Consumer, Consumer consumers, Consumer event-driven, Event-driven pattern , Implementation steps polling, Polling pattern , Implementation steps scheduled, Scheduled poll pattern , Implementation steps threading, Overview controller, Controller type converter copy(), Exchange methods createConsumer(), Endpoint methods createEndpoint(), URI parsing createExchange(), Endpoint methods , Event-driven endpoint implementation , Producer methods createPollingConsumer(), Endpoint methods , Event-driven endpoint implementation createProducer(), Endpoint methods D DefaultComponent createEndpoint(), URI parsing DefaultEndpoint, Event-driven endpoint implementation createExchange(), Event-driven endpoint implementation createPollingConsumer(), Event-driven endpoint implementation getCamelConext(), Event-driven endpoint implementation getComponent(), Event-driven endpoint implementation getEndpointUri(), Event-driven endpoint implementation definition, The Component interface discovery file, Create a TypeConverter file E Endpoint, Endpoint createConsumer(), Endpoint methods createExchange(), Endpoint methods createPollingConsumer(), Endpoint methods createProducer(), Endpoint methods getCamelContext(), Endpoint methods getEndpointURI(), Endpoint methods interface definition, The Endpoint interface isLenientProperties(), Endpoint methods isSingleton(), Endpoint methods setCamelContext(), Endpoint methods endpoint event-driven, Event-driven endpoint implementation scheduled, Scheduled poll endpoint implementation endpoints, Endpoint event-driven, Event-driven pattern , Implementation steps , Event-driven endpoint implementation Exchange, Exchange , The Exchange interface copy(), Exchange methods getExchangeId(), Exchange methods getIn(), Accessing message headers , Exchange methods getOut(), Exchange methods getPattern(), Exchange methods getProperties(), Exchange methods getProperty(), Exchange methods getUnitOfWork(), Exchange methods removeProperty(), Exchange methods setExchangeId(), Exchange methods setIn(), Exchange methods setOut(), Exchange methods setProperty(), Exchange methods setUnitOfWork(), Exchange methods exchange in capable, Testing the exchange pattern out capable, Testing the exchange pattern exchange properties accessing, Wrapping the exchange accessors ExchangeHelper, The ExchangeHelper Class getContentType(), Get the In message's MIME content 
type getMandatoryHeader(), Accessing message headers , Wrapping the exchange accessors getMandatoryInBody(), Wrapping the exchange accessors getMandatoryOutBody(), Wrapping the exchange accessors getMandatoryProperty(), Wrapping the exchange accessors isInCapable(), Testing the exchange pattern isOutCapable(), Testing the exchange pattern resolveEndpoint(), Resolve an endpoint exchanges, Exchange G getCamelConext(), Event-driven endpoint implementation getCamelContext(), Endpoint methods getComponent(), Event-driven endpoint implementation getContentType(), Get the In message's MIME content type getEndpoint(), Producer methods getEndpointURI(), Endpoint methods getEndpointUri(), Event-driven endpoint implementation getExchangeId(), Exchange methods getHeader(), Accessing message headers getIn(), Accessing message headers , Exchange methods getMandatoryHeader(), Accessing message headers , Wrapping the exchange accessors getMandatoryInBody(), Wrapping the exchange accessors getMandatoryOutBody(), Wrapping the exchange accessors getMandatoryProperty(), Wrapping the exchange accessors getOut(), Exchange methods getPattern(), Exchange methods getProperties(), Exchange methods getProperty(), Exchange methods getUnitOfWork(), Exchange methods I implementation steps, How to implement a type converter , Implementation steps implementing, Implementing the Processor interface , How to implement a synchronous producer , How to implement an asynchronous producer in capable, Testing the exchange pattern in message MIME type, Get the In message's MIME content type installing, Installing and configuring the component interface definition, The Endpoint interface interfaces to implement, Which interfaces do you need to implement? isInCapable(), Testing the exchange pattern isLenientProperties(), Endpoint methods isOutCapable(), Testing the exchange pattern isSingleton(), Endpoint methods M Message, Message getHeader(), Accessing message headers message headers accessing, Accessing message headers messages, Message methods, Component methods MIME type, Get the In message's MIME content type O out capable, Testing the exchange pattern P packaging, Package the type converter parameter injection, Parameter injection performer, Overview pipeline, Pipelining model polling, Polling pattern , Implementation steps process(), Producer methods Processor, Processor interface implementing, Implementing the Processor interface producer, Producer Producer, Producer createExchange(), Producer methods getEndpoint(), Producer methods process(), Producer methods producers asynchronous, Asynchronous producer synchronous, Synchronous producer R removeProperty(), Exchange methods resolveEndpoint(), Resolve an endpoint runtime process, Type conversion process S scheduled, Scheduled poll pattern , Implementation steps , Scheduled poll endpoint implementation ScheduledPollEndpoint, Scheduled poll endpoint implementation setCamelContext(), Endpoint methods setExchangeId(), Exchange methods setIn(), Exchange methods setOut(), Exchange methods setProperty(), Exchange methods setUnitOfWork(), Exchange methods simple processor implementing, Implementing the Processor interface Spring configuration, Configure the component in Spring synchronous, Synchronous producer synchronous producer implementing, How to implement a synchronous producer T threading, Overview type conversion runtime process, Type conversion process type converter annotating the implementation, Implement an annotated converter class controller, Controller type 
converter discovery file, Create a TypeConverter file implementation steps, How to implement a type converter packaging, Package the type converter worker, Controller type converter TypeConverter, Type converter interface TypeConverterLoader, Type converter loader U useIntrospectionOnEndpoint(), Disabling endpoint parameter injection W wire tap pattern, System Management worker, Controller type converter | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/ix01 |
Chapter 5. ConfigMap [v1] | Chapter 5. ConfigMap [v1] Description ConfigMap holds configuration data for pods to consume. Type object 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources binaryData object (string) BinaryData contains the binary data. Each key must consist of alphanumeric characters, '-', '_' or '.'. BinaryData can contain byte sequences that are not in the UTF-8 range. The keys stored in BinaryData must not overlap with the ones in the Data field, this is enforced during validation process. Using this field will require 1.10+ apiserver and kubelet. data object (string) Data contains the configuration data. Each key must consist of alphanumeric characters, '-', '_' or '.'. Values with non-UTF-8 byte sequences must use the BinaryData field. The keys stored in Data must not overlap with the keys in the BinaryData field, this is enforced during validation process. immutable boolean Immutable, if set to true, ensures that data stored in the ConfigMap cannot be updated (only object metadata can be modified). If not set to true, the field can be modified at any time. Defaulted to nil. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 5.2. API endpoints The following API endpoints are available: /api/v1/configmaps GET : list or watch objects of kind ConfigMap /api/v1/watch/configmaps GET : watch individual changes to a list of ConfigMap. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/configmaps DELETE : delete collection of ConfigMap GET : list or watch objects of kind ConfigMap POST : create a ConfigMap /api/v1/watch/namespaces/{namespace}/configmaps GET : watch individual changes to a list of ConfigMap. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/configmaps/{name} DELETE : delete a ConfigMap GET : read the specified ConfigMap PATCH : partially update the specified ConfigMap PUT : replace the specified ConfigMap /api/v1/watch/namespaces/{namespace}/configmaps/{name} GET : watch changes to an object of kind ConfigMap. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 5.2.1. /api/v1/configmaps HTTP method GET Description list or watch objects of kind ConfigMap Table 5.1. HTTP responses HTTP code Reponse body 200 - OK ConfigMapList schema 401 - Unauthorized Empty 5.2.2. /api/v1/watch/configmaps HTTP method GET Description watch individual changes to a list of ConfigMap. deprecated: use the 'watch' parameter with a list operation instead. Table 5.2. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 5.2.3. /api/v1/namespaces/{namespace}/configmaps HTTP method DELETE Description delete collection of ConfigMap Table 5.3. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 5.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind ConfigMap Table 5.5. HTTP responses HTTP code Reponse body 200 - OK ConfigMapList schema 401 - Unauthorized Empty HTTP method POST Description create a ConfigMap Table 5.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.7. Body parameters Parameter Type Description body ConfigMap schema Table 5.8. HTTP responses HTTP code Reponse body 200 - OK ConfigMap schema 201 - Created ConfigMap schema 202 - Accepted ConfigMap schema 401 - Unauthorized Empty 5.2.4. /api/v1/watch/namespaces/{namespace}/configmaps HTTP method GET Description watch individual changes to a list of ConfigMap. deprecated: use the 'watch' parameter with a list operation instead. Table 5.9. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 5.2.5. /api/v1/namespaces/{namespace}/configmaps/{name} Table 5.10. Global path parameters Parameter Type Description name string name of the ConfigMap HTTP method DELETE Description delete a ConfigMap Table 5.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 5.12. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ConfigMap Table 5.13. HTTP responses HTTP code Reponse body 200 - OK ConfigMap schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ConfigMap Table 5.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.15. HTTP responses HTTP code Reponse body 200 - OK ConfigMap schema 201 - Created ConfigMap schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ConfigMap Table 5.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.17. Body parameters Parameter Type Description body ConfigMap schema Table 5.18. HTTP responses HTTP code Reponse body 200 - OK ConfigMap schema 201 - Created ConfigMap schema 401 - Unauthorized Empty 5.2.6. /api/v1/watch/namespaces/{namespace}/configmaps/{name} Table 5.19. Global path parameters Parameter Type Description name string name of the ConfigMap HTTP method GET Description watch changes to an object of kind ConfigMap. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 5.20. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/metadata_apis/configmap-v1 |
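The ConfigMap reference above describes the fields but this extract does not include an example manifest. The following is a minimal sketch, with illustrative names and values (example-config, my-project, and the keys shown are assumptions, not from the source), written as a heredoc in the same style as the earlier shell examples; it exercises the data, binaryData, and immutable fields together.

# Sketch: create a ConfigMap that uses the fields described above.
# All names, keys, and values here are illustrative assumptions.
cat <<EOF | oc apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
  namespace: my-project
data:
  # Keys must consist of alphanumeric characters, '-', '_' or '.'
  app.properties: |
    mode=production
    retries=3
binaryData:
  # Base64-encoded bytes; keys must not overlap with the keys in 'data'
  logo.png: iVBORw0KGgo=
immutable: true
EOF

With immutable set to true, later changes to data or binaryData are rejected and only the object metadata can be modified, which matches the field description in the table above.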
Chapter 3. User tasks | Chapter 3. User tasks 3.1. Creating applications from installed Operators This guide walks developers through an example of creating applications from an installed Operator using the OpenShift Container Platform web console. 3.1.1. Creating an etcd cluster using an Operator This procedure walks through creating a new etcd cluster using the etcd Operator, managed by Operator Lifecycle Manager (OLM). Prerequisites Access to an OpenShift Container Platform 4.14 cluster. The etcd Operator already installed cluster-wide by an administrator. Procedure Create a new project in the OpenShift Container Platform web console for this procedure. This example uses a project called my-etcd . Navigate to the Operators Installed Operators page. The Operators that have been installed to the cluster by the cluster administrator and are available for use are shown here as a list of cluster service versions (CSVs). CSVs are used to launch and manage the software provided by the Operator. Tip You can get this list from the CLI using: USD oc get csv On the Installed Operators page, click the etcd Operator to view more details and available actions. As shown under Provided APIs , this Operator makes available three new resource types, including one for an etcd Cluster (the EtcdCluster resource). These objects work similar to the built-in native Kubernetes ones, such as Deployment or ReplicaSet , but contain logic specific to managing etcd. Create a new etcd cluster: In the etcd Cluster API box, click Create instance . The page allows you to make any modifications to the minimal starting template of an EtcdCluster object, such as the size of the cluster. For now, click Create to finalize. This triggers the Operator to start up the pods, services, and other components of the new etcd cluster. Click the example etcd cluster, then click the Resources tab to see that your project now contains a number of resources created and configured automatically by the Operator. Verify that a Kubernetes service has been created that allows you to access the database from other pods in your project. All users with the edit role in a given project can create, manage, and delete application instances (an etcd cluster, in this example) managed by Operators that have already been created in the project, in a self-service manner, just like a cloud service. If you want to enable additional users with this ability, project administrators can add the role using the following command: USD oc policy add-role-to-user edit <user> -n <target_project> You now have an etcd cluster that will react to failures and rebalance data as pods become unhealthy or are migrated between nodes in the cluster. Most importantly, cluster administrators or developers with proper access can now easily use the database with their applications. 3.2. Installing Operators in your namespace If a cluster administrator has delegated Operator installation permissions to your account, you can install and subscribe an Operator to your namespace in a self-service manner. 3.2.1. Prerequisites A cluster administrator must add certain permissions to your OpenShift Container Platform user account to allow self-service Operator installation to a namespace. See Allowing non-cluster administrators to install Operators for details. 3.2.2. About Operator installation with OperatorHub OperatorHub is a user interface for discovering Operators; it works in conjunction with Operator Lifecycle Manager (OLM), which installs and manages Operators on a cluster. 
As a user with the proper permissions, you can install an Operator from OperatorHub by using the OpenShift Container Platform web console or CLI. During installation, you must determine the following initial settings for the Operator: Installation Mode Choose a specific namespace in which to install the Operator. Update Channel If an Operator is available through multiple channels, you can choose which channel you want to subscribe to. For example, to deploy from the stable channel, if available, select it from the list. Approval Strategy You can choose automatic or manual updates. If you choose automatic updates for an installed Operator, when a new version of that Operator is available in the selected channel, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. If you select manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version. Understanding OperatorHub 3.2.3. Installing from OperatorHub using the web console You can install and subscribe to an Operator from OperatorHub by using the OpenShift Container Platform web console. Prerequisites Access to an OpenShift Container Platform cluster using an account with Operator installation permissions. Procedure Navigate in the web console to the Operators OperatorHub page. Scroll or type a keyword into the Filter by keyword box to find the Operator you want. For example, type advanced to find the Advanced Cluster Management for Kubernetes Operator. You can also filter options by Infrastructure Features . For example, select Disconnected if you want to see Operators that work in disconnected environments, also known as restricted network environments. Select the Operator to display additional information. Note Choosing a Community Operator warns that Red Hat does not certify Community Operators; you must acknowledge the warning before continuing. Read the information about the Operator and click Install . On the Install Operator page: Choose a specific, single namespace in which to install the Operator. The Operator will only watch and be made available for use in this single namespace. If the cluster is in AWS STS mode, enter the Amazon Resource Name (ARN) of the AWS IAM role of your service account in the role ARN field. To create the role's ARN, follow the procedure described in Preparing AWS account . If more than one update channel is available, select an Update channel . Select Automatic or Manual approval strategy, as described earlier. Important If the web console shows that the cluster is in "STS mode", you must set Update approval to Manual . Subscriptions with automatic update approvals are not recommended because there might be permission changes to make prior to updating. Subscriptions with manual update approvals ensure that administrators have the opportunity to verify the permissions of the later version and take any necessary steps prior to update. Click Install to make the Operator available to the selected namespaces on this OpenShift Container Platform cluster. If you selected a Manual approval strategy, the upgrade status of the subscription remains Upgrading until you review and approve the install plan. After approving on the Install Plan page, the subscription upgrade status moves to Up to date . 
If you selected an Automatic approval strategy, the upgrade status should resolve to Up to date without intervention. After the upgrade status of the subscription is Up to date , select Operators Installed Operators to verify that the cluster service version (CSV) of the installed Operator eventually shows up. The Status should ultimately resolve to InstallSucceeded in the relevant namespace. Note For the All namespaces... installation mode, the status resolves to InstallSucceeded in the openshift-operators namespace, but the status is Copied if you check in other namespaces. If it does not: Check the logs in any pods in the openshift-operators project (or other relevant namespace if A specific namespace... installation mode was selected) on the Workloads Pods page that are reporting issues to troubleshoot further. 3.2.4. Installing from OperatorHub using the CLI Instead of using the OpenShift Container Platform web console, you can install an Operator from OperatorHub by using the CLI. Use the oc command to create or update a Subscription object. Prerequisites Access to an OpenShift Container Platform cluster using an account with Operator installation permissions. You have installed the OpenShift CLI ( oc ). Procedure View the list of Operators available to the cluster from OperatorHub: USD oc get packagemanifests -n openshift-marketplace Example output NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m ... couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m ... etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m ... Note the catalog for your desired Operator. Inspect your desired Operator to verify its supported install modes and available channels: USD oc describe packagemanifests <operator_name> -n openshift-marketplace An Operator group, defined by an OperatorGroup object, selects target namespaces in which to generate required RBAC access for all Operators in the same namespace as the Operator group. The namespace to which you subscribe the Operator must have an Operator group that matches the install mode of the Operator, either the AllNamespaces or SingleNamespace mode. If the Operator you intend to install uses the AllNamespaces mode, the openshift-operators namespace already has the appropriate global-operators Operator group in place. However, if the Operator uses the SingleNamespace mode and you do not already have an appropriate Operator group in place, you must create one. Note The web console version of this procedure handles the creation of the OperatorGroup and Subscription objects automatically behind the scenes for you when choosing SingleNamespace mode. You can only have one Operator group per namespace. For more information, see "Operator groups". 
Create an OperatorGroup object YAML file, for example operatorgroup.yaml : Example OperatorGroup object apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> spec: targetNamespaces: - <namespace> Warning Operator Lifecycle Manager (OLM) creates the following cluster roles for each Operator group: <operatorgroup_name>-admin <operatorgroup_name>-edit <operatorgroup_name>-view When you manually create an Operator group, you must specify a unique name that does not conflict with the existing cluster roles or other Operator groups on the cluster. Create the OperatorGroup object: USD oc apply -f operatorgroup.yaml Create a Subscription object YAML file to subscribe a namespace to an Operator, for example sub.yaml : Example Subscription object apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: openshift-operators 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace 5 config: env: 6 - name: ARGS value: "-v=10" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: "Exists" resources: 11 requests: memory: "64Mi" cpu: "250m" limits: memory: "128Mi" cpu: "500m" nodeSelector: 12 foo: bar 1 For default AllNamespaces install mode usage, specify the openshift-operators namespace. Alternatively, you can specify a custom global namespace, if you have created one. Otherwise, specify the relevant single namespace for SingleNamespace install mode usage. 2 Name of the channel to subscribe to. 3 Name of the Operator to subscribe to. 4 Name of the catalog source that provides the Operator. 5 Namespace of the catalog source. Use openshift-marketplace for the default OperatorHub catalog sources. 6 The env parameter defines a list of Environment Variables that must exist in all containers in the pod created by OLM. 7 The envFrom parameter defines a list of sources to populate Environment Variables in the container. 8 The volumes parameter defines a list of Volumes that must exist on the pod created by OLM. 9 The volumeMounts parameter defines a list of volume mounts that must exist in all containers in the pod created by OLM. If a volumeMount references a volume that does not exist, OLM fails to deploy the Operator. 10 The tolerations parameter defines a list of Tolerations for the pod created by OLM. 11 The resources parameter defines resource constraints for all the containers in the pod created by OLM. 12 The nodeSelector parameter defines a NodeSelector for the pod created by OLM. If the cluster is in STS mode, include the following fields in the Subscription object: kind: Subscription # ... spec: installPlanApproval: Manual 1 config: env: - name: ROLEARN value: "<role_arn>" 2 1 Subscriptions with automatic update approvals are not recommended because there might be permission changes to make prior to updating. Subscriptions with manual update approvals ensure that administrators have the opportunity to verify the permissions of the later version and take any necessary steps prior to update. 2 Include the role ARN details. Create the Subscription object: USD oc apply -f sub.yaml At this point, OLM is now aware of the selected Operator. 
A cluster service version (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation. Additional resources Operator groups Channel names 3.2.5. Installing a specific version of an Operator You can install a specific version of an Operator by setting the cluster service version (CSV) in a Subscription object. Prerequisites Access to an OpenShift Container Platform cluster using an account with Operator installation permissions. You have installed the OpenShift CLI ( oc ). Procedure Look up the available versions and channels of the Operator you want to install by running the following command: Command syntax USD oc describe packagemanifests <operator_name> -n <catalog_namespace> For example, the following command prints the available channels and versions of the Red Hat Quay Operator from OperatorHub: Example command USD oc describe packagemanifests quay-operator -n openshift-marketplace Example 3.1. Example output Name: quay-operator Namespace: operator-marketplace Labels: catalog=redhat-operators catalog-namespace=openshift-marketplace hypershift.openshift.io/managed=true operatorframework.io/arch.amd64=supported operatorframework.io/os.linux=supported provider=Red Hat provider-url= Annotations: <none> API Version: packages.operators.coreos.com/v1 Kind: PackageManifest ... Current CSV: quay-operator.v3.7.11 ... Entries: Name: quay-operator.v3.7.11 Version: 3.7.11 Name: quay-operator.v3.7.10 Version: 3.7.10 Name: quay-operator.v3.7.9 Version: 3.7.9 Name: quay-operator.v3.7.8 Version: 3.7.8 Name: quay-operator.v3.7.7 Version: 3.7.7 Name: quay-operator.v3.7.6 Version: 3.7.6 Name: quay-operator.v3.7.5 Version: 3.7.5 Name: quay-operator.v3.7.4 Version: 3.7.4 Name: quay-operator.v3.7.3 Version: 3.7.3 Name: quay-operator.v3.7.2 Version: 3.7.2 Name: quay-operator.v3.7.1 Version: 3.7.1 Name: quay-operator.v3.7.0 Version: 3.7.0 Name: stable-3.7 ... Current CSV: quay-operator.v3.8.5 ... Entries: Name: quay-operator.v3.8.5 Version: 3.8.5 Name: quay-operator.v3.8.4 Version: 3.8.4 Name: quay-operator.v3.8.3 Version: 3.8.3 Name: quay-operator.v3.8.2 Version: 3.8.2 Name: quay-operator.v3.8.1 Version: 3.8.1 Name: quay-operator.v3.8.0 Version: 3.8.0 Name: stable-3.8 Default Channel: stable-3.8 Package Name: quay-operator Tip You can print an Operator's version and channel information in the YAML format by running the following command: USD oc get packagemanifests <operator_name> -n <catalog_namespace> -o yaml If more than one catalog is installed in a namespace, run the following command to look up the available versions and channels of an Operator from a specific catalog: USD oc get packagemanifest \ --selector=catalog=<catalogsource_name> \ --field-selector metadata.name=<operator_name> \ -n <catalog_namespace> -o yaml Important If you do not specify the Operator's catalog, running the oc get packagemanifest and oc describe packagemanifest commands might return a package from an unexpected catalog if the following conditions are met: Multiple catalogs are installed in the same namespace. The catalogs contain the same Operators or Operators with the same name. An Operator group, defined by an OperatorGroup object, selects target namespaces in which to generate required role-based access control (RBAC) access for all Operators in the same namespace as the Operator group. 
The namespace to which you subscribe the Operator must have an Operator group that matches the install mode of the Operator, either the AllNamespaces or SingleNamespace mode. If the Operator you intend to install uses the AllNamespaces mode, then the openshift-operators namespace already has an appropriate Operator group in place. However, if the Operator uses the SingleNamespace mode and you do not already have an appropriate Operator group in place, you must create one: Create an OperatorGroup object YAML file, for example operatorgroup.yaml : Example OperatorGroup object apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> spec: targetNamespaces: - <namespace> Warning Operator Lifecycle Manager (OLM) creates the following cluster roles for each Operator group: <operatorgroup_name>-admin <operatorgroup_name>-edit <operatorgroup_name>-view When you manually create an Operator group, you must specify a unique name that does not conflict with the existing cluster roles or other Operator groups on the cluster. Create the OperatorGroup object: USD oc apply -f operatorgroup.yaml Create a Subscription object YAML file that subscribes a namespace to an Operator with a specific version by setting the startingCSV field. Set the installPlanApproval field to Manual to prevent the Operator from automatically upgrading if a later version exists in the catalog. For example, the following sub.yaml file can be used to install the Red Hat Quay Operator specifically to version 3.7.10: Subscription with a specific starting Operator version apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: quay-operator namespace: quay spec: channel: stable-3.7 installPlanApproval: Manual 1 name: quay-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: quay-operator.v3.7.10 2 1 Set the approval strategy to Manual in case your specified version is superseded by a later version in the catalog. This plan prevents an automatic upgrade to a later version and requires manual approval before the starting CSV can complete the installation. 2 Set a specific version of an Operator CSV. Create the Subscription object: USD oc apply -f sub.yaml Manually approve the pending install plan to complete the Operator installation. Additional resources Manually approving a pending Operator update | [
"oc get csv",
"oc policy add-role-to-user edit <user> -n <target_project>",
"oc get packagemanifests -n openshift-marketplace",
"NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m",
"oc describe packagemanifests <operator_name> -n openshift-marketplace",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> spec: targetNamespaces: - <namespace>",
"oc apply -f operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: openshift-operators 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace 5 config: env: 6 - name: ARGS value: \"-v=10\" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: \"Exists\" resources: 11 requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\" nodeSelector: 12 foo: bar",
"kind: Subscription spec: installPlanApproval: Manual 1 config: env: - name: ROLEARN value: \"<role_arn>\" 2",
"oc apply -f sub.yaml",
"oc describe packagemanifests <operator_name> -n <catalog_namespace>",
"oc describe packagemanifests quay-operator -n openshift-marketplace",
"Name: quay-operator Namespace: operator-marketplace Labels: catalog=redhat-operators catalog-namespace=openshift-marketplace hypershift.openshift.io/managed=true operatorframework.io/arch.amd64=supported operatorframework.io/os.linux=supported provider=Red Hat provider-url= Annotations: <none> API Version: packages.operators.coreos.com/v1 Kind: PackageManifest Current CSV: quay-operator.v3.7.11 Entries: Name: quay-operator.v3.7.11 Version: 3.7.11 Name: quay-operator.v3.7.10 Version: 3.7.10 Name: quay-operator.v3.7.9 Version: 3.7.9 Name: quay-operator.v3.7.8 Version: 3.7.8 Name: quay-operator.v3.7.7 Version: 3.7.7 Name: quay-operator.v3.7.6 Version: 3.7.6 Name: quay-operator.v3.7.5 Version: 3.7.5 Name: quay-operator.v3.7.4 Version: 3.7.4 Name: quay-operator.v3.7.3 Version: 3.7.3 Name: quay-operator.v3.7.2 Version: 3.7.2 Name: quay-operator.v3.7.1 Version: 3.7.1 Name: quay-operator.v3.7.0 Version: 3.7.0 Name: stable-3.7 Current CSV: quay-operator.v3.8.5 Entries: Name: quay-operator.v3.8.5 Version: 3.8.5 Name: quay-operator.v3.8.4 Version: 3.8.4 Name: quay-operator.v3.8.3 Version: 3.8.3 Name: quay-operator.v3.8.2 Version: 3.8.2 Name: quay-operator.v3.8.1 Version: 3.8.1 Name: quay-operator.v3.8.0 Version: 3.8.0 Name: stable-3.8 Default Channel: stable-3.8 Package Name: quay-operator",
"oc get packagemanifests <operator_name> -n <catalog_namespace> -o yaml",
"oc get packagemanifest --selector=catalog=<catalogsource_name> --field-selector metadata.name=<operator_name> -n <catalog_namespace> -o yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> spec: targetNamespaces: - <namespace>",
"oc apply -f operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: quay-operator namespace: quay spec: channel: stable-3.7 installPlanApproval: Manual 1 name: quay-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: quay-operator.v3.7.10 2",
"oc apply -f sub.yaml"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/operators/user-tasks |
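To round off the manual-approval flow described above (for example, when installPlanApproval: Manual is set for STS clusters or for a pinned starting CSV), the pending install plan can also be approved from the CLI. This is a sketch rather than the canonical procedure; the install plan name and namespace are placeholders:

# Find the pending install plan, then mark it approved
oc get installplan -n <namespace>
oc patch installplan <installplan_name> -n <namespace> --type merge --patch '{"spec":{"approved":true}}'

Once approved, OLM proceeds with the installation and the CSV should eventually report the Succeeded phase.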
Chapter 3. Configuring build runs | Chapter 3. Configuring build runs In a BuildRun custom resource (CR), you can define the build reference, build specification, parameter values, service account, output, retention parameters, and volumes to configure a build run. A BuildRun resource is available for use within a namespace. For configuring a build run, create a BuildRun resource YAML file and apply it to the OpenShift Container Platform cluster. 3.1. Configurable fields in build run You can use the following fields in your BuildRun custom resource (CR): Table 3.1. Fields in the BuildRun CR Field Presence Description apiVersion Required Specifies the API version of the resource. For example, shipwright.io/v1beta1 . kind Required Specifies the type of the resource. For example, BuildRun . metadata Required Indicates the metadata that identifies the custom resource definition instance. For example, the name of the BuildRun resource. spec.build.name Optional Specifies an existing Build resource instance to use. You cannot use this field with the spec.build.spec field. spec.build.spec Optional Specifies an embedded Build resource instance to use. You cannot use this field with the spec.build.name field. spec.serviceAccount Optional Indicates the service account to use when building the image. spec.timeout Optional Defines a custom timeout. This field value overwrites the value of the spec.timeout field defined in your Build resource. spec.paramValues Optional Indicates a name-value list to specify values for parameters defined in the build strategy. The parameter value overwrites the value of the parameter that is defined with the same name in your Build resource. spec.output.image Optional Indicates a custom location where the generated image will be pushed. This field value overwrites the value of the output.image field defined in your Build resource. spec.output.pushSecret Optional Indicates an existing secret to get access to the container registry. This secret will be added to the service account along with other secrets requested by the Build resource. spec.env Optional Defines additional environment variables that you can pass to the build container. This field value overrides any environment variables that are specified in the Build resource. The available variables depend on the tool that is used by your build strategy. Note You cannot use the spec.build.name and spec.build.spec fields together in the same CR because they are mutually exclusive. 3.2. Build reference definition You can configure the spec.build.name field in your BuildRun resource to reference a Build resource that indicates an image to build. The following example shows a BuildRun CR that configures the spec.build.name field: apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: buildah-buildrun spec: build: name: buildah-build 3.3. Build specification definition You can embed a complete build specification into your BuildRun resource using the spec.build.spec field. By embedding specifications, you can build an image without creating and maintaining a dedicated Build custom resource. 
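Whichever form you choose, you create the BuildRun resource like any other Kubernetes object and can follow its progress from the command line. A minimal sketch, with the file name and namespace as placeholders:

oc apply -f buildrun.yaml -n <namespace>
# Watch until the SUCCEEDED column reports True or False
oc get buildruns -n <namespace> -w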
The following example shows a BuildRun CR that configures the spec.build.spec field: apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: standalone-buildrun spec: build: spec: source: git: url: https://github.com/shipwright-io/sample-go.git contextDir: source-build strategy: kind: ClusterBuildStrategy name: buildah output: image: <path_to_image> Note You cannot use the spec.build.name and spec.build.spec fields together in the same CR because they are mutually exclusive. 3.4. Parameter values definition for a build run You can specify values for the build strategy parameters in your BuildRun CR. If you have provided a value for a parameter that is also defined in the Build resource with the same name, then the value defined in the BuildRun resource takes priority. In the following example, the value of the cache parameter in the BuildRun resource overrides the value of the cache parameter, which is defined in the Build resource: apiVersion: shipwright.io/v1beta1 kind: Build metadata: name: <your_build> namespace: <your_namespace> spec: paramValues: - name: cache value: disabled strategy: name: <your_strategy> kind: ClusterBuildStrategy source: # ... output: # ... apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: <your_buildrun> namespace: <your_namespace> spec: build: name: <your_build> paramValues: - name: cache value: registry 3.5. Service account definition You can define a service account in your BuildRun resource. The service account hosts all secrets referenced in your Build resource, as shown in the following example: apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: buildah-buildrun spec: build: name: buildah-build serviceAccount: pipeline 1 1 You can also set the value of the spec.serviceAccount field to ".generate" to generate the service account during runtime. The name of the generated service account corresponds with the name of the BuildRun resource. Note When you do not define the service account, the BuildRun resource uses the pipeline service account if it exists in the namespace. Otherwise, the BuildRun resource uses the default service account. 3.6. Retention parameters definition for a build run You can specify the duration for which a completed build run can exist in your BuildRun resource. Retention parameters provide a way to clean your BuildRun instances automatically. You can set the value of the following retention parameters in your BuildRun CR: retention.ttlAfterFailed : Specifies the duration for which a failed build run can exist retention.ttlAfterSucceeded : Specifies the duration for which a successful build run can exist The following example shows how to define retention parameters in your BuildRun CR: apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: buidrun-retention-ttl spec: build: name: build-retention-ttl retention: ttlAfterFailed: 10m ttlAfterSucceeded: 10m Note If you have defined a retention parameter in both BuildRun and Build CRs, the value defined in the BuildRun CR overrides the value of the retention parameter defined in the Build CR. 3.7. Volumes definition for a build run You can define volumes in your BuildRun CR. The defined volumes override the volumes specified in the BuildStrategy resource. If a volume is not overridden, then the build run fails. In case the Build and BuildRun resources override the same volume, the volume defined in the BuildRun resource is used for overriding. 
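Note that any config map or secret referenced by a BuildRun volume must already exist in the namespace before the build run starts. A quick, hedged way to create one for the volumes example that follows, with all names as placeholders:

oc create configmap <configmap_name> --from-literal=LOG_LEVEL=debug -n <namespace>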
The following example shows a BuildRun CR that uses the volumes field: apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: <buildrun_name> spec: build: name: <build_name> volumes: - name: <volume_name> configMap: name: <configmap_name> 3.8. Environment variables definition You can use environment variables in your BuildRun CR based on your needs. The following example shows how to define environment variables: Example: Defining a BuildRun resource with environment variables apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: buildah-buildrun spec: build: name: buildah-build env: - name: <example_var_1> value: "<example_value_1>" - name: <example_var_2> value: "<example_value_2>" The following example shows a BuildRun resource that uses the Kubernetes downward API to expose a pod as an environment variable: Example: Defining a BuildRun resource to expose a pod as an environment variable apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: buildah-buildrun spec: build: name: buildah-build env: - name: <pod_name> valueFrom: fieldRef: fieldPath: metadata.name The following example shows a BuildRun resource that uses the Kubernetes downward API to expose a container as an environment variable: Example: Defining a BuildRun resource to expose a container as an environment variable apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: buildah-buildrun spec: build: name: buildah-build env: - name: MEMORY_LIMIT valueFrom: resourceFieldRef: containerName: <my_container> resource: limits.memory 3.9. Build run status The BuildRun resource updates whenever the image building status changes, as shown in the following examples: Example: BuildRun with Unknown status USD oc get buildrun buildah-buildrun-mp99r NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME buildah-buildrun-mp99r Unknown Unknown 1s Example: BuildRun with True status USD oc get buildrun buildah-buildrun-mp99r NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME buildah-buildrun-mp99r True Succeeded 29m 20m A BuildRun resource stores the status-related information in the status.conditions field. For example, a condition with the type Succeeded indicates that resources have successfully completed their operation. The status.conditions field includes significant information like status, reason, and message for the BuildRun resource. 3.9.1. Build run statuses description A BuildRun custom resource (CR) can have different statuses during the image building process. The following table covers the different statuses of a build run: Table 3.2. Statuses of a build run Status Cause Description Unknown Pending The BuildRun resource waits for a pod in status Pending . Unknown Running The BuildRun resource has been validated and started to perform its work. Unknown BuildRunCanceled The user has requested to cancel the build run. This request triggers the build run controller to make a request for canceling the related task runs. Cancellation is still under process when this status is present. True Succeeded The pod for the BuildRun resource is created. False Failed The BuildRun resource is failed in one of the steps. False BuildRunTimeout The execution of the BuildRun resource is timed out. False UnknownStrategyKind The strategy type defined in the Kind field is unknown. You can define these strategy types: ClusterBuildStrategy and BuildStrategy . False ClusterBuildStrategyNotFound The referenced cluster-scoped strategy was not found in the cluster. 
False BuildStrategyNotFound The referenced namespace-scoped strategy was not found in the cluster. False SetOwnerReferenceFailed Setting the ownerReferences field from the BuildRun resource to the related TaskRun resource failed. False TaskRunIsMissing The TaskRun resource related to the BuildRun resource was not found. False TaskRunGenerationFailed The generation of a TaskRun specification has failed. False MissingParameterValues You have not provided any value for some parameters that are defined in the build strategy without any default. You must provide the values for those parameters in the Build or the BuildRun CR. False RestrictedParametersInUse A value for a system parameter was provided, which is not allowed. False UndefinedParameter A value for a parameter was provided that is not defined in the build strategy. False WrongParameterValueType A value was provided for a build strategy parameter with the wrong type. For example, if the parameter is defined as an array or a string in the build strategy, you must provide a set of values or a direct value accordingly. False InconsistentParameterValues A value for a parameter contained more than one of these values: value , configMapValue , and secretValue . You must provide only one of the mentioned values to maintain consistency. False EmptyArrayItemParameterValues An item inside the values of an array parameter contained none of these values: value , configMapValue , and secretValue . You must provide only one of the mentioned values as null array items are not allowed. False IncompleteConfigMapValueParameterValues A value for a parameter contained a configMapValue value where the name or the value field was empty. You must specify the empty field to point to an existing config map key in your namespace. False IncompleteSecretValueParameterValues A value for a parameter contained a secretValue value where the name or the value field was empty. You must specify the empty field to point to an existing secret key in your namespace. False ServiceAccountNotFound The referenced service account was not found in the cluster. False BuildRegistrationFailed The referenced build in the BuildRun resource is in a Failed state. False BuildNotFound The referenced build in the BuildRun resource was not found. False BuildRunCanceled The BuildRun and related TaskRun resources were canceled successfully. False BuildRunNameInvalid The defined build run name in the metadata.name field is invalid. You must provide a valid label value for the build run name in your BuildRun CR. False BuildRunNoRefOrSpec The BuildRun resource does not have either the spec.build.name or spec.build.spec field defined. False BuildRunAmbiguousBuild The defined BuildRun resource uses both the spec.build.name and spec.build.spec fields. Only one of the parameters is allowed at a time. False BuildRunBuildFieldOverrideForbidden The defined spec.build.name field uses an override in combination with the spec.build.spec field, which is not allowed. Use the spec.build.spec field to directly specify the respective value. False PodEvicted The build run pod was evicted from the node it was running on. 3.9.2. Failed build runs When a build run fails, you can check the status.failureDetails field in your BuildRun CR to identify the exact point where the failure happened in the pod or container. The status.failureDetails field includes an error message and a reason for the failure. You only see the message and reason for failure if they are defined in your build strategy. 
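Besides reading the full YAML, you can pull the failure details out directly with a JSONPath query against the status.failureDetails field described above. A sketch, with the build run name as a placeholder:

oc get buildrun <buildrun_name> -o jsonpath='{.status.failureDetails}'
# Or inspect the condition message and reason in the describe output
oc describe buildrun <buildrun_name>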
The following example shows a failed build run: # ... status: # ... failureDetails: location: container: step-source-default pod: baran-build-buildrun-gzmv5-b7wbf-pod-bbpqr message: The source repository does not exist, or you have insufficient permission to access it. reason: GitRemotePrivate Note The status.failureDetails field also provides error details for all operations related to Git. 3.9.3. Step results in build run status After a BuildRun resource completes its execution, the .status field contains the .status.taskResults result emitted from the steps generated by the build run controller. The result includes the image digest or the commit SHA of the source code that is used for building the image. In a BuildRun resource, the .status.sources field contains the result from the execution of source steps and the .status.output field contains the result from the execution of output steps. The following example shows a BuildRun resource with step results for a Git source: Example: A BuildRun resource with step results for a Git source # ... status: buildSpec: # ... output: digest: sha256:07626e3c7fdd28d5328a8d6df8d29cd3da760c7f5e2070b534f9b880ed093a53 size: 1989004 sources: - name: default git: commitAuthor: xxx xxxxxx commitSha: f25822b85021d02059c9ac8a211ef3804ea8fdde branchName: main The following example shows a BuildRun resource with step results for a local source code: Example: A BuildRun resource with step results for a local source code # ... status: buildSpec: # ... output: digest: sha256:07626e3c7fdd28d5328a8d6df8d29cd3da760c7f5e2070b534f9b880ed093a53 size: 1989004 sources: - name: default bundle: digest: sha256:0f5e2070b534f9b880ed093a537626e3c7fdd28d5328a8d6df8d29cd3da760c7 Note You get to see the digest and size of the output image only if it is defined in your build strategy. 3.9.4. Build snapshot For each build run reconciliation, the buildSpec field in the status of the BuildRun resource updates if an existing task run is part of that build run. During this update, a Build resource snapshot generates and embeds into the status.buildSpec field of the BuildRun resource. Due to this, the buildSpec field contains an exact copy of the original Build specification, which was used to execute a particular image build. By using the build snapshot, you can see the original Build resource configuration. 3.10. Relationship of build run with Tekton tasks The BuildRun resource delegates the task of image construction to the Tekton TaskRun resource, which runs all steps until either the completion of the task, or a failure occurs in the task. During the build run reconciliation, the build run controller generates a new TaskRun resource. The controller embeds the required steps for a build run execution in the TaskRun resource. The embedded steps are defined in your build strategy. 3.11. Build run cancellation You can cancel an active BuildRun instance by setting its state to BuildRunCanceled . When you cancel a BuildRun instance, the underlying TaskRun resource is also marked as canceled. The following example shows a canceled build run for a BuildRun resource: apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: buildah-buildrun spec: # [...] state: "BuildRunCanceled" 3.12. Automatic build run deletion To automatically delete a build run, you can add the following retention parameters in the build or buildrun specification: buildrun TTL parameters: Ensures that build runs only exist for a defined duration of time after completion. 
buildrun.spec.retention.ttlAfterFailed : The build run is deleted if the specified time has passed and the build run has failed. buildrun.spec.retention.ttlAfterSucceeded : The build run is deleted if the specified time has passed and the build run has succeeded. build TTL parameters: Ensures that build runs for a build only exist for a defined duration of time after completion. build.spec.retention.ttlAfterFailed : The build run is deleted if the specified time has passed and the build run has failed for the build. build.spec.retention.ttlAfterSucceeded : The build run is deleted if the specified time has passed and the build run has succeeded for the build. build limit parameters: Ensures that only a limited number of succeeded or failed build runs can exist for a build. build.spec.retention.succeededLimit : Defines the number of succeeded build runs that can exist for the build. build.spec.retention.failedLimit : Defines the number of failed build runs that can exist for the build. | [
"apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: buildah-buildrun spec: build: name: buildah-build",
"apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: standalone-buildrun spec: build: spec: source: git: url: https://github.com/shipwright-io/sample-go.git contextDir: source-build strategy: kind: ClusterBuildStrategy name: buildah output: image: <path_to_image>",
"apiVersion: shipwright.io/v1beta1 kind: Build metadata: name: <your_build> namespace: <your_namespace> spec: paramValues: - name: cache value: disabled strategy: name: <your_strategy> kind: ClusterBuildStrategy source: # output: #",
"apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: <your_buildrun> namespace: <your_namespace> spec: build: name: <your_build> paramValues: - name: cache value: registry",
"apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: buildah-buildrun spec: build: name: buildah-build serviceAccount: pipeline 1",
"apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: buidrun-retention-ttl spec: build: name: build-retention-ttl retention: ttlAfterFailed: 10m ttlAfterSucceeded: 10m",
"apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: <buildrun_name> spec: build: name: <build_name> volumes: - name: <volume_name> configMap: name: <configmap_name>",
"apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: buildah-buildrun spec: build: name: buildah-build env: - name: <example_var_1> value: \"<example_value_1>\" - name: <example_var_2> value: \"<example_value_2>\"",
"apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: buildah-buildrun spec: build: name: buildah-build env: - name: <pod_name> valueFrom: fieldRef: fieldPath: metadata.name",
"apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: buildah-buildrun spec: build: name: buildah-build env: - name: MEMORY_LIMIT valueFrom: resourceFieldRef: containerName: <my_container> resource: limits.memory",
"oc get buildrun buildah-buildrun-mp99r NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME buildah-buildrun-mp99r Unknown Unknown 1s",
"oc get buildrun buildah-buildrun-mp99r NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME buildah-buildrun-mp99r True Succeeded 29m 20m",
"status: # failureDetails: location: container: step-source-default pod: baran-build-buildrun-gzmv5-b7wbf-pod-bbpqr message: The source repository does not exist, or you have insufficient permission to access it. reason: GitRemotePrivate",
"status: buildSpec: # output: digest: sha256:07626e3c7fdd28d5328a8d6df8d29cd3da760c7f5e2070b534f9b880ed093a53 size: 1989004 sources: - name: default git: commitAuthor: xxx xxxxxx commitSha: f25822b85021d02059c9ac8a211ef3804ea8fdde branchName: main",
"status: buildSpec: # output: digest: sha256:07626e3c7fdd28d5328a8d6df8d29cd3da760c7f5e2070b534f9b880ed093a53 size: 1989004 sources: - name: default bundle: digest: sha256:0f5e2070b534f9b880ed093a537626e3c7fdd28d5328a8d6df8d29cd3da760c7",
"apiVersion: shipwright.io/v1beta1 kind: BuildRun metadata: name: buildah-buildrun spec: # [...] state: \"BuildRunCanceled\""
] | https://docs.redhat.com/en/documentation/builds_for_red_hat_openshift/1.1/html/configure/configuring-build-runs |
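As a CLI complement to the cancellation example above, the same spec.state change can be applied to an already running build run with a patch; the resource name below is a placeholder:

oc patch buildrun <buildrun_name> --type merge --patch '{"spec":{"state":"BuildRunCanceled"}}'

As described in the cancellation section, the related TaskRun resource is then marked as canceled as well.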
Chapter 3. Upgrading Red Hat Satellite | Chapter 3. Upgrading Red Hat Satellite Use the following procedures to upgrade your existing Red Hat Satellite to Red Hat Satellite 6.11: Review Section 1.1, "Prerequisites" . Section 3.1, "Upgrading Satellite Server" Section 3.2, "Synchronizing the New Repositories" Section 3.3, "Upgrading Capsule Servers" Section 3.4, "Upgrading Content Hosts" Section 3.6, "Performing Post-Upgrade Tasks" 3.1. Upgrading Satellite Server This section describes how to upgrade Satellite Server from 6.10 to 6.11. You can upgrade from any minor version of Satellite Server 6.10. Before You Begin Note that you can upgrade Capsules separately from Satellite. For more information, see Section 1.4, "Upgrading Capsules Separately from Satellite" . Review and update your firewall configuration prior to upgrading your Satellite Server. For more information, see Preparing your environment for installation in Installing Satellite Server . Ensure that you do not delete the manifest from the Customer Portal or in the Satellite web UI because this removes all the entitlements of your content hosts. If you have edited any of the default job or provisioning templates, back up the files either by cloning or exporting them. Cloning is the recommended method because that prevents them being overwritten in future updates or upgrades. To confirm if a template has been edited, you can view its History before you upgrade or view the changes in the audit log after an upgrade. In the Satellite web UI, navigate to Monitor > Audits and search for the template to see a record of changes made. If you use the export method, restore your changes by comparing the exported template and the default template, manually applying your changes. Capsule Considerations If you use Content Views to control updates to a Capsule Server's base operating system, or for Capsule Server repository, you must publish updated versions of those Content Views. Note that Satellite Server upgraded from 6.10 to 6.11 can use Capsule Servers still at 6.10. Warning If you implemented custom certificates, you must retain the content of both the /root/ssl-build directory and the directory in which you created any source files associated with your custom certificates. Failure to retain these files during an upgrade causes the upgrade to fail. If these files have been deleted, they must be restored from a backup in order for the upgrade to proceed. Upgrade Scenarios To upgrade a Satellite Server connected to the Red Hat Content Delivery Network, proceed to Section 3.1.1, "Upgrading a Connected Satellite Server" . To upgrade a Satellite Server not connected to the Red Hat Content Delivery Network, proceed to Section 3.1.2, "Upgrading a Disconnected Satellite Server" . You cannot upgrade a self-registered Satellite. You must migrate a self-registered Satellite to the Red Hat Content Delivery Network (CDN) and then perform the upgrade. FIPS mode You cannot upgrade Satellite Server from a RHEL base system that is not operating in FIPS mode to a RHEL base system that is operating in FIPS mode. To run Satellite Server on a Red Hat Enterprise Linux base system operating in FIPS mode, you must install Satellite on a freshly provisioned RHEL base system operating in FIPS mode. For more information, see Preparing your environment for installation in Installing Satellite Server . 3.1.1. 
Upgrading a Connected Satellite Server Use this procedure for a Satellite Server with access to the public internet Warning If you customize configuration files, manually or using a tool such as Hiera, these changes are overwritten when the installation script runs during upgrading or updating. You can use the --noop option with the satellite-installer script to test for changes. For more information, see the Red Hat Knowledgebase solution How to use the noop option to check for changes in Satellite config files during an upgrade. Upgrade Satellite Server Stop all Satellite services: Take a snapshot or create a backup: On a virtual machine, take a snapshot. On a physical machine, create a backup. Start all Satellite services: Optional: If you made manual edits to DNS or DHCP configuration in the /etc/zones.conf or /etc/dhcp/dhcpd.conf files, back up the configuration files because the installer only supports one domain or subnet, and therefore restoring changes from these backups might be required. Optional: If you made manual edits to DNS or DHCP configuration files and do not want to overwrite the changes, enter the following command: In the Satellite web UI, navigate to Hosts > Discovered hosts . On the Discovered Hosts page, power off and then delete the discovered hosts. From the Select an Organization menu, select each organization in turn and repeat the process to power off and delete the discovered hosts. Make a note to reboot these hosts when the upgrade is complete. Ensure that the Satellite Maintenance repository is enabled: Check the available versions to confirm the version you want is listed: Use the health check option to determine if the system is ready for upgrade. When prompted, enter the hammer admin user credentials to configure satellite-maintain with hammer credentials. These changes are applied to the /etc/foreman-maintain/foreman-maintain-hammer.yml file. Review the results and address any highlighted error conditions before performing the upgrade. Because of the lengthy upgrade time, use a utility such as tmux to suspend and reattach a communication session. You can then check the upgrade progress without staying connected to the command shell continuously. If you lose connection to the command shell where the upgrade command is running you can see the logged messages in the /var/log/foreman-installer/satellite.log file to check if the process completed successfully. Perform the upgrade: Determine if the system needs a reboot: Check the version of newest installed kernel: Compare this to the version of currently running kernel: Optional: If the newest kernel differs from the currently running kernel, reboot the system: If using a BASH shell, after a successful or failed upgrade, enter: 3.1.2. Upgrading a Disconnected Satellite Server Use this procedure if your Satellite Server is not connected to the Red Hat Content Delivery Network. Warning If you customized configuration files, either manually or using a tool such as Hiera, these changes are overwritten when you enter the satellite-maintain command during upgrading or updating. You can use the --noop option with the satellite-installer command to review the changes that are applied during upgrading or updating. For more information, see the Red Hat Knowledgebase solution How to use the noop option to check for changes in Satellite config files during an upgrade . The hammer import and export commands have been replaced with hammer content-import and hammer content-export tooling. 
If you have scripts that are using hammer content-view version export , hammer content-view version export-legacy , hammer repository export , or their respective import commands, you have to adjust them to use the hammer content-export command instead, along with its respective import command. If you implemented custom certificates, you must retain the content of both the /root/ssl-build directory and the directory in which you created any source files associated with your custom certificates. Failure to retain these files during an upgrade causes the upgrade to fail. If these files have been deleted, they must be restored from a backup in order for the upgrade to proceed. Before You Begin Review and update your firewall configuration before upgrading your Satellite Server. For more information, see Ports and Firewalls Requirements in Installing Satellite Server in a Disconnected Network Environment . Ensure that you do not delete the manifest from the Customer Portal or in the Satellite web UI because this removes all the entitlements of your content hosts. Back up and remove all Foreman hooks before upgrading. Reinstate hooks only after Satellite is known to be working after the upgrade is complete. Upgrade Disconnected Satellite Server Stop all Satellite services: Take a snapshot or create a backup: On a virtual machine, take a snapshot. On a physical machine, create a backup. Start all Satellite services: A pre-upgrade script is available to detect conflicts and list hosts which have duplicate entries in Satellite Server that can be unregistered and deleted after upgrade. In addition, it will detect hosts which are not assigned to an organization. If a host is listed under Hosts > All hosts without an organization association and if a content host with same name has an organization already associated with it then the content host will automatically be unregistered. This can be avoided by associating such hosts to an organization before upgrading. Run the pre-upgrade check script to get a list of hosts that can be deleted after upgrading. If any unassociated hosts are found, associating them to an organization before upgrading is recommended. Optional: If you made manual edits to DNS or DHCP configuration in the /etc/zones.conf or /etc/dhcp/dhcpd.conf files, back up the configuration files because the installer only supports one domain or subnet, and therefore restoring changes from these backups might be required. Optional: If you made manual edits to DNS or DHCP configuration files and do not want to overwrite the changes, enter the following command: In the Satellite web UI, navigate to Hosts > Discovered hosts . If there are discovered hosts available, turn them off and then delete all entries under the Discovered hosts page. Select all other organizations in turn using the organization setting menu and repeat this action as required. Reboot these hosts after the upgrade has completed. Make sure all external Capsule Servers are assigned to an organization, otherwise they might get unregistered due to host-unification changes. Remove old repositories: Stop Satellite services: Obtain the latest ISO files by following the Downloading the Binary DVD Images procedure in Installing Satellite Server in a Disconnected Network Environment . 
Create directories to serve as a mount point, mount the ISO images, and configure the rhel7-server or the rhel8 repository: For Red Hat Enterprise Linux 8 Follow the Configuring the Base Operating System with Offline Repositories in RHEL 8 procedure in Installing Satellite Server in a Disconnected Network Environment . For Red Hat Enterprise Linux 7 Follow the Configuring the Base Operating System with Offline Repositories in RHEL 7 procedure in Installing Satellite Server in a Disconnected Network Environment . Do not install or update any packages at this stage. Configure the Satellite 6.11 repository from the ISO file. Copy the ISO file's repository data file for the Red Hat Satellite packages: Edit the /etc/yum.repos.d/satellite.repo file: Change the default InstallMedia repository name to Satellite-6.11 : Add the baseurl directive: Configure the Red Hat Satellite Maintenance repository from the ISO file. Copy the ISO file's repository data file for Red Hat Satellite Maintenance packages: Edit the /etc/yum.repos.d/satellite-maintenance.repo file: Change the default InstallMedia repository name to Satellite-Maintenance : Add the baseurl directive: If your Satellite runs on Red Hat Enterprise Linux 7, configure the Ansible repository from the ISO file. Copy the ISO file's repository data file for Ansible packages: Edit the /etc/yum.repos.d/ansible.repo file: Change the default InstallMedia repository name to Ansible : Add the baseurl directive: If your Satellite runs on Red Hat Enterprise Linux 7, configure the Red Hat Software Collections repository from the ISO file. Copy the ISO file's repository data file for Red Hat Software Collections packages: Edit the /etc/yum.repos.d/RHSCL.repo file: Change the default InstallMedia repository name to RHSCL : Add the baseurl directive: Optional: If you have applied custom Apache server configurations, note that the custom configurations are reverted to the installation defaults when you perform the upgrade. To preview the changes that are applied during the upgrade, enter the satellite-installer command with the --noop (no operation) option. These changes are applied when you enter the satellite-maintain upgrade command in a following step. Add the following line to the /etc/httpd/conf/httpd.conf configuration file. Restart the httpd service. Start the postgresql database services. Enter the satellite-installer command with the --noop option: Review the /var/log/foreman-installer/satellite.log to preview the changes that are applied during the upgrade. Locate the +++ and --- symbols that indicate the changes to the configurations files. Although entering the satellite-installer command with the --noop option does not apply any changes to your Satellite, some Puppet resources in the module expect changes to be applied and might display failure messages. Stop Satellite services: Because of the lengthy upgrade time, use a utility such as tmux to suspend and reattach a communication session. You can then check the upgrade progress without staying connected to the command shell continuously. If you lose connection to the command shell where the upgrade command is running you can see the logs in /var/log/foreman-installer/satellite.log to check if the process completed successfully. Check the available versions to confirm the version you want is listed: Use the health check option to determine if the system is ready for upgrade. When prompted, enter the hammer admin user credentials to configure satellite-maintain with hammer credentials. 
These changes are applied to the /etc/foreman-maintain/foreman-maintain-hammer.yml file. Review the results and address any highlighted error conditions before performing the upgrade. Perform the upgrade: If the script fails due to missing or outdated packages, you must download and install these separately. For more information, see Resolving Package Dependency Errors in Installing Satellite Server in a Disconnected Network Environment . If using a BASH shell, after a successful or failed upgrade, enter: Check when the kernel packages were last updated: Optional: If a kernel update occurred since the last reboot, stop Satellite services and reboot the system: Optional: If you made manual edits to DNS or DHCP configuration files, check and restore any changes required to the DNS and DHCP configuration files using the backups that you made. If you make changes in the step, restart Satellite services: If you have the OpenSCAP plug-in installed, but do not have the default OpenSCAP content available, enter the following command. In the Satellite web UI, go to Configure > Discovery Rules and associate selected organizations and locations with discovery rules. 3.2. Synchronizing the New Repositories You must enable and synchronize the new 6.11 repositories before you can upgrade Capsule Servers and Satellite clients. Procedure In the Satellite web UI, navigate to Content > Red Hat Repositories . Toggle the Recommended Repositories switch to the On position. From the list of results, expand the following repositories and click the Enable icon to enable the repositories: To upgrade Satellite clients, enable the Red Hat Satellite Client 6 repositories for all Red Hat Enterprise Linux versions that clients use. If you have Capsule Servers, to upgrade them, enable the following repositories too: Red Hat Satellite Capsule 6.11 (for RHEL 7 Server) (RPMs) Red Hat Satellite Maintenance 6.11 (for RHEL 7 Server) (RPMs) Red Hat Ansible Engine 2.9 RPMs for Red Hat Enterprise Linux 7 Server Red Hat Software Collections RPMs for Red Hat Enterprise Linux 7 Server Note If the 6.11 repositories are not available, refresh the Red Hat Subscription Manifest. In the Satellite web UI, navigate to Content > Subscriptions , click Manage Manifest , then click Refresh . In the Satellite web UI, navigate to Content > Sync Status . Click the arrow to the product to view the available repositories. Select the repositories for 6.11. Note that Red Hat Satellite Client 6 does not have a 6.11 version. Choose Red Hat Satellite Client 6 instead. Click Synchronize Now . Important If an error occurs when you try to synchronize a repository, refresh the manifest. If the problem persists, raise a support request. Do not delete the manifest from the Customer Portal or in the Satellite web UI; this removes all the entitlements of your content hosts. If you use Content Views to control updates to the base operating system of Capsule Server, update those Content Views with new repositories, publish, and promote their updated versions. For more information, see Managing Content Views in the Content Management Guide . 3.3. Upgrading Capsule Servers This section describes how to upgrade Capsule Servers from 6.10 to 6.11. Before You Begin You must upgrade Satellite Server before you can upgrade any Capsule Servers. Note that you can upgrade Capsules separately from Satellite. For more information, see Section 1.4, "Upgrading Capsules Separately from Satellite" . 
Ensure the Red Hat Satellite Capsule 6.11 repository is enabled in Satellite Server and synchronized. Ensure that you synchronize the required repositories on Satellite Server. For more information, see Section 3.2, "Synchronizing the New Repositories" . If you use Content Views to control updates to the base operating system of Capsule Server, update those Content Views with new repositories, publish, and promote their updated versions. For more information, see Managing Content Views in the Content Management Guide . Ensure the Capsule's base system is registered to the newly upgraded Satellite Server. Ensure the Capsule has the correct organization and location settings in the newly upgraded Satellite Server. Review and update your firewall configuration prior to upgrading your Capsule Server. For more information, see Preparing Your Environment for Capsule Installation in Installing Capsule Server . Warning If you implemented custom certificates, you must retain the content of both the /root/ssl-build directory and the directory in which you created any source files associated with your custom certificates. Failure to retain these files during an upgrade causes the upgrade to fail. If these files have been deleted, they must be restored from a backup in order for the upgrade to proceed. Upgrading Capsule Servers Create a backup. On a virtual machine, take a snapshot. On a physical machine, create a backup. For information on backups, see Backing Up Satellite Server and Capsule Server in the Administering Red Hat Satellite 6.10 guide. Regenerate certificates on your Satellite Server: Regenerate certificates for Capsules that use default certificates: For Capsule Servers that do not use load balancing: For Capsule Servers that are load-balanced: Regenerate certificates for Capsules that use custom certificates: For Capsule Servers that do not use load balancing: For Capsule Servers that are load-balanced: For more information on custom SSL certificates signed by a Certificate Authority, see Deploying a Custom SSL Certificate to Capsule Server in Installing Capsule Server . Copy the resulting tarball to your Capsule. The location must match what the installer expects. Use grep tar_file /etc/foreman-installer/scenarios.d/capsule-answers.yaml on your Capsule to determine this. Clean yum cache: Ensure Capsule has access to rhel-7-server-satellite-maintenance-6.11-rpms and update satellite-maintain. On Capsule Server, verify that the foreman_url setting points to the Satellite FQDN: Check the available versions to confirm the version you want is listed: Because of the lengthy upgrade time, use a utility such as tmux to suspend and reattach a communication session. You can then check the upgrade progress without staying connected to the command shell continuously. If you lose connection to the command shell where the upgrade command is running, you can see the logged messages in the /var/log/foreman-installer/capsule.log file to check if the process completed successfully. Use the health check option to determine if the system is ready for upgrade: Review the results and address any highlighted error conditions before performing the upgrade. Perform the upgrade: Check when the kernel packages were last updated: Optional: If a kernel update occurred since the last reboot, reboot the system: Optional: If you made manual edits to DNS or DHCP configuration files, check and restore any changes required to the DNS and DHCP configuration files using the backups made earlier. 
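For orientation only: the Capsule upgrade steps above refer to several commands whose exact text is not reproduced here. A typical sequence on the Capsule might look like the following sketch, assuming a 6.11 target version; consult the official procedure for the authoritative commands:

# Confirm the target version is available, run the health check, then upgrade
satellite-maintain upgrade list-versions
satellite-maintain upgrade check --target-version 6.11
satellite-maintain upgrade run --target-version 6.11
# Check whether a kernel update was installed since the last reboot
rpm -qa --last | grep kernel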
Optional: If you use custom repositories, ensure that you enable these custom repositories after the upgrade completes. 3.4. Upgrading Content Hosts The Satellite Client 6 repository provides katello-agent and katello-host-tools , which provide communication services for managing Errata. Note The Katello agent is deprecated and will be removed in a future Satellite version. Migrate your workloads to use the remote execution feature to update clients remotely. For more information, see Migrating from Katello Agent to Remote Execution in the Managing Hosts Guide . For deployments using katello-agent and goferd , update all clients to the new version of katello-agent . For deployments not using katello-agent and goferd , update all clients to the new version of katello-host-tools . Complete this action as soon as possible so that your clients are fully compatible with Satellite Server. Prerequisites You must have upgraded Satellite Server. You must have enabled the new Satellite Client 6 repositories on the Satellite. You must have synchronized the new repositories in the Satellite. If you have not previously installed katello-agent on your clients and you want to install it, use the manual method. For more information, see CLI Procedure . Warning If you implemented custom certificates, you must retain the content of both the /root/ssl-build directory and the directory in which you created any source files associated with your custom certificates. Failure to retain these files during an upgrade causes the upgrade to fail. If these files have been deleted, they must be restored from a backup in order for the upgrade to proceed. Procedure In the Satellite web UI, navigate to Hosts > Content Hosts and select the Content Hosts that you want to upgrade. From the Select Action list, select Manage Repository Sets . From the Repository Sets Management list, select the Red Hat Satellite Tools 6.10 checkbox. From the Select Action list, select Override to Disabled , and click Done . When the process completes, on the same set of hosts from the steps, from the Select Action list, select Manage Repository Sets . From the Repository Sets Management list, select the Red Hat Satellite Client 6 checkbox. From the Select Action list, select Override to Enabled , and click Done . When the process completes, on the same set of hosts from the steps, from the Select Action list, select Manage Packages . In the Package search field, enter one of the following options depending on your configuration: If your deployment uses katello-agent and goferd , enter katello-agent . If your deployment does not use katello-agent and goferd , enter katello-host-tools . From the Update list, you must select the via remote execution option. This is required because if you update the package using the Katello agent, the package update disrupts the communication between the client and Satellite or Capsule Server, which causes the update to fail. For more information, see Configuring and Setting Up Remote Jobs in the Managing Hosts guide. CLI Procedure Log into the client system. Disable the repositories for the version of Satellite. Enable the Satellite Client 6 repository for this version of Satellite. Depending on your configuration, complete one of the following steps: If your deployment uses katello-agent and goferd , enter the following command to install or upgrade katello-agent : If your deployment does not use katello-agent and goferd , enter the following command to install or upgrade katello-host-tools : 3.5. 
Upgrading the External Database You can upgrade an external database from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8 while upgrading Satellite from 6.10 to 6.11. Prerequisites Create a new Red Hat Enterprise Linux 8 based host for the PostgreSQL server by following the external database documentation for Red Hat Enterprise Linux 8. For more information, see Using External Databases with Satellite . Procedure Create a backup. Restore the backup on the new server. If Satellite reaches the new database server via the old name, no further changes are required. Otherwise, reconfigure Satellite to use the new name: 3.6. Performing Post-Upgrade Tasks Some of the procedures in this section are optional. You can choose to perform only those procedures that are relevant to your installation. 3.6.1. Upgrading Discovery If you use the PXE-based discovery process, then you must complete the discovery upgrade procedure on Satellite and on any Capsule Server with hosts that you want to be listed in Satellite on the Hosts > Discovered hosts page. This section describes updating the PXELinux template and the boot image passed to hosts that use PXE booting to register themselves with Satellite Server. From Satellite 6.11, provisioning templates now have a separate association with a subnet, and do not default to using the TFTP Capsule for that subnet. If you create subnets after the upgrade, you must specifically enable the Satellite or a Capsule to provide a proxy service for discovery templates and then configure all subnets with discovered hosts to use a specific template Capsule . During the upgrade, for every subnet with a TFTP proxy enabled, the template Capsule is set to be the same as the TFTP Capsule. After the upgrade, check all subnets to verify this was set correctly. These procedures are not required if you do not use PXE booting of hosts to enable Satellite to discover new hosts. Additional resources For information about configuring the Discovery service, see Configuring the Discovery Service in Provisioning Hosts . 3.6.1.1. Upgrading Discovery on Satellite Server Update the Discovery template in the Satellite web UI: In the Satellite web UI, navigate to Hosts > Provisioning templates . On the PXELinux global default line, click Clone . Enter a new name for the template in the Name field, for example ACME PXE global default . In the template editor field, change the line ONTIMEOUT local to ONTIMEOUT discovery and click Submit . In the Satellite web UI, navigate to Administer > Settings . On the Provisioning tab, set Default PXE global template entry to a custom value for your environment. Locate Global default PXELinux template and click on its Value . Select the name of the newly created template from the menu and click Submit . In the Satellite web UI, navigate to Hosts > Provisioning templates . Click Build PXE Default , then click OK . Note If the template is modified, a Satellite upgrade overrides it to its default version. Once the PXE Default configuration is built, the template configured in the Settings is deployed to the TFTP. This can result in deploying the default template if the new template is not correctly set in the Settings . In the Satellite web UI, go to Configure > Discovery Rules and associate selected organizations and locations with discovery rules. 3.6.2. Upgrading Discovery on Capsule Servers Verify that the Foreman Discovery package is current on Satellite Server. If an update occurred in the previous step, restart the satellite-maintain services.
Upgrade the Discovery image on the Satellite Capsule that is either connected to the provisioning network with discovered hosts or provides TFTP services for discovered hosts. On the same instance, install the package which provides the Proxy service, and then restart the foreman-proxy service. In the Satellite web UI, go to Infrastructure > Capsules and verify that the relevant Capsule lists Discovery in the features column. Select Refresh from the Actions drop-down menu if necessary. Go to Infrastructure > Subnets and for each subnet on which you want to use discovery: Click the subnet name. On the Capsules tab, ensure the Discovery Capsule is set to a Capsule you configured above. 3.6.2.1. Verifying Subnets have a Template Capsule If the Templates feature is enabled in your environment, ensure all subnets with discovered hosts have a template Capsule: In the Satellite web UI, navigate to Infrastructure > Subnets . Select the subnet you want to check. On the Capsules tab, ensure a Template Capsule has been set for this subnet. For more information about configuring subnets with template Capsules, see Configuring the Discovery Service in the Provisioning guide. 3.6.3. Upgrading virt-who If virt-who is installed on Satellite Server or a Capsule Server, it will be upgraded when they are upgraded. No further action is required. If virt-who is installed elsewhere, it must be upgraded manually. Before You Begin If virt-who is installed on a host registered to Satellite Server or a Capsule Server, first upgrade the host to the latest packages available in the Satellite Client 6 repository. For information about upgrading hosts, see Section 3.4, "Upgrading Content Hosts" . Upgrade virt-who Manually Upgrade virt-who. Restart the virt-who service so the new version is activated. 3.6.4. Removing the Previous Version of the Satellite Tools Repository After completing the upgrade to Satellite 6.11, the Red Hat Satellite Tools 6.10 repository can be removed from Content Views and then disabled. Disable Version 6.10 of the Satellite Tools Repository: In the Satellite web UI, navigate to Content > Red Hat Repositories . In the Enabled Repositories area, locate Red Hat Satellite Tools 6.10 for RHEL 7 Server RPMs x86_64 . Click the Disable icon to the right. If the repository is still contained in a Content View, then you cannot disable it. Packages from a disabled repository are removed automatically by a scheduled task. 3.6.5. Migrating Ansible Content The upgrade from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8 includes an upgrade from Ansible Engine 2.9 to Ansible Core 2.12. If you have custom Ansible content such as playbooks, job templates inside REX, roles and collections on disk, and you rely on modules being delivered by the Ansible RPM on Satellite, you have to take additional steps to adapt your Ansible installation or migrate your Ansible content. Ansible Core contains only essential modules. In terms of FQCN notation namespace.collection.module , you can continue to use ansible.builtin.* , but everything else is missing in Ansible Core. That means you will no longer be able to use non-builtin Ansible modules as you were used to, and you will eventually have to get them from another source. You have the following options to handle your Ansible content after the upgrade: You can obtain additional community-maintained collections that provide the non-essential functionality from Ansible Galaxy. For more information, see Installing collections in the Galaxy User Guide .
Note that Red Hat does not provide support for this content. If you have a subscription for Red Hat Automation Hub , you can configure your ansible-galaxy to talk to the Automation Hub server and download content from there. That content is supported by Red Hat. For more information on configuring Automation Hub connection for ansible-galaxy , see Configuring Red Hat automation hub as the primary source for content . You can rewrite your Ansible roles, templates and other affected content. Note that Red Hat does not provide support for content that you maintain yourself. Note If you want to download and install Ansible content on a Capsule that does not have a connection to an external Ansible Galaxy server, then you must pass the content through Satellite Server instead of using the URL of the Ansible Galaxy server in the configuration on the Capsule directly: Sync the content from an Ansible Galaxy server to a custom repository on your Satellite Server. Configure Ansible on your Capsule to download the content from Satellite Server. Additional resources Updates to using Ansible in RHEL 8.6 and 9.0 Using Ansible in RHEL 8.6 and later Release Notes for Red Hat Enterprise Linux 8.6 3.6.6. Reclaiming PostgreSQL Space The PostgreSQL database can use a large amount of disk space, especially in heavily loaded deployments. Use this procedure to reclaim some of this disk space on Satellite. Procedure Stop all services, except for the postgresql service: Switch to the postgres user and reclaim space on the database: Start the other services when the vacuum completes: 3.6.7. Updating Templates, Parameters, Lookup Keys and Values During the upgrade process, Satellite attempts to locate macros that are deprecated for Satellite 6.11 and converts old syntax to new syntax for the default Satellite templates, parameters, and lookup keys and values. However, Satellite does not convert old syntax in cloned templates and in custom job or provisioning templates that you have created. The process uses simple text replacement, for example: Warning If you use cloned templates in Satellite, verify whether the cloned templates have diverged from the latest version of the original templates in Satellite. The syntax for the same template can differ between versions of Satellite. If your cloned templates contain outdated syntax, update the syntax to match the latest version of the template. To ensure that this text replacement does not break or omit any variables in your files during the upgrade, check all templates, parameters, and lookup keys and values for the old syntax and replace it manually. The following error occurs because of old syntax remaining in files after the upgrade: Fixing the outdated subscription_manager_registration snippet Satellite 6.4 onwards uses the redhat_register snippet instead of the subscription_manager_registration snippet. If you upgrade from Satellite 6.3 and earlier, you must replace the subscription_manager_registration snippet in your custom job or provisioning templates as follows: 3.6.8. Tuning Satellite Server with Predefined Profiles If your Satellite deployment includes more than 5000 hosts, you can use predefined tuning profiles to improve performance of Satellite. Note that you cannot use tuning profiles on Capsules. You can choose one of the profiles depending on the number of hosts your Satellite manages and available hardware resources. The tuning profiles are available in the /usr/share/foreman-installer/config/foreman.hiera/tuning/sizes directory.
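As an illustration only, you can inspect the shipped profile definitions before choosing one. The commands below are a minimal sketch; the file name medium.yaml is an assumption based on the profile names listed later in this section and may differ between Satellite versions:
# List the available tuning profile definitions
ls /usr/share/foreman-installer/config/foreman.hiera/tuning/sizes/
# Preview the settings that a given profile would apply, for example the medium profile
cat /usr/share/foreman-installer/config/foreman.hiera/tuning/sizes/medium.yaml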
When you run the satellite-installer command with the --tuning option, deployment configuration settings are applied to Satellite in the following order: The default tuning profile defined in the /usr/share/foreman-installer/config/foreman.hiera/tuning/common.yaml file The tuning profile that you want to apply to your deployment and is defined in the /usr/share/foreman-installer/config/foreman.hiera/tuning/sizes/ directory Optional: If you have configured a /etc/foreman-installer/custom-hiera.yaml file, Satellite applies these configuration settings. Note that the configuration settings that are defined in the /etc/foreman-installer/custom-hiera.yaml file override the configuration settings that are defined in the tuning profiles. Therefore, before applying a tuning profile, you must compare the configuration settings that are defined in the default tuning profile in /usr/share/foreman-installer/config/foreman.hiera/tuning/common.yaml , the tuning profile that you want to apply and your /etc/foreman-installer/custom-hiera.yaml file, and remove any duplicated configuration from the /etc/foreman-installer/custom-hiera.yaml file. default Number of managed hosts: 0 - 5000 RAM: 20G Number of CPU cores: 4 medium Number of managed hosts: 5001 - 10000 RAM: 32G Number of CPU cores: 8 large Number of managed hosts: 10001 - 20000 RAM: 64G Number of CPU cores: 16 extra-large Number of managed hosts: 20001 - 60000 RAM: 128G Number of CPU cores: 32 extra-extra-large Number of managed hosts: 60000+ RAM: 256G Number of CPU cores: 48+ Procedure Optional: If you have configured the custom-hiera.yaml file on Satellite Server, back up the /etc/foreman-installer/custom-hiera.yaml file to custom-hiera.original . You can use the backup file to restore the /etc/foreman-installer/custom-hiera.yaml file to its original state if it becomes corrupted: Optional: If you have configured the custom-hiera.yaml file on Satellite Server, review the definitions of the default tuning profile in /usr/share/foreman-installer/config/foreman.hiera/tuning/common.yaml and the tuning profile that you want to apply in /usr/share/foreman-installer/config/foreman.hiera/tuning/sizes/ . Compare the configuration entries against the entries in your /etc/foreman-installer/custom-hiera.yaml file and remove any duplicated configuration settings in your /etc/foreman-installer/custom-hiera.yaml file. Enter the satellite-installer command with the --tuning option for the profile that you want to apply. For example, to apply the medium tuning profile settings, enter the following command: | [
"satellite-maintain service stop",
"satellite-maintain service start",
"satellite-installer --foreman-proxy-dns-managed=false --foreman-proxy-dhcp-managed=false",
"subscription-manager repos --enable rhel-7-server-satellite-maintenance-6.11-rpms",
"satellite-maintain upgrade list-versions",
"satellite-maintain upgrade check --target-version 6.11",
"satellite-maintain upgrade run --target-version 6.11",
"rpm --query --last kernel | head -n 1",
"uname --kernel-release",
"reboot",
"hash -d satellite-maintain service 2> /dev/null",
"satellite-maintain service stop",
"satellite-maintain service start",
"foreman-rake katello:upgrade_check",
"satellite-installer --foreman-proxy-dns-managed=false --foreman-proxy-dhcp-managed=false",
"rm /etc/yum.repos.d/*",
"satellite-maintain service stop",
"cp /media/sat6/Satellite/media.repo /etc/yum.repos.d/satellite.repo",
"vi /etc/yum.repos.d/satellite.repo",
"[Satellite-6.11]",
"baseurl=file:///media/sat6/Satellite",
"cp /media/sat6/Maintenance/media.repo /etc/yum.repos.d/satellite-maintenance.repo",
"vi /etc/yum.repos.d/satellite-maintenance.repo",
"[Satellite-Maintenance]",
"baseurl=file:///media/sat6/Maintenance/",
"cp /media/sat6/ansible/media.repo /etc/yum.repos.d/ansible.repo",
"vi /etc/yum.repos.d/ansible.repo",
"[Ansible]",
"baseurl=file:///media/sat6/ansible/",
"cp /media/sat6/RHSCL/media.repo /etc/yum.repos.d/RHSCL.repo",
"vi /etc/yum.repos.d/RHSCL.repo",
"[RHSCL]",
"baseurl=file:///media/sat6/RHSCL/",
"Include /etc/httpd/conf.modules.d/*.conf",
"systemctl restart httpd",
"systemctl start postgresql",
"satellite-installer --scenario satellite --verbose --noop",
"satellite-maintain service stop",
"satellite-maintain upgrade list-versions",
"satellite-maintain upgrade check --target-version 6.11 --whitelist=\"repositories-validate,repositories-setup\"",
"satellite-maintain upgrade run --target-version 6.11 --whitelist=\"repositories-validate,repositories-setup\"",
"hash -d satellite-maintain service 2> /dev/null",
"rpm -qa --last | grep kernel",
"satellite-maintain service stop reboot",
"satellite-maintain service restart",
"foreman-rake foreman_openscap:bulk_upload:default",
"capsule-certs-generate --foreman-proxy-fqdn \"_capsule.example.com_\" --certs-update-all --certs-tar \"~/_capsule.example.com-certs.tar_\"",
"capsule-certs-generate --foreman-proxy-fqdn \"_capsule.example.com_\" --certs-update-all --foreman-proxy-cname \"_load-balancer.example.com_\" --certs-tar \"~/_capsule.example.com-certs.tar_\"",
"capsule-certs-generate --foreman-proxy-fqdn \"_capsule.example.com_\" --certs-tar \"~/_capsule.example.com-certs.tar_\" --server-cert \"/root/capsule_cert/_capsule_cert.pem_\" --server-key \"/root/capsule_cert/_capsule_cert_key.pem_\" --server-ca-cert \"/root/capsule_cert/_ca_cert_bundle.pem_\" --certs-update-server",
"capsule-certs-generate --foreman-proxy-fqdn \"_capsule.example.com_\" --certs-tar \"~/_capsule.example.com-certs.tar_\" --server-cert \"/root/capsule_cert/_capsule_cert.pem_\" --server-key \"/root/capsule_cert/_capsule_cert_key.pem_\" --server-ca-cert \"/root/capsule_cert/_ca_cert_bundle.pem_\" --foreman-proxy-cname \"_load-balancer.example.com_\" --certs-update-server",
"yum clean metadata",
"subscription-manager repos --enable rhel-7-server-satellite-maintenance-6.11-rpms yum --disableplugin=foreman-protector update rubygem-foreman_maintain satellite-maintain",
"grep foreman_url /etc/foreman-proxy/settings.yml",
"satellite-maintain upgrade list-versions",
"satellite-maintain upgrade check --target-version 6.11",
"satellite-maintain upgrade run --target-version 6.11",
"rpm -qa --last | grep kernel",
"reboot",
"subscription-manager repos --disable rhel-7-server-satellite-tools-6.10-rpms",
"subscription-manager repos --enable=rhel-7-server-satellite-client-6-rpms",
"yum install katello-agent",
"yum install katello-host-tools",
"satellite-installer --foreman-db-host newpostgres.example.com --katello-candlepin-db-host newpostgres.example.com --foreman-proxy-content-pulpcore-postgresql-host newpostgres.example.com",
"satellite-maintain packages install tfm-rubygem-foreman_discovery",
"satellite-maintain service restart",
"satellite-maintain packages install foreman-discovery-image",
"satellite-maintain packages install tfm-rubygem-smart_proxy_discovery service foreman-proxy restart",
"yum upgrade virt-who",
"systemctl restart virt-who.service",
"satellite-maintain service stop --exclude postgresql",
"su - postgres -c 'vacuumdb --full --all'",
"satellite-maintain service start",
"@host.params['parameter1'] -> host_param('parameter1') @host.param_true?('parameter1') -> host_param_true?('parameter1') @host.param_false?('parameter1') -> host_param_false?('parameter1') @host.info['parameters'] -> host_enc['parameters']",
"undefined method '#params' for Host::Managed::Jail",
"<%= snippet \"subscription_manager_registration\" %> v <%= snippet 'redhat_register' %>",
"cp /etc/foreman-installer/custom-hiera.yaml /etc/foreman-installer/custom-hiera.original",
"satellite-installer --tuning medium"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/upgrading_and_updating_red_hat_satellite/upgrading_satellite |
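As an optional check that is not part of the procedure in Section 3.4, you can confirm on a client that the expected package is present after the update; rpm -q is a standard package query and should be run on the client, not on Satellite Server:
# Deployments that use katello-agent and goferd
rpm -q katello-agent
# Deployments that do not use goferd
rpm -q katello-host-tools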
Chapter 4. View OpenShift Data Foundation Topology | Chapter 4. View OpenShift Data Foundation Topology The topology shows the mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you interact with these layers. The view also shows how the various elements compose the Storage cluster altogether. Procedure On the OpenShift Web Console, navigate to Storage > Data Foundation > Topology . The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status and health or indication for alerts. Choose a node to view node details on the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the modal's upper left corner to close and return to the previous view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pods information. This tab provides a deeper understanding of the problems and offers granularity that aids in better troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/deploying_openshift_data_foundation_on_any_platform/viewing-odf-topology_mcg-verify
Appendix A. Versioning information | Appendix A. Versioning information Documentation last updated on Thursday, March 14th, 2024. | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/designing_your_decision_management_architecture_for_red_hat_process_automation_manager/versioning-information |
Chapter 97. Protobuf Jackson | Chapter 97. Protobuf Jackson Jackson Protobuf is a Data Format which uses the Jackson library with the Protobuf extension to unmarshal a Protobuf payload into Java objects or to marshal Java objects into a Protobuf payload. Note If you are familiar with Jackson, this Protobuf data format behaves in the same way as its JSON counterpart, and thus can be used with classes annotated for JSON serialization/deserialization. from("kafka:topic"). unmarshal().protobuf(ProtobufLibrary.Jackson, JsonNode.class). to("log:info"); 97.1. Dependencies When using protobuf-jackson with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jackson-protobuf-starter</artifactId> </dependency> 97.2. Configuring the SchemaResolver Since Protobuf serialization is schema-based, this data format requires that you provide a SchemaResolver object that is able to lookup the schema for each exchange that is going to be marshalled/unmarshalled. You can add a single SchemaResolver to the registry and it will be looked up automatically. Or you can explicitly specify the reference to a custom SchemaResolver. 97.3. Protobuf Jackson Options The Protobuf Jackson dataformat supports 18 options, which are listed below. Name Default Java Type Description contentTypeHeader Boolean Whether the data format should set the Content-Type header with the type from the data format. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSON. objectMapper String Lookup and use the existing ObjectMapper with the given id when using Jackson. useDefaultObjectMapper Boolean Whether to lookup and use default Jackson ObjectMapper from the registry. unmarshalType String Class name of the java type to use when unmarshalling. jsonView String When marshalling a POJO to JSON you might want to exclude certain fields from the JSON output. With Jackson you can use JSON views to accomplish this. This option is to refer to the class which has JsonView annotations. include String If you want to marshal a pojo to JSON, and the pojo has some fields with null values. And you want to skip these null values, you can set this option to NON_NULL. allowJmsType Boolean Used for JMS users to allow the JMSType header from the JMS spec to specify a FQN classname to use to unmarshal to. collectionType String Refers to a custom collection type to lookup in the registry to use. This option should rarely be used, but allows to use different collection types than java.util.Collection based as default. useList Boolean To unmarshal to a List of Map or a List of Pojo. moduleClassNames String To use custom Jackson modules com.fasterxml.jackson.databind.Module specified as a String with FQN class names. Multiple classes can be separated by comma. moduleRefs String To use custom Jackson modules referred from the Camel registry. Multiple modules can be separated by comma. enableFeatures String Set of features to enable on the Jackson com.fasterxml.jackson.databind.ObjectMapper. The features should be a name that matches a enum from com.fasterxml.jackson.databind.SerializationFeature, com.fasterxml.jackson.databind.DeserializationFeature, or com.fasterxml.jackson.databind.MapperFeature Multiple features can be separated by comma. disableFeatures String Set of features to disable on the Jackson com.fasterxml.jackson.databind.ObjectMapper. 
The features should be a name that matches a enum from com.fasterxml.jackson.databind.SerializationFeature, com.fasterxml.jackson.databind.DeserializationFeature, or com.fasterxml.jackson.databind.MapperFeature Multiple features can be separated by comma. allowUnmarshallType Boolean If enabled then Jackson is allowed to attempt to use the CamelJacksonUnmarshalType header during the unmarshalling. This should only be enabled when desired to be used. timezone String If set then Jackson will use the Timezone when marshalling/unmarshalling. autoDiscoverObjectMapper Boolean If set to true then Jackson will lookup for an objectMapper into the registry. schemaResolver String Optional schema resolver used to lookup schemas for the data in transit. autoDiscoverSchemaResolver Boolean When not disabled, the SchemaResolver will be looked up into the registry. 97.4. Using custom ProtobufMapper You can configure JacksonProtobufDataFormat to use a custom ProtobufMapper in case you need more control of the mapping configuration. If you setup a single ProtobufMapper in the registry, then Camel will automatic lookup and use this ProtobufMapper . 97.5. Spring Boot Auto-Configuration The component supports 19 options, which are listed below. Name Description Default Type camel.dataformat.protobuf-jackson.allow-jms-type Used for JMS users to allow the JMSType header from the JMS spec to specify a FQN classname to use to unmarshal to. false Boolean camel.dataformat.protobuf-jackson.allow-unmarshall-type If enabled then Jackson is allowed to attempt to use the CamelJacksonUnmarshalType header during the unmarshalling. This should only be enabled when desired to be used. false Boolean camel.dataformat.protobuf-jackson.auto-discover-object-mapper If set to true then Jackson will lookup for an objectMapper into the registry. false Boolean camel.dataformat.protobuf-jackson.auto-discover-schema-resolver When not disabled, the SchemaResolver will be looked up into the registry. true Boolean camel.dataformat.protobuf-jackson.collection-type Refers to a custom collection type to lookup in the registry to use. This option should rarely be used, but allows to use different collection types than java.util.Collection based as default. String camel.dataformat.protobuf-jackson.content-type-header Whether the data format should set the Content-Type header with the type from the data format. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSON. true Boolean camel.dataformat.protobuf-jackson.disable-features Set of features to disable on the Jackson com.fasterxml.jackson.databind.ObjectMapper. The features should be a name that matches a enum from com.fasterxml.jackson.databind.SerializationFeature, com.fasterxml.jackson.databind.DeserializationFeature, or com.fasterxml.jackson.databind.MapperFeature Multiple features can be separated by comma. String camel.dataformat.protobuf-jackson.enable-features Set of features to enable on the Jackson com.fasterxml.jackson.databind.ObjectMapper. The features should be a name that matches a enum from com.fasterxml.jackson.databind.SerializationFeature, com.fasterxml.jackson.databind.DeserializationFeature, or com.fasterxml.jackson.databind.MapperFeature Multiple features can be separated by comma. String camel.dataformat.protobuf-jackson.enabled Whether to enable auto configuration of the protobuf-jackson data format. This is enabled by default. 
Boolean camel.dataformat.protobuf-jackson.include If you want to marshal a pojo to JSON, and the pojo has some fields with null values. And you want to skip these null values, you can set this option to NON_NULL. String camel.dataformat.protobuf-jackson.json-view When marshalling a POJO to JSON you might want to exclude certain fields from the JSON output. With Jackson you can use JSON views to accomplish this. This option is to refer to the class which has JsonView annotations. String camel.dataformat.protobuf-jackson.module-class-names To use custom Jackson modules com.fasterxml.jackson.databind.Module specified as a String with FQN class names. Multiple classes can be separated by comma. String camel.dataformat.protobuf-jackson.module-refs To use custom Jackson modules referred from the Camel registry. Multiple modules can be separated by comma. String camel.dataformat.protobuf-jackson.object-mapper Lookup and use the existing ObjectMapper with the given id when using Jackson. String camel.dataformat.protobuf-jackson.schema-resolver Optional schema resolver used to lookup schemas for the data in transit. String camel.dataformat.protobuf-jackson.timezone If set then Jackson will use the Timezone when marshalling/unmarshalling. String camel.dataformat.protobuf-jackson.unmarshal-type Class name of the java type to use when unmarshalling. String camel.dataformat.protobuf-jackson.use-default-object-mapper Whether to lookup and use default Jackson ObjectMapper from the registry. true Boolean camel.dataformat.protobuf-jackson.use-list To unmarshal to a List of Map or a List of Pojo. false Boolean | [
"from(\"kafka:topic\"). unmarshal().protobuf(ProtobufLibrary.Jackson, JsonNode.class). to(\"log:info\");",
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jackson-protobuf-starter</artifactId> </dependency>"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-protobuf-jackson-dataformat-starter |
Chapter 4. Initial Load Balancer Configuration with Keepalived | Chapter 4. Initial Load Balancer Configuration with Keepalived After installing Load Balancer packages, you must take some basic steps to set up the LVS router and the real servers for use with Keepalived. This chapter covers these initial steps in detail. 4.1. A Basic Keepalived configuration In this basic example, two systems are configured as load balancers. LB1 (Active) and LB2 (Backup) will be routing requests for a pool of four Web servers running httpd with real IP addresses numbered 192.168.1.20 to 192.168.1.23, sharing a virtual IP address of 10.0.0.1. Each load balancer has two interfaces ( eth0 and eth1 ), one for handling external Internet traffic, and the other for routing requests to the real servers. The load balancing algorithm used is Round Robin, and the routing method will be Network Address Translation. 4.1.1. Creating the keepalived.conf file Keepalived is configured by means of the keepalived.conf file in each system configured as a load balancer. To create a load balancer topology like the example shown in Section 4.1, "A Basic Keepalived configuration" , use a text editor to open keepalived.conf in both the active and backup load balancers, LB1 and LB2. For example: A basic load balanced system with the configuration as detailed in Section 4.1, "A Basic Keepalived configuration" has a keepalived.conf file as explained in the following code sections. In this example, the keepalived.conf file is the same on both the active and backup routers with the exception of the VRRP instance, as noted in Section 4.1.1.2, "VRRP Instance" 4.1.1.1. Global Definitions The Global Definitions section of the keepalived.conf file allows administrators to specify notification details when changes to the load balancer occur. Note that the Global Definitions are optional and are not required for Keepalived configuration. This section of the keepalived.conf file is the same on both LB1 and LB2. The notification_email is the administrator of the load balancer, while the notification_email_from is an address that sends the load balancer state changes. The SMTP specific configuration specifies the mail server from which the notifications are mailed. 4.1.1.2. VRRP Instance The following examples show the vrrp_sync_group stanza of the keepalived.conf file in the master router and the backup router. Note that the state and priority values differ between the two systems. The following example shows the vrrp_sync_group stanza for the keepalived.conf file in LB1, the master router. The following example shows the vrrp_sync_group stanza of the keepalived.conf file for LB2, the backup router. In these examples, the vrrp_sync_group stanza defines the VRRP group that stays together through any state changes (such as failover). There is an instance defined for the external interface that communicates with the Internet (RH_EXT), as well as one for the internal interface (RH_INT). The vrrp_instance line details the virtual interface configuration for the VRRP service daemon, which creates virtual IP instances. The state MASTER designates the active server, the state BACKUP designates the backup server. The interface parameter assigns the physical interface name to this particular virtual IP instance. virtual_router_id is a numerical identifier for the Virtual Router instance. It must be the same on all LVS Router systems participating in this Virtual Router.
It is used to differentiate multiple instances of keepalived running on the same network interface. The priority specifies the order in which the assigned interface takes over in a failover; the higher the number, the higher the priority. This priority value must be within the range of 0 to 255, and the Load Balancing server configured as state MASTER should have a priority value set to a higher number than the priority value of the server configured as state BACKUP . The authentication block specifies the authentication type ( auth_type ) and password ( auth_pass ) used to authenticate servers for failover synchronization. PASS specifies password authentication; Keepalived also supports AH , or Authentication Headers, for connection integrity. Finally, the virtual_ipaddress option specifies the interface virtual IP address. 4.1.1.3. Virtual Server Definitions The Virtual Server definitions section of the keepalived.conf file is the same on both LB1 and LB2. In this block, the virtual_server is configured first with the IP address. Then a delay_loop configures the amount of time (in seconds) between health checks. The lb_algo option specifies the kind of algorithm used for availability (in this case, rr for Round-Robin; for a list of possible lb_algo values see Table 4.1, "lb_algo Values for Virtual Server" ). The lb_kind option determines the routing method; in this case, Network Address Translation (or nat ) is used. After configuring the Virtual Server details, the real_server options are configured, again by specifying the IP Address first. The TCP_CHECK stanza checks for availability of the real server using TCP. The connect_timeout configures the time in seconds before a timeout occurs. Note Accessing the virtual IP from the load balancers or one of the real servers is not supported. Likewise, configuring a load balancer on the same machines as a real server is not supported. Table 4.1. lb_algo Values for Virtual Server Algorithm Name lb_algo value Round-Robin rr Weighted Round-Robin wrr Least-Connection lc Weighted Least-Connection wlc Locality-Based Least-Connection lblc Locality-Based Least-Connection Scheduling with Replication lblcr Destination Hash dh Source Hash sh Shortest Expected Delay sed Never Queue nq | [
"vi /etc/keepalived/keepalived.conf",
"global_defs { notification_email { [email protected] } notification_email_from [email protected] smtp_server 127.0.0.1 smtp_connect_timeout 60 }",
"vrrp_sync_group VG1 { group { RH_EXT RH_INT } } vrrp_instance RH_EXT { state MASTER interface eth0 virtual_router_id 50 priority 100 advert_int 1 authentication { auth_type PASS auth_pass passw123 } virtual_ipaddress { 10.0.0.1 } } vrrp_instance RH_INT { state MASTER interface eth1 virtual_router_id 2 priority 100 advert_int 1 authentication { auth_type PASS auth_pass passw123 } virtual_ipaddress { 192.168.1.1 } }",
"vrrp_sync_group VG1 { group { RH_EXT RH_INT } } vrrp_instance RH_EXT { state BACKUP interface eth0 virtual_router_id 50 priority 99 advert_int 1 authentication { auth_type PASS auth_pass passw123 } virtual_ipaddress { 10.0.0.1 } } vrrp_instance RH_INT { state BACKUP interface eth1 virtual_router_id 2 priority 99 advert_int 1 authentication { auth_type PASS auth_pass passw123 } virtual_ipaddress { 192.168.1.1 } }",
"virtual_server 10.0.0.1 80 { delay_loop 6 lb_algo rr lb_kind NAT protocol TCP real_server 192.168.1.20 80 { TCP_CHECK { connect_timeout 10 } } real_server 192.168.1.21 80 { TCP_CHECK { connect_timeout 10 } } real_server 192.168.1.22 80 { TCP_CHECK { connect_timeout 10 } } real_server 192.168.1.23 80 { TCP_CHECK { connect_timeout 10 } } }"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/load_balancer_administration/ch-initial-setup-VSA |
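As an optional check that is not part of the procedure above, you can verify which load balancer currently holds the virtual IP after Keepalived starts; the interface ( eth0 ) and address ( 10.0.0.1 ) below are the values assumed in the example configuration:
# Confirm the keepalived service is running on LB1 and LB2
systemctl status keepalived
# On the MASTER node, the virtual IP 10.0.0.1 should be listed on eth0
ip addr show eth0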
Chapter 114. AclRuleTransactionalIdResource schema reference | Chapter 114. AclRuleTransactionalIdResource schema reference Used in: AclRule The type property is a discriminator that distinguishes use of the AclRuleTransactionalIdResource type from AclRuleTopicResource , AclRuleGroupResource , AclRuleClusterResource . It must have the value transactionalId for the type AclRuleTransactionalIdResource . Property Property type Description type string Must be transactionalId . name string Name of resource for which given ACL rule applies. Can be combined with patternType field to use prefix pattern. patternType string (one of [prefix, literal]) Describes the pattern used in the resource field. The supported types are literal and prefix . With literal pattern type, the resource field will be used as a definition of a full name. With prefix pattern type, the resource name will be used only as a prefix. Default value is literal . | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-aclruletransactionalidresource-reference |
A.14. OProfile | A.14. OProfile OProfile is a low overhead, system-wide performance monitoring tool provided by the oprofile package. It uses the performance monitoring hardware on the processor to retrieve information about the kernel and executables on the system, such as when memory is referenced, the number of second-level cache requests, and the number of hardware interrupts received. OProfile is also able to profile applications that run in a Java Virtual Machine (JVM). OProfile provides the following tools. Note that the legacy opcontrol tool and the new operf tool are mutually exclusive. ophelp Displays available events for the system's processor along with a brief description of each. opimport Converts sample database files from a foreign binary format to the native format for the system. Only use this option when analyzing a sample database from a different architecture. opannotate Creates annotated source for an executable if the application was compiled with debugging symbols. opcontrol Configures which data is collected in a profiling run. operf Intended to replace opcontrol . The operf tool uses the Linux Performance Events subsystem, allowing you to target your profiling more precisely, as a single process or system-wide, and allowing OProfile to co-exist better with other tools using the performance monitoring hardware on your system. Unlike opcontrol , no initial setup is required, and it can be used without the root privileges unless the --system-wide option is in use. opreport Retrieves profile data. oprofiled Runs as a daemon to periodically write sample data to disk. Legacy mode ( opcontrol , oprofiled , and post-processing tools) remains available, but is no longer the recommended profiling method. For further information about any of these commands, see the OProfile man page: | [
"man oprofile"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/performance_tuning_guide/sect-Red_Hat_Enterprise_Linux-Performance_Tuning_Guide-Tool_Reference-OProfile |
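A minimal operf workflow is sketched below for illustration; ./your_app is a placeholder for any binary you want to profile, and the commands assume the default sample directory ( oprofile_data ) in the current working directory:
# Profile a single process with operf (no initial setup required)
operf ./your_app
# Summarize the collected samples
opreport
# Show per-symbol sample counts for the profiled binary
opreport --symbols ./your_app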
function::ipmib_local_addr | function::ipmib_local_addr Name function::ipmib_local_addr - Get the local ip address Synopsis Arguments skb pointer to a struct sk_buff SourceIsLocal flag to indicate whether local operation Description Returns the local ip address skb . | [
"ipmib_local_addr:long(skb:long,SourceIsLocal:long)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-ipmib-local-addr |
Chapter 7. Bug fixes | Chapter 7. Bug fixes This part describes bugs fixed in Red Hat Enterprise Linux 8.5 that have a significant impact on users. 7.1. Installer and image creation RHEL installation no longer aborts when Insights client fails to register system Previously, the RHEL installation failed with an error at the end if the Red Hat Insights client failed to register the system during the installation. With this update, the system completes the installation even if the insights client fails. The user is notified about the error during installation so the error can be handled later independently. ( BZ#1931069 ) Anaconda allows data encryption for automatically created disk layout in the custom partitioning screen Previously, requesting encrypted disk layout when the disk layout was automatically created in the custom partitioning screen was not possible. With this update, Anaconda provides an option on the custom partitioning screen to encrypt the automatically created disk layout. ( BZ#1903786 ) Installation program does not attempt automatic partitioning when partitioning scheme is not specified in the Kickstart file When using a Kickstart file to perform an automated installation, the installation program does not attempt to perform automatic partitioning when you do not specify any partitioning scheme in the Kickstart file. The installation process is interrupted and allows the user to configure the partitioning. (BZ#1954408) RHEL-Edge container image now uses nginx and serves on port 8080 Previously, the edge-container image type was unable to run in non-root mode. As a result, Red Hat OpenShift 4 was unable to use the edge-container image type. With this enhancement, the container now uses nginx HTTP server to serve the commit and a configuration file that allows the server to run as a non-root user inside the container, enabling its use on Red Hat OpenShift 4. The internal web server now uses the port 8080 instead of 80 . ( BZ#1945238 ) 7.2. Shells and command-line tools opal-prd rebased to version 6.7.1 opal-prd has been upgraded to version 6.7.1. Notable bug fixes and enhancements include: Fixed xscom error logging issues caused due to xscom OPAL call. Fixed possible deadlock with the DEBUG build. Fallback to full_reboot if fast-reboot fails in core/platform . Fixed next_ungarded_primary in core/cpu . Improved rate limit timer requests and the timer state in Self-Boot Engine (SBE). (BZ#1921665) libservicelog rebased to version 1.1.19 libservicelog has been upgraded to version 1.1.19. Notable bug fixes and enhancements include: Fixed output alignment issue. Fixed segfault on servicelog_open() failure. (BZ#1844430) ipmitool sol activate command no longer crashes Previously, after upgrading from RHEL 7 to RHEL 8 the ipmitool sol activate command would crash while trying to access the remote console on an IBM DataPower appliance. With this update, the bug has been fixed and one can use ipmitool to access the remote console again. ( BZ#1951480 ) Relax-and-Recover (ReaR) package now depends on the bootlist executable Previously, ReaR could produce a rescue image without the bootlist executable on the IBM Power Systems, Little Endian architecture. Consequently, if the powerpc-utils-core package is not installed, the rescue image did not contain the bootlist executable. With this update, the ReaR package now depends on the bootlist executable. The dependency ensures that the bootlist executable is present. ReaR does not create a rescue image if the bootlist executable is missing. 
This avoids creating an invalid rescue image. ( BZ#1983013 ) rsync with an unprivileged remote user can now be used in ReaR Previously, when rsync was used to back up and restore the system data (BACKUP=RSYNC) , the parameters to rsync were incorrectly quoted, and the --fake-super parameter was not passed to the remote rsync process. Consequently, the file metadata was not correctly saved and restored. With this update, the following bugs have been fixed: ReaR uses the correct parameters for rsync. Improved rsync code for error detection during backup and restore: If there is an rsync error detected during the backup, ReaR aborts with an error message. If there is an rsync error detected during the restore, ReaR displays a warning message. In the /etc/rear/local.conf file set BACKUP_INTEGRITY_CHECK=1 to turn the warning into an error message. ( BZ#1930662 ) Loss of backup data on network shares when using ReaR does not occur anymore Previously, when a network file system like NFS was used to store the ReaR backups, in case of an error ReaR removed the directory where the NFS was mounted. Consequently, this caused backup data loss. With this update, ReaR now uses a new method to unmount network shares. This new method does not remove the content of the mounted filesystem when it removes the mount point. The loss of backup data on network shares when using ReaR is now fixed. ( BZ#1958247 ) ReaR can now be used to back up and recover machines that use ESP Previously, ReaR did not create Extensible Firmware Interface (EFI) entries when software RAID (MDRAID) is used for the EFI System Partition on machines with Unified Extensible Firmware Interface (UEFI) firmware. When a system with UEFI firmware and an EFI System Partition on software RAID was recovered using ReaR, the recovered system was unbootable and required manual intervention to fix the boot EFI variables. With this update, the support for creating boot EFI entries for software RAID devices is added to ReaR. ReaR can now be used to back up and recover machines that use EFI System Partition (ESP) on software RAID, without manual post-recovery intervention. ( BZ#1958222 ) /etc/slp.spi file added to openslp package Previously, the /etc/slp.spi file was missing in the openslp package. Consequently, the /usr/bin/slptool command did not generate output. With this update, /etc/slp.spi has been added to openslp . ( BZ#1965649 ) IBM Power Systems, Little Endian architecture machines with multipath can now be safely recovered using ReaR Previously, the /sys file system was not mounted in the chroot when ReaR was recovering the system. The ofpathname executable on the IBM Power Systems, Little Endian architecture failed when installing the boot loader. Consequently, the error remained undetected and the recovered system was unbootable. With this update, ReaR now mounts the /sys file system in the recovery chroot. ReaR ensures that ofpathname is present in the rescue system on Power Systems, Little Endian architecture machines. ( BZ#1983003 ) The which utility no longer aborts with a syntax error message when used with an alias Previously, when you tried to use the which command with an alias, for example, A=B which ls , the which utility aborted with the syntax error message bash: syntax error near unexpected token `(' . This bug has been fixed, and which correctly displays the full path of the command without an error message. (BZ#1940468) 7.3.
Infrastructure services Permissions of the /var/lib/chrony directory have changed Previously, enterprise security scanners would flag the /var/lib/chrony directory for having world-readable and executable permissions. With this update, the permissions of the /var/lib/chrony directory have changed to limit access only to the root and chrony users. ( BZ#1939295 ) 7.4. Security GnuTLS no longer rejects SHA-1-signed CAs if they are explicitly trusted Previously, the GnuTLS library checked signature hash strength of all certificate authorities (CA) even if the CA was explicitly trusted. As a consequence, chains containing CAs signed with the SHA-1 algorithm were rejected with the error message certificate's signature hash strength is unacceptable . With this update, GnuTLS excludes trusted CAs from the signature hash strength checks and therefore no longer rejects certificate chains containing CAs even if they are signed using weak algorithms. ( BZ#1965445 ) Hardware optimization enabled in FIPS mode Previously, the Federal Information Processing Standard (FIPS 140-2) did not allow using hardware optimization. Therefore, the operation was disabled in the libgcrypt package when in FIPS mode. This update enables hardware optimization in FIPS mode, and as a result, all cryptographic operations are performed faster. ( BZ#1976137 ) leftikeport and rightikeport options work correctly Previously, Libreswan ignored the leftikeport and rightikeport options in any host-to-host Libreswan connections. As a consequence, Libreswan used the default ports regardless of any non-default options settings. With this update, the issue is now fixed and you can use leftikeport and rightikeport connection options over the default options. ( BZ#1934058 ) SELinux policy did not allow GDM to set the GRUB boot_success flag Previously, SELinux policy did not allow the GNOME Display Manager (GDM) to set the GRUB boot_success flag during the power-off and reboot operations. Consequently, the GRUB menu appeared on the boot. With this update, the SELinux policy introduces a new xdm_exec_bootloader boolean that allows the GDM to set the GRUB boot_success flag, and which is enabled by default. As a result, the GRUB boot menu is shown on the first boot and the flicker-free boot support feature works correctly. ( BZ#1994096 ) selinux-policy now supports IPsec-based VPNs using TCP encapsulation Since RHEL 8.4, the libreswan packages have supported IPsec-based VPNs using TCP encapsulation, but the selinux-policy package did not reflect this update. As a consequence, when Libreswan was configured to use TCP, the ipsec service failed to bind to the given TCP port. With this update to the selinux-policy package, the ipsec service can bind and connect to the commonly used TCP port 4500 , and therefore you can use TCP encapsulation in IPsec-based VPNs. ( BZ#1931848 ) SELinux policy now prevents staff_u users from switching to unconfined_r Previously, when the secure_mode boolean was enabled, staff_u users could incorrectly switch to the unconfined_r role. As a consequence, staff_u users could perform privileged operations affecting the security of the system. With this fix, SELinux policy prevents staff_u users from switching to the unconfined_r role using the newrole command. As a result, unprivileged users cannot run privileged operations. ( BZ#1947841 ) OSCAP Anaconda Addon now handles customized profiles Previously, the OSCAP Anaconda Addon plugin did not correctly handle security profiles with customizations in separate files.
Consequently, the customized profiles were not available in the RHEL graphical installation even when you specified them in the corresponding Kickstart section. The handling has been fixed, and you can use customized SCAP profiles in the RHEL graphical installation. (BZ#1691305) OpenSCAP no longer fails during evaluation of the STIG profile and other SCAP content Previously, initialization of the cryptography library in OpenSCAP was not performed properly in OpenSCAP, specifically in the filehash58 probe. As a consequence, a segmentation fault occurred while evaluating SCAP content containing the filehash58_test Open Vulnerability Assessment Language (OVAL) test. This affected in particular the evaluation of the STIG profile for Red Hat Enterprise Linux 8. The evaluation failed unexpectedly and results were not generated. The process of initializing libraries has been fixed in the new version of the openscap package. As a result, OpenSCAP no longer fails during the evaluation of the STIG profile for RHEL 8 and other SCAP content that contains the filehash58_test OVAL test. ( BZ#1959570 ) Ansible updates banner files only when needed Previously, the playbook used for banner remediation always removed the file and recreated it. As a consequence, the banner file inodes were always modified regardless of need. With this update, the Ansible remediation playbook has been improved to use the copy module, which first compares existing content with the intended content and only updates the file when needed. As a result, banner files are only updated when the existing content differs from the intended content. ( BZ#1857179 ) USB devices now work correctly with the DISA STIG profile Previously, the DISA STIG profile enabled the USBGuard service but did not configure any initially connected USB devices. Consequently, the USBGuard service blocked any device that was not specifically allowed. This made some USB devices, such as smart cards, unreachable. With this update, the initial USBGuard configuration is generated when applying the DISA STIG profile and allows the use of any connected USB device. As a result, USB devices are not blocked and work correctly. ( BZ#1946252 ) OSCAP Anaconda Addon now installs all selected packages in text mode Previously, the OSCAP Anaconda Addon plugin did not evaluate rules that required certain partition layout or package installations and removals before the installation started when running in text mode. Consequently, when a security policy profile was specified using Kickstart and the installation was running in text mode, any additional packages required by a selected security profile were not installed. OSCAP Anaconda Addon now performs the required checks before the installation starts regardless of whether the installation is graphical or text-based, and all selected packages are installed also in text mode. ( BZ#1674001 ) rpm_verify_permissions removed from the CIS profile The rpm_verify_permissions rule, which compares file permissions to package default permissions, has been removed from the Center for Internet Security (CIS) Red Hat Enterprise Linux 8 Benchmark. With this update, the CIS profile is aligned with the CIS RHEL 8 benchmark, and as a result, this rule no longer affects users who harden their systems according to CIS. ( BZ#1843913 ) 7.5. 
Kernel A revert of upstream patch allows some systemd services and user-space workloads to run as expected The backported upstream change to the mknod() system call caused the open() system call to be more privileged with respect to device nodes than mknod() . Consequently, multiple user-space workloads and some systemd services in containers became unresponsive. With this update, the incorrect behavior has been reverted and no crashes occur any more. (BZ#1902543) Improved performance regression in memory accounting operations Previously, a slab memory controller was increasing the frequency of memory accounting operations per slab. Consequently, a performance regression occurred due to an increased number of memory accounting operations. To fix the problem, the memory accounting operations have been streamlined to use as much caching and as few atomic operations as possible. As a result, a slight performance regression still remains. However, the user experience is much better. (BZ#1959772) Hard lockups and system panic no longer occur when issuing multiple SysRq-T magic keys Issuing multiple SysRq-T magic key sequences to a system caused an interrupt to be disabled for an extended period of time, depending on the serial console speed, and on the volume of information being printed out. This prolonged disabled-interrupt time often resulted in a hard lockup followed by a system panic. This update changes the SysRq-T key sequence handling to substantially reduce the period when the interrupt is disabled. As a result, no hard lockups or system panic occur in the described scenario. (BZ#1954363) Certain BCC utilities do not display the "macro redefined" warning anymore Macro redefinitions in some compiler-specific kernel headers caused some BPF Compiler Collection (BCC) utilities to display the following zero-impact warning: With this update, the problem has been fixed by removing the macro redefinitions. As a result, the relevant BCC utilities no longer display the warning in this scenario. (BZ#1907271) kdump no longer fails to dump vmcore on SSH or NFS targets Previously, when configuring a network interface card (NIC) port to a static IP address and setting kdump to dump vmcore on SSH or NFS dump targets, the kdump service started with the following error message: Consequently, a kdump on SSH or NFS dump targets eventually failed. This update fixes the problem and the kexec-tools utility no longer depends on the ipcalc tool for IP address and netmask calculation. As a result, the kdump works as expected when you use SSH or NFS dump targets. (BZ#1931266) Certain networking kernel drivers now properly display their version The behavior for module versioning of many networking kernel drivers changed in RHEL 8.4. Consequently, those drivers did not display their version. Alternatively, after executing the ethtool -i command, the drivers displayed the kernel version instead of the driver version. This update fixes the bug by providing the kernel module strings. As a result, users can determine versions of the affected kernel drivers. (BZ#1944639) The hwloc commands now return correct data on single CPU Power9 and Power10 logical partitions With the hwloc utility of version 2.2.0, any single-node Non-Uniform Memory Access (NUMA) system that ran a Power9 or Power10 CPU was considered to be "disallowed". Consequently, all hwloc commands did not work, because NODE0 (socket 0, CPU 0) was offline and the hwloc source code expected NODE0 to be online.
The following error message was displayed: With this update, hwloc has been fixed so that its source code checks to see if NODE0 is online before querying it. If NODE0 is not online, the code proceeds to the online NODE. As a result, the hwloc command does not return any errors in the described scenario. ( BZ#1917560 ) 7.6. File systems and storage Records obtained from getaddrinfo() now include a default TTL Previously, API did not convey time-to-live (TTL) information, which left TTL unset for address records obtained through getaddrinfo() , even if they were obtained from the DNS. As a consequence, the key.dns_resolver upcall program did not set an expiry time on dns_resolver records, unless the records included a component obtained directly from the DNS, such as an SRV or AFSDB record. With this update, records from getaddrinfo() now include a default TTL of 10 minutes to prevent an unset expiry time. (BZ#1661674) 7.7. High availability and clusters The ocf:heartbeat:pgsql resource agent and some third-party agents no longer fail to stop during a shutdown process In the RHEL 8.4 GA release, Pacemaker's crm_mon command-line tool was modified to display a "shutting down" message rather than the usual cluster information when Pacemaker starts to shut down. As a consequence, shutdown progress, such as the stopping of resources, could not be monitored. In this situation, resource agents that parse crm_mon output in their stop operation (such as the ocf:heartbeat:pgsql agent distributed with the resource-agents package, or some custom or third-party agents) could fail to stop, leading to cluster problems. This bug has been fixed, and the described problem no longer occurs. ( BZ#1948620 ) 7.8. Dynamic programming languages, web and database servers pyodbc works again with MariaDB 10.3 The pyodbc module did not work with the MariaDB 10.3 server included in the RHEL 8.4 release. The root cause in the mariadb-connector-odbc package has been fixed, and pyodbc now works with MariaDB 10.3 as expected. Note that earlier versions of the MariaDB 10.3 server and the MariaDB 10.5 server were not affected by this problem. ( BZ#1944692 ) 7.9. Compilers and development tools GCC Toolset 11: GCC 11 now defaults to DWARF 4 While upstream GCC 11 defaults to using the DWARF 5 debugging format, GCC of GCC Toolset 11 defaults to DWARF 4 to stay compatible with RHEL 8 components, for example, rpmbuild . (BZ#1974402) The tunables framework now parses GLIBC_TUNABLES correctly Previously, the tunables framework did not parse the GLIBC_TUNABLES environment variable correctly for non-setuid children of setuid programs. As a consequence, in some cases all tunables remained in non-setuid children of setuid programs. With this update, tunables in the GLIBC_TUNABLES environment variable are correctly parsed. As a result, only a restricted subset of identified tunables are now inherited by non-setuid children of setuid programs. (BZ#1934155) The semctl system call wrapper in glibc now treats SEM_STAT_ANY like SEM_STAT Previously, the semctl system call wrapper in glibc did not treat the kernel argument SEM_STAT_ANY like SEM_STAT . As a result, glibc did not pass the address of the result object struct semid_ds to the kernel, so that the kernel failed to update it. With this update, glibc now treats SEM_STAT_ANY like SEM_STAT , and as a result, applications can obtain struct semid_ds data using SEM_STAT_ANY . 
( BZ#1912670 ) Glibc now includes definitions for IPPROTO_ETHERNET , IPPROTO_MPTCP , and INADDR_ALLSNOOPERS_GROUP Previously, the Glibc system library headers ( /usr/include/netinet/in.h ) did not include definitions of IPPROTO_ETHERNET , IPPROTO_MPTCP , and INADDR_ALLSNOOPERS_GROUP . As a consequence, applications needing these definitions failed to compile. With this update, the system library headers now include the new network constant definitions for IPPROTO_ETHERNET , IPPROTO_MPTCP , and INADDR_ALLSNOOPERS_GROUP resulting in correctly compiling applications. ( BZ#1930302 ) gcc rebased to version 8.5 The GNU Compiler Collection (GCC) has been rebased to upstream version 8.5, which provides a number of bug fixes over the version. ( BZ#1946758 ) Incorrect file decryption using OpenSSL aes-cbc mode The OpenSSL EVP aes-cbc mode did not decrypt files correctly, because it expects to handle padding while the Go CryptoBlocks interface expects full blocks. This issue has been fixed by disabling padding before executing EVP operations in OpenSSL. ( BZ#1979100 ) 7.10. Identity Management FreeRADIUS no longer incorrectly generating default certificates when the bootstrap script is run A bootstrap script runs each time FreeRADIUS is started. Previously, this script generated new testing certificates in the /etc/raddb/certs directory and as a result, the FreeRADIUS server sometimes failed to start as these testing certificates were invalid. For example, the certificates might have expired. With this update, the bootstrap script checks the /etc/raddb/certs directory and if it contains any testing or customer certificates, the script is not run and the FreeRADIUS server should start correctly. Note that the testing certificates are only for testing purposes during the configuration of FreeRADIUS and should not be used in a real environment. The bootstrap script should be deleted once the users' certificates are used. ( BZ#1954521 ) FreeRADIUS no longer fails to create a core dump file Previously, FreeRADIUS did not create a core dump file when allow_core_dumps was set to yes . Consequently, no core dump files were created if any process failed. With this update, when you set allow_core_dumps to yes , FreeRADIUS now creates a core dump file if any process fails. ( BZ#1977572 ) SSSD correctly evaluates the default setting for the Kerberos keytab name in /etc/krb5.conf Previously, if you defined a non-standard location for your krb5.keytab file, SSSD did not use this location and used the default /etc/krb5.keytab location instead. As a result, when you tried to log into the system, the login failed as the /etc/krb5.keytab contained no entries. With this update, SSSD now evaluates the default_keytab_name variable in the /etc/krb5.conf and uses the location specified by this variable. SSSD only uses the default /etc/krb5.keytab location if the default_keytab_name variable is not set. (BZ#1737489) Running sudo commands no longer exports the KRB5CCNAME environment variable Previously, after running sudo commands, the environment variable KRB5CCNAME pointed to the Kerberos credential cache of the original user, which might not be accessible to the target user. As a result Kerberos related operations might fail as this cache is not accessible. With this update, running sudo commands no longer sets the KRB5CCNAME environment variable and the target user can use their default Kerberos credential cache. 
(BZ#1879869) Kerberos now only requests permitted encryption types Previously, RHEL did not apply permitted encryption types specified in the permitted_enctypes parameter in the /etc/krb5.conf file if the default_tgs_enctypes or default_tkt_enctypes parameters were not set. Consequently, Kerberos clients were able to request deprecated cipher suites, such as RC4, which might cause other processes to fail. With this update, RHEL applies the encryption types set in permitted_enctypes to the default encryption types as well, and processes can only request permitted encryption types. If you use Red Hat Identity Management (IdM) and want to set up a trust with Active Directory (AD), note that the RC4 cipher suite, which is deprecated in RHEL 8, is the default encryption type for users, services, and trusts between AD domains in an AD forest. You can use one of the following options: (Preferred): Enable strong AES encryption types in AD. For details, see the AD DS: Security: Kerberos "Unsupported etype" error when accessing a resource in a trusted domain Microsoft article. Use the update-crypto-policies --set DEFAULT:AD-SUPPORT command on RHEL hosts that should be members of an AD domain to enable the deprecated RC4 encryption type for backwards compatibility with AD. ( BZ#2005277 ) The replication session update speed is now enhanced Previously, when the changelog contained larger updates, the replication session started from the beginning of the changelog. This slowed the session down. The using of a small buffer to store the update from a changelog during the replication session caused this. With this update, the replication session checks that the buffer is large enough to store the update at the starting point. The replication session starts sending updates immediately. ( BZ#1898541 ) The database indexes created by plug-ins are now enabled Previously, when a server plug-in created its own database indexes, you had to enable those indexes manually. With this update, the indexes are enabled immediately after creation by default. ( BZ#1951020 ) 7.11. Red Hat Enterprise Linux system roles Role tasks no longer change when running the same output Previously, several of the role tasks would report as CHANGED when running the same input once again, even if there were no changes. Consequently, the role was not acting idempotent. To fix the issue, perform the following actions: Check if configuration variables change before applying them. You can use the option --check for this verification. Do not add a Last Modified: USDdate header to the configuration file. As a result, the role tasks are idempotent. ( BZ#1960375 ) relayhost parameter no longer incorrectly defined in the Postfix documentation Previously, the relayhost parameter of the Postfix RHEL system role was defined as relay_host in the doc /usr/share/doc/rhel-system-roles/postfix/README.md documentation provided by rhel-system-roles . This update fixes the issue and the relayhost parameter is now correctly defined in the Postfix documentation. ( BZ#1866544 ) Postfix RHEL system role README.md no longer missing variables under the "Role Variables" section Previously, the Postfix RHEL system role variables, such as postfix_check , postfix_backup , postfix_backup_multiple were not available under the "Role Variables" section. Consequently, users were not able to consult the Postfix role documentation. This update adds role variable documentation to the Postfix README section. 
The role variables are documented and available for users in the doc/usr/share/doc/rhel-system-roles/postfix/README.md documentation provided by rhel-system-roles . ( BZ#1961858 ) Postfix role README no longer uses plain role name Previously, the examples provided in the /usr/share/ansible/roles/rhel-system-roles.postfix/README.md used the plain version of the role name, postfix , instead of using rhel-system-roles.postfix . Consequently, users would consult the documentation and incorrectly use the plain role name instead of Full Qualified Role Name (FQRN). This update fixes the issue, and the documentation contains examples with the FQRN, rhel-system-roles.postfix , enabling users to correctly write playbooks. ( BZ#1958963 ) The output log of timesync only reports harmful errors Previously, the timesync RHEL system role used the ignore_errors directive with separate checking for task failure in many tasks. Consequently, the output log of the successful role run was full of harmless errors. The users were safe to ignore those errors, but still they were distressing to see. In this update, the relevant tasks have been rewritten not to use ignore_errors . As a result, the output log is now clean, and only role-stopping errors are reported. ( BZ#1938014 ) The requirements.txt file no longer missing in the Ansible collection Previously, the requirements.txt file, responsible for specifying the python dependencies, was missing in the Ansible collection. This fix adds the missing file with the correct dependencies at the /usr/share/ansible/collections/ansible_collections/redhat/rhel_system_roles/requirements.tx path. ( BZ#1954747 ) Traceback no longer observed when set type: partition for storage_pools Previously, when setting the variable type as partition for storage_pools in a playbook, running this playbook would fail and indicate traceback . This update fixes the issue and the Traceback error no longer appears. ( BZ#1854187 ) SElinux role no longer perform unnecessary reloads Previously, the SElinux role would not check if changes were actually applied before reloading the SElinux policy. As a consequence, the SElinux policy was being reloaded unnecessarily, which had an impact on the system resources. With this fix, the SElinux role now uses ansible handlers and conditionals to ensure that the policy is only reloaded if there is a change. As a result, the SElinux role runs much faster. ( BZ#1757869 ) sshd role no longer fails to start with the installed sshd_config file on the RHEL6 host. Previously, when a managed node was running RHEL6, the version of OpenSSH did not support "Match all" in the Match criteria, which was added by the install task. As a consequence, sshd failed to start with the installed sshd_config file on the RHEL6 host. This update fixes the issue by replacing "Match all" with "Match address *" for the RHEL6 sshd_config configuration file, as the criteria is supported in the version of OpenSSH. As a result, the sshd RHEL system role successfully starts with the installed sshd_config file on the RHEL6 host. ( BZ#1990947 ) The SSHD role name in README.md examples no longer incorrect Previously, in the sshd README.md file, the examples reference calling the role with the willshersystems.sshd name. This update fixes the issue, and now the example references correctly refers to the role as "rhel_system_roles.sshd". 
( BZ#1952090 ) The key/certs source files are no longer copied when tls is false Previously, in the logging RHEL system role elasticsearch output, if the key/certs source files path on the control host were configured in the playbook, they would be copied to the managed hosts, even if tls was set to false . Consequently, if the key/cert file paths were configured and tls was set to false , the command would fail, because the copy source files did not exist. This update fixes the issue, and copying the key/certs is executed only when the tls param is set to true . ( BZ#1994580 ) Task to enable logging for targeted hosts in the metric role now works Previously, a bug in the metric RHEL system role prevented referring to targeted hosts in the enabling the performance metric logging task. Consequently, the control file for performance metric logging was not generated. This update fixes the issue, and now the targeted hosts are correctly referred to. As a result, the control file is successfully created, enabling the performance metric logging execution. ( BZ#1967335 ) sshd_hostkey_group and sshd_hostkey_mode variables now configurable in the playbook Previously, the sshd_hostkey_group and sshd_hostkey_mode variables were unintentionally defined in both defaults and vars files. Consequently, users were unable to configure those variables in the playbook. With this fix, the sshd_hostkey_group is renamed to __sshd_hostkey_group and sshd_hostkey_mode to __sshd_hostkey_mode for defining the constant value in the vars files. In the default file, sshd_hostkey_group is set to __sshd_hostkey_group and sshd_hostkey_mode to __sshd_hostkey_mode . As a result, users can now configure the sshd_hostkey_group and sshd_hostkey_mode variables in the playbook. ( BZ#1966711 ) RHEL system roles internal links in README.md are no longer broken Previously, the internal links available in the README.md files were broken. Consequently, if a user clicked a specific section documentation link, it would not redirect users to the specific README.md section. This update fixes the issue and now the internal links point users to the correct section. ( BZ#1962976 ) 7.12. RHEL in cloud environments nm-cloud-setup utility now sets the correct default route on Microsoft Azure Previously, on Microsoft Azure, the nm-cloud-setup utility failed to detect the correct gateway of the cloud environment. As a consequence, the utility set an incorrect default route, and connectivity failed. This update fixes the problem. As a result, nm-cloud-setup utility now sets the correct default route on Microsoft Azure. ( BZ#1912236 ) SSH keys are now generated correctly on EC2 instances created from a backup AMI Previously, when creating a new Amazon EC2 instance of RHEL 8 from a backup Amazon Machine Image (AMI), cloud-init deleted existing SSH keys on the VM but did not create new ones. Consequently, the VM in some cases could not connect to the host. This problem has been fixed for newly created RHEL 8.5 VMs. For VMs that were upgraded from RHEL 8.4 or earlier, you must work around the issue manually. To do so, edit the cloud.cfg file and changing the ssh_genkeytypes: ~ line to ssh_genkeytypes: ['rsa', 'ecdsa', 'ed25519'] . This makes it possible for SSH keys to be deleted and generated correctly when provisioning a RHEL 8 VM in the described circumstances. 
( BZ#1957532 ) RHEL 8 running on AWS ARM64 instances can now reach the specified network speed When using RHEL 8 as a guest operating system in a virtual machine (VM) that runs on an Amazon Web Services (AWS) ARM64 instance, the VM previously had lower than expected network performance when the iommu.strict=1 kernel parameter was used or when no iommu.strict parameter was defined. This problem no longer occurs in RHEL 8.5 Amazon Machine Images (AMIs) provided by Red Hat. In other types of images, you can work around the issue by changing the parameter to iommu.strict=0 . This includes: RHEL 8.4 and earlier images RHEL 8.5 images upgraded from an earlier version using yum update RHEL 8.5 images not provided by Red Hat (BZ#1836058) Core dumping RHEL 8 virtual machines to a remote machine on Azure now works more reliably Previously, using the kdump utility to save the core dump file of a RHEL 8 virtual machine (VM) on a Microsoft Azure hypervisor to a remote machine did not work correctly when the VM was using a NIC with enabled accelerated networking. As a consequence, the dump file was saved after approximately 200 seconds, instead of immediately. In addition, the following error message was logged on the console before the dump file is saved. With this update, the underlying code has been fixed, and in the described circumstances, dump files are now saved immediately. (BZ#1854037) Hibernating RHEL 8 guests now works correctly when FIPS mode is enabled Previously, it was not possible to hibernate a virtual machine (VM) that was using RHEL 8 as its guest operating system if the VM was using FIPS mode. The underlying code has been fixed and the affected VMs can now hibernate correctly. (BZ#1934033, BZ#1944636) 7.13. Containers UBI 9-Beta containers can run on RHEL 7 and 8 hosts Previously, the UBI 9-Beta container images had an incorrect seccomp profile set in the containers-common package. As a consequence, containers were not able to deal with certain system calls causing a failure. With this update, the problem has been fixed. ( BZ#2019901 ) | [
"warning: '__no_sanitize_address' macro redefined [-Wmacro-redefined]",
"ipcalc: command not found",
"Topology does not contain any NUMA node, aborting!",
"device (eth0): linklocal6: DAD failed for an EUI-64 address"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.5_release_notes/bug_fixes |
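For the cloud-init SSH key fix noted above (BZ#1957532), the manual workaround on VMs upgraded from RHEL 8.4 or earlier amounts to a one-line change in the cloud-init configuration. The sketch below assumes the standard /etc/cloud/cloud.cfg location; the sed expression and the backup file name are illustrative, and the authoritative change is simply replacing the ssh_genkeytypes: ~ line as described in the note.
# Back up the current configuration, then replace the unset key-type list
cp /etc/cloud/cloud.cfg /etc/cloud/cloud.cfg.bak
sed -i "s/^ssh_genkeytypes:.*/ssh_genkeytypes: ['rsa', 'ecdsa', 'ed25519']/" /etc/cloud/cloud.cfg
# Verify the resulting line
grep ssh_genkeytypes /etc/cloud/cloud.cfg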
Installing Satellite Server in a connected network environment | Installing Satellite Server in a connected network environment Red Hat Satellite 6.15 Install and configure Satellite Server in a network with Internet access Red Hat Satellite Documentation Team [email protected] | [
"nfs.example.com:/nfsshare /var/lib/pulp nfs context=\"system_u:object_r:var_lib_t:s0\" 1 2",
"restorecon -R /var/lib/pulp",
"firewall-cmd --add-port=\"8000/tcp\" --add-port=\"9090/tcp\"",
"firewall-cmd --add-service=dns --add-service=dhcp --add-service=tftp --add-service=http --add-service=https --add-service=puppetmaster",
"firewall-cmd --runtime-to-permanent",
"firewall-cmd --list-all",
"ping -c1 localhost ping -c1 `hostname -f` # my_system.domain.com",
"ping -c1 localhost PING localhost (127.0.0.1) 56(84) bytes of data. 64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.043 ms --- localhost ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms ping -c1 `hostname -f` PING hostname.gateway (XX.XX.XX.XX) 56(84) bytes of data. 64 bytes from hostname.gateway (XX.XX.XX.XX): icmp_seq=1 ttl=64 time=0.019 ms --- localhost.gateway ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms",
"hostnamectl set-hostname name",
"cp /etc/foreman-installer/custom-hiera.yaml /etc/foreman-installer/custom-hiera.original",
"satellite-installer --tuning medium",
"/Stage[main]/Dhcp/File[/etc/dhcp/dhcpd.conf]: Filebucketed /etc/dhcp/dhcpd.conf to puppet with sum 622d9820b8e764ab124367c68f5fa3a1",
"puppet filebucket -l restore /etc/dhcp/dhcpd.conf 622d9820b8e764ab124367c68f5fa3a1",
"an http proxy server to use (enter server FQDN) proxy_hostname = myproxy.example.com port for http proxy server proxy_port = 8080 user name for authenticating to an http proxy, if needed proxy_user = password for basic http proxy auth, if needed proxy_password =",
"subscription-manager register",
"subscription-manager register Username: user_name Password: The system has been registered with ID: 541084ff2-44cab-4eb1-9fa1-7683431bcf9a",
"subscription-manager repos --disable \"*\"",
"subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=rhel-8-for-x86_64-appstream-rpms --enable=satellite-6.15-for-rhel-8-x86_64-rpms --enable=satellite-maintenance-6.15-for-rhel-8-x86_64-rpms",
"dnf module enable satellite:el8",
"dnf install fapolicyd",
"satellite-maintain packages install fapolicyd",
"systemctl enable --now fapolicyd",
"systemctl status fapolicyd",
"dnf upgrade",
"dnf install satellite",
"dnf install chrony",
"systemctl enable --now chronyd",
"satellite-installer --scenario satellite --foreman-initial-organization \" My_Organization \" --foreman-initial-location \" My_Location \" --foreman-initial-admin-username admin_user_name --foreman-initial-admin-password admin_password",
"scp ~/ manifest_file .zip root@ satellite.example.com :~/.",
"hammer subscription upload --file ~/ manifest_file .zip --organization \" My_Organization \"",
"satellite-maintain packages install insights-client",
"satellite-installer --register-with-insights",
"insights-client --unregister",
"satellite-installer --register-with-insights",
"hammer repository synchronize --name \"Red Hat Satellite Client 6 for RHEL 9 x86_64 RPMs\" --organization \" My_Organization \" --product \"Red Hat Enterprise Linux for x86_64\"",
"hammer repository synchronize --name \"Red Hat Satellite Client 6 for RHEL 8 x86_64 RPMs\" --organization \" My_Organization \" --product \"Red Hat Enterprise Linux for x86_64\"",
"hammer repository synchronize --async --name \"Red Hat Satellite Client 6 for RHEL 7 Server RPMs x86_64\" --organization \" My_Organization \" --product \"Red Hat Enterprise Linux Server\"",
"hammer repository synchronize --async --name \"Red Hat Satellite Client 6 for RHEL 6 Server - ELS RPMs x86_64\" --organization \" My_Organization \" --product \"Red Hat Enterprise Linux Server - Extended Lifecycle Support\"",
"hammer repository-set enable --basearch=\"x86_64\" --name \"Red Hat Satellite Client 6 for RHEL 9 x86_64 (RPMs)\" --organization \" My_Organization \" --product \"Red Hat Enterprise Linux for x86_64\"",
"hammer repository-set enable --basearch=\"x86_64\" --name \"Red Hat Satellite Client 6 for RHEL 8 x86_64 (RPMs)\" --organization \" My_Organization \" --product \"Red Hat Enterprise Linux for x86_64\"",
"hammer repository-set enable --basearch=\"x86_64\" --name \"Red Hat Satellite Client 6 (for RHEL 7 Server) (RPMs)\" --organization \" My_Organization \" --product \"Red Hat Enterprise Linux Server\"",
"hammer repository-set enable --basearch=\"x86_64\" --name \"Red Hat Satellite Client 6 (for RHEL 6 Server - ELS) (RPMs)\" --organization \" My_Organization \" --product \"Red Hat Enterprise Linux Server - Extended Lifecycle Support\"",
"satellite-installer --foreman-proxy-plugin-remote-execution-script-mode=pull-mqtt",
"firewall-cmd --add-service=mqtt",
"firewall-cmd --runtime-to-permanent",
"satellite-maintain packages update grub2-efi",
"unset http_proxy unset https_proxy unset no_proxy",
"hammer http-proxy create --name= myproxy --url http:// myproxy.example.com :8080 --username= proxy_username --password= proxy_password",
"hammer settings set --name=content_default_http_proxy --value= myproxy",
"semanage port -l | grep http_cache http_cache_port_t tcp 8080, 8118, 8123, 10001-10010 [output truncated]",
"semanage port -a -t http_cache_port_t -p tcp 8088",
"hammer settings set --name=http_proxy --value= Proxy_URL",
"hammer settings set --name=http_proxy_except_list --value=[ hostname1.hostname2... ]",
"hammer settings set --name=content_default_http_proxy --value=\"\"",
"satellite-installer --foreman-proxy-bmc \"true\" --foreman-proxy-bmc-default-provider \"freeipmi\"",
"satellite-installer --foreman-proxy-dns true --foreman-proxy-dns-managed true --foreman-proxy-dns-zone example.com --foreman-proxy-dns-reverse 2.0.192.in-addr.arpa --foreman-proxy-dhcp true --foreman-proxy-dhcp-managed true --foreman-proxy-dhcp-range \" 192.0.2.100 192.0.2.150 \" --foreman-proxy-dhcp-gateway 192.0.2.1 --foreman-proxy-dhcp-nameservers 192.0.2.2 --foreman-proxy-tftp true --foreman-proxy-tftp-managed true --foreman-proxy-tftp-servername 192.0.2.3",
"satellite-installer --foreman-proxy-dhcp false --foreman-proxy-dns false --foreman-proxy-tftp false",
"Option 66: IP address of Satellite or Capsule Option 67: /pxelinux.0",
"cp mailca.crt /etc/pki/ca-trust/source/anchors/ update-ca-trust enable update-ca-trust",
"satellite-installer --certs-cname alternate_fqdn --certs-update-server",
"./bootstrap.py --server alternate_fqdn.example.com",
"Server hostname: hostname = alternate_fqdn.example.com content omitted Content base URL: baseurl=https:// alternate_fqdn.example.com /pulp/content/",
"mkdir /root/satellite_cert",
"openssl genrsa -out /root/satellite_cert/satellite_cert_key.pem 4096",
"[ req ] req_extensions = v3_req distinguished_name = req_distinguished_name prompt = no [ req_distinguished_name ] commonName = satellite.example.com [ v3_req ] basicConstraints = CA:FALSE keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment extendedKeyUsage = serverAuth, clientAuth, codeSigning, emailProtection subjectAltName = @alt_names [ alt_names ] DNS.1 = satellite.example.com",
"[req_distinguished_name] CN = satellite.example.com countryName = My_Country_Name 1 stateOrProvinceName = My_State_Or_Province_Name 2 localityName = My_Locality_Name 3 organizationName = My_Organization_Or_Company_Name organizationalUnitName = My_Organizational_Unit_Name 4",
"openssl req -new -key /root/satellite_cert/satellite_cert_key.pem \\ 1 -config /root/satellite_cert/openssl.cnf \\ 2 -out /root/satellite_cert/satellite_cert_csr.pem 3",
"katello-certs-check -c /root/satellite_cert/satellite_cert.pem \\ 1 -k /root/satellite_cert/satellite_cert_key.pem \\ 2 -b /root/satellite_cert/ca_cert_bundle.pem 3",
"Validation succeeded. To install the Red Hat Satellite Server with the custom certificates, run: satellite-installer --scenario satellite --certs-server-cert \" /root/satellite_cert/satellite_cert.pem \" --certs-server-key \" /root/satellite_cert/satellite_cert_key.pem \" --certs-server-ca-cert \" /root/satellite_cert/ca_cert_bundle.pem \" To update the certificates on a currently running Red Hat Satellite installation, run: satellite-installer --scenario satellite --certs-server-cert \" /root/satellite_cert/satellite_cert.pem \" --certs-server-key \" /root/satellite_cert/satellite_cert_key.pem \" --certs-server-ca-cert \" /root/satellite_cert/ca_cert_bundle.pem \" --certs-update-server --certs-update-server-ca",
"dnf install http:// satellite.example.com /pub/katello-ca-consumer-latest.noarch.rpm",
"subscription-manager repos --disable '*' subscription-manager repos --enable=satellite-6.15-for-rhel-8-x86_64-rpms --enable=satellite-maintenance-6.15-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-baseos-rpms --enable=rhel-8-for-x86_64-appstream-rpms",
"dnf module enable satellite:el8",
"dnf install postgresql-server postgresql-evr postgresql-contrib",
"postgresql-setup initdb",
"vi /var/lib/pgsql/data/postgresql.conf",
"listen_addresses = '*'",
"vi /var/lib/pgsql/data/pg_hba.conf",
"host all all Satellite_ip /32 md5",
"systemctl enable --now postgresql",
"firewall-cmd --add-service=postgresql",
"firewall-cmd --runtime-to-permanent",
"su - postgres -c psql",
"CREATE USER \"foreman\" WITH PASSWORD ' Foreman_Password '; CREATE USER \"candlepin\" WITH PASSWORD ' Candlepin_Password '; CREATE USER \"pulp\" WITH PASSWORD ' Pulpcore_Password '; CREATE DATABASE foreman OWNER foreman; CREATE DATABASE candlepin OWNER candlepin; CREATE DATABASE pulpcore OWNER pulp;",
"postgres=# \\c pulpcore You are now connected to database \"pulpcore\" as user \"postgres\".",
"pulpcore=# CREATE EXTENSION IF NOT EXISTS \"hstore\"; CREATE EXTENSION",
"\\q",
"PGPASSWORD=' Foreman_Password ' psql -h postgres.example.com -p 5432 -U foreman -d foreman -c \"SELECT 1 as ping\" PGPASSWORD=' Candlepin_Password ' psql -h postgres.example.com -p 5432 -U candlepin -d candlepin -c \"SELECT 1 as ping\" PGPASSWORD=' Pulpcore_Password ' psql -h postgres.example.com -p 5432 -U pulp -d pulpcore -c \"SELECT 1 as ping\"",
"satellite-installer --foreman-db-database foreman --foreman-db-host postgres.example.com --foreman-db-manage false --foreman-db-password Foreman_Password --foreman-proxy-content-pulpcore-manage-postgresql false --foreman-proxy-content-pulpcore-postgresql-db-name pulpcore --foreman-proxy-content-pulpcore-postgresql-host postgres.example.com --foreman-proxy-content-pulpcore-postgresql-password Pulpcore_Password --foreman-proxy-content-pulpcore-postgresql-user pulp --katello-candlepin-db-host postgres.example.com --katello-candlepin-db-name candlepin --katello-candlepin-db-password Candlepin_Password --katello-candlepin-manage-db false",
"--foreman-db-root-cert <path_to_CA> --foreman-db-sslmode verify-full --foreman-proxy-content-pulpcore-postgresql-ssl true --foreman-proxy-content-pulpcore-postgresql-ssl-root-ca <path_to_CA> --katello-candlepin-db-ssl true --katello-candlepin-db-ssl-ca <path_to_CA> --katello-candlepin-db-ssl-verify true",
"cat /etc/passwd | grep 'puppet\\|apache\\|foreman\\|foreman-proxy' cat /etc/group | grep 'puppet\\|apache\\|foreman\\|foreman-proxy'",
"install /tmp/ example.crt /etc/pki/tls/certs/",
"ln -s example.crt /etc/pki/tls/certs/USD(openssl x509 -noout -hash -in /etc/pki/tls/certs/ example.crt ).0",
"systemctl restart httpd",
"setsebool -P nis_enabled on",
"uid=USDlogin,cn=users,cn=accounts,dc=example,dc=com",
"DC=Domain,DC=Example | |----- CN=Users | |----- CN=Group1 |----- CN=Group2 |----- CN=User1 |----- CN=User2 |----- CN=User3",
"kinit admin",
"klist",
"ipa host-add --random hostname",
"ipa service-add HTTP/ hostname",
"satellite-maintain packages install ipa-client",
"ipa-client-install --password OTP",
"satellite-installer --foreman-ipa-authentication=true",
"satellite-installer --foreman-ipa-authentication-api=true --foreman-ipa-authentication=true",
"satellite-maintain service restart",
"kinit admin",
"klist",
"ipa hbacsvc-add satellite-prod ipa hbacrule-add allow_satellite_prod ipa hbacrule-add-service allow_satellite_prod --hbacsvcs=satellite-prod",
"ipa hbacrule-add-user allow_satellite_prod --user= username ipa hbacrule-add-host allow_satellite_prod --hosts= satellite.example.com",
"ipa hbacrule-find satellite-prod ipa hbactest --user= username --host= satellite.example.com --service=satellite-prod",
"satellite-installer --foreman-pam-service=satellite-prod",
"satellite-maintain packages install adcli krb5-workstation oddjob-mkhomedir oddjob realmd samba-winbind-clients samba-winbind samba-common-tools samba-winbind-krb5-locator sssd",
"realm join AD.EXAMPLE.COM --membership-software=samba --client-software=sssd",
"mkdir /etc/ipa/",
"[global] realm = AD.EXAMPLE.COM",
"[global] workgroup = AD.EXAMPLE realm = AD.EXAMPLE.COM kerberos method = system keytab security = ads",
"KRB5_KTNAME=FILE:/etc/httpd/conf/http.keytab net ads keytab add HTTP -U Administrator -s /etc/samba/smb.conf",
"[domain/ ad.example.com ] ad_gpo_access_control = enforcing ad_gpo_map_service = +foreman",
"systemctl restart sssd",
"satellite-installer --foreman-ipa-authentication=true",
"kinit ad_user @ AD.EXAMPLE.COM",
"curl -k -u : --negotiate https://satellite.example.com/users/extlogin <html><body>You are being <a href=\"satellite.example.com/hosts\">redirected</a>.</body></html>",
"[nss] user_attributes=+mail, +sn, +givenname [domain/EXAMPLE.com] krb5_store_password_if_offline = True ldap_user_extra_attrs=email:mail, lastname:sn, firstname:givenname [ifp] allowed_uids = ipaapi, root user_attributes=+email, +firstname, +lastname",
"dbus-send --print-reply --system --dest=org.freedesktop.sssd.infopipe /org/freedesktop/sssd/infopipe org.freedesktop.sssd.infopipe.GetUserAttr string:ad-user@ad-domain array:string:email,firstname,lastname",
"id username",
"foreman-rake ldap:refresh_usergroups",
":foreman: :use_sessions: true",
":foreman: :default_auth_type: 'Negotiate_Auth' :use_sessions: true",
"satellite-maintain packages install ipa-client",
"ipa-client-install",
"foreman-prepare-realm admin realm-capsule",
"scp /root/freeipa.keytab root@ capsule.example.com :/etc/foreman-proxy/freeipa.keytab",
"mv /root/freeipa.keytab /etc/foreman-proxy chown foreman-proxy:foreman-proxy /etc/foreman-proxy/freeipa.keytab",
"satellite-installer --foreman-proxy-realm true --foreman-proxy-realm-keytab /etc/foreman-proxy/freeipa.keytab --foreman-proxy-realm-principal [email protected] --foreman-proxy-realm-provider freeipa",
"cp /etc/ipa/ca.crt /etc/pki/ca-trust/source/anchors/ipa.crt update-ca-trust enable update-ca-trust",
"systemctl restart foreman-proxy",
"ipa hostgroup-add hostgroup_name --desc= hostgroup_description",
"ipa automember-add --type=hostgroup hostgroup_name automember_rule",
"ipa automember-add-condition --key=userclass --type=hostgroup --inclusive-regex= ^webserver hostgroup_name ---------------------------------- Added condition(s) to \" hostgroup_name \" ---------------------------------- Automember Rule: automember_rule Inclusive Regex: userclass= ^webserver ---------------------------- Number of conditions added 1 ----------------------------",
"satellite-maintain packages install mod_auth_openidc keycloak-httpd-client-install python3-lxml",
"keycloak-httpd-client-install --app-name foreman-openidc --keycloak-server-url \" https://RHSSO.example.com \" --keycloak-admin-username \" admin \" --keycloak-realm \" Satellite_Realm \" --keycloak-admin-realm master --keycloak-auth-role root-admin -t openidc -l /users/extlogin --force",
"satellite-installer --foreman-keycloak true --foreman-keycloak-app-name \"foreman-openidc\" --foreman-keycloak-realm \" Satellite_Realm \"",
"keycloak-httpd-client-install --app-name hammer-openidc --keycloak-server-url \" https://RHSSO.example.com \" --keycloak-admin-username \" admin \" --keycloak-realm \" Satellite_Realm \" --keycloak-admin-realm master --keycloak-auth-role root-admin -t openidc -l /users/extlogin --force",
"systemctl restart httpd",
"https://satellite.example.com/users/extlogin/redirect_uri https://satellite.example.com/users/extlogin",
"https://satellite.example.com/users/extlogin/redirect_uri urn:ietf:wg:oauth:2.0:oob",
"hammer settings set --name authorize_login_delegation --value true",
"hammer settings set --name login_delegation_logout_url --value https://satellite.example.com/users/extlogout",
"hammer settings set --name oidc_algorithm --value 'RS256'",
"hammer settings set --name oidc_audience --value \"[' satellite.example.com -hammer-openidc']\"",
"hammer settings set --name oidc_audience --value \"[' satellite.example.com -foreman-openidc', ' satellite.example.com -hammer-openidc']\"",
"hammer settings set --name oidc_issuer --value \" RHSSO.example.com /auth/realms/ RHSSO_Realm \"",
"hammer settings set --name oidc_jwks_url --value \" RHSSO.example.com /auth/realms/ RHSSO_Realm /protocol/openid-connect/certs\"",
"hammer auth-source external list",
"hammer auth-source external update --id Authentication Source ID --location-ids Location ID --organization-ids Organization ID",
"hammer auth login oauth --two-factor --oidc-token-endpoint 'https:// RHSSO.example.com /auth/realms/ssl-realm/protocol/openid-connect/token' --oidc-authorization-endpoint 'https:// RHSSO.example.com /auth' --oidc-client-id ' satellite.example.com -foreman-openidc' --oidc-redirect-uri urn:ietf:wg:oauth:2.0:oob",
"satellite-maintain packages install mod_auth_openidc keycloak-httpd-client-install python3-lxml",
"keycloak-httpd-client-install --app-name foreman-openidc --keycloak-server-url \" https://RHSSO.example.com \" --keycloak-admin-username \" admin \" --keycloak-realm \" Satellite_Realm \" --keycloak-admin-realm master --keycloak-auth-role root-admin -t openidc -l /users/extlogin --force",
"satellite-installer --foreman-keycloak true --foreman-keycloak-app-name \"foreman-openidc\" --foreman-keycloak-realm \" Satellite_Realm \"",
"keycloak-httpd-client-install --app-name hammer-openidc --keycloak-server-url \" https://RHSSO.example.com \" --keycloak-admin-username \" admin \" --keycloak-realm \" Satellite_Realm \" --keycloak-admin-realm master --keycloak-auth-role root-admin -t openidc -l /users/extlogin --force",
"systemctl restart httpd",
"https://satellite.example.com/users/extlogin/redirect_uri https://satellite.example.com/users/extlogin",
"https://satellite.example.com/users/extlogin/redirect_uri urn:ietf:wg:oauth:2.0:oob",
"hammer settings set --name authorize_login_delegation --value true",
"hammer settings set --name login_delegation_logout_url --value https://satellite.example.com/users/extlogout",
"hammer settings set --name oidc_algorithm --value 'RS256'",
"hammer settings set --name oidc_audience --value \"[' satellite.example.com -hammer-openidc']\"",
"hammer settings set --name oidc_audience --value \"[' satellite.example.com -foreman-openidc', ' satellite.example.com -hammer-openidc']\"",
"hammer settings set --name oidc_issuer --value \" RHSSO.example.com /auth/realms/ RHSSO_Realm \"",
"hammer settings set --name oidc_jwks_url --value \" RHSSO.example.com /auth/realms/ RHSSO_Realm /protocol/openid-connect/certs\"",
"hammer auth-source external list",
"hammer auth-source external update --id Authentication Source ID --location-ids Location ID --organization-ids Organization ID",
"hammer auth login oauth --two-factor --oidc-token-endpoint 'https:// RHSSO.example.com /auth/realms/ssl-realm/protocol/openid-connect/token' --oidc-authorization-endpoint 'https:// RHSSO.example.com /auth' --oidc-client-id ' satellite.example.com -foreman-openidc' --oidc-redirect-uri urn:ietf:wg:oauth:2.0:oob",
"satellite-installer --reset-foreman-keycloak",
"scp root@ dns.example.com :/etc/rndc.key /etc/foreman-proxy/rndc.key",
"restorecon -v /etc/foreman-proxy/rndc.key chown -v root:foreman-proxy /etc/foreman-proxy/rndc.key chmod -v 640 /etc/foreman-proxy/rndc.key",
"echo -e \"server DNS_IP_Address \\n update add aaa.example.com 3600 IN A Host_IP_Address \\n send\\n\" | nsupdate -k /etc/foreman-proxy/rndc.key nslookup aaa.example.com DNS_IP_Address echo -e \"server DNS_IP_Address \\n update delete aaa.example.com 3600 IN A Host_IP_Address \\n send\\n\" | nsupdate -k /etc/foreman-proxy/rndc.key",
"satellite-installer --foreman-proxy-dns=true --foreman-proxy-dns-managed=false --foreman-proxy-dns-provider=nsupdate --foreman-proxy-dns-server=\" DNS_IP_Address \" --foreman-proxy-keyfile=/etc/foreman-proxy/rndc.key",
"dnf install dhcp-server bind-utils",
"tsig-keygen -a hmac-md5 omapi_key",
"cat /etc/dhcp/dhcpd.conf default-lease-time 604800; max-lease-time 2592000; log-facility local7; subnet 192.168.38.0 netmask 255.255.255.0 { range 192.168.38.10 192.168.38.100 ; option routers 192.168.38.1 ; option subnet-mask 255.255.255.0 ; option domain-search \" virtual.lan \"; option domain-name \" virtual.lan \"; option domain-name-servers 8.8.8.8 ; } omapi-port 7911; key omapi_key { algorithm hmac-md5; secret \" My_Secret \"; }; omapi-key omapi_key;",
"firewall-cmd --add-service dhcp",
"firewall-cmd --runtime-to-permanent",
"id -u foreman 993 id -g foreman 990",
"groupadd -g 990 foreman useradd -u 993 -g 990 -s /sbin/nologin foreman",
"chmod o+rx /etc/dhcp/ chmod o+r /etc/dhcp/dhcpd.conf chattr +i /etc/dhcp/ /etc/dhcp/dhcpd.conf",
"systemctl enable --now dhcpd",
"dnf install nfs-utils systemctl enable --now nfs-server",
"mkdir -p /exports/var/lib/dhcpd /exports/etc/dhcp",
"/var/lib/dhcpd /exports/var/lib/dhcpd none bind,auto 0 0 /etc/dhcp /exports/etc/dhcp none bind,auto 0 0",
"mount -a",
"/exports 192.168.38.1 (rw,async,no_root_squash,fsid=0,no_subtree_check) /exports/etc/dhcp 192.168.38.1 (ro,async,no_root_squash,no_subtree_check,nohide) /exports/var/lib/dhcpd 192.168.38.1 (ro,async,no_root_squash,no_subtree_check,nohide)",
"exportfs -rva",
"firewall-cmd --add-port=7911/tcp",
"firewall-cmd --add-service mountd --add-service nfs --add-service rpc-bind --zone public",
"firewall-cmd --runtime-to-permanent",
"satellite-maintain packages install nfs-utils",
"mkdir -p /mnt/nfs/etc/dhcp /mnt/nfs/var/lib/dhcpd",
"chown -R foreman-proxy /mnt/nfs",
"showmount -e DHCP_Server_FQDN rpcinfo -p DHCP_Server_FQDN",
"DHCP_Server_FQDN :/exports/etc/dhcp /mnt/nfs/etc/dhcp nfs ro,vers=3,auto,nosharecache,context=\"system_u:object_r:dhcp_etc_t:s0\" 0 0 DHCP_Server_FQDN :/exports/var/lib/dhcpd /mnt/nfs/var/lib/dhcpd nfs ro,vers=3,auto,nosharecache,context=\"system_u:object_r:dhcpd_state_t:s0\" 0 0",
"mount -a",
"su foreman-proxy -s /bin/bash cat /mnt/nfs/etc/dhcp/dhcpd.conf cat /mnt/nfs/var/lib/dhcpd/dhcpd.leases exit",
"satellite-installer --enable-foreman-proxy-plugin-dhcp-remote-isc --foreman-proxy-dhcp-provider=remote_isc --foreman-proxy-dhcp-server= My_DHCP_Server_FQDN --foreman-proxy-dhcp=true --foreman-proxy-plugin-dhcp-remote-isc-dhcp-config /mnt/nfs/etc/dhcp/dhcpd.conf --foreman-proxy-plugin-dhcp-remote-isc-dhcp-leases /mnt/nfs/var/lib/dhcpd/dhcpd.leases --foreman-proxy-plugin-dhcp-remote-isc-key-name=omapi_key --foreman-proxy-plugin-dhcp-remote-isc-key-secret= My_Secret --foreman-proxy-plugin-dhcp-remote-isc-omapi-port=7911",
"mkdir -p /mnt/nfs/var/lib/tftpboot",
"TFTP_Server_IP_Address :/exports/var/lib/tftpboot /mnt/nfs/var/lib/tftpboot nfs rw,vers=3,auto,nosharecache,context=\"system_u:object_r:tftpdir_rw_t:s0\" 0 0",
"mount -a",
"satellite-installer --foreman-proxy-tftp-root /mnt/nfs/var/lib/tftpboot --foreman-proxy-tftp=true",
"satellite-installer --foreman-proxy-tftp-servername= TFTP_Server_FQDN",
"kinit idm_user",
"ipa service-add capsule/satellite.example.com",
"satellite-maintain packages install ipa-client",
"ipa-client-install",
"kinit admin",
"rm /etc/foreman-proxy/dns.keytab",
"ipa-getkeytab -p capsule/ [email protected] -s idm1.example.com -k /etc/foreman-proxy/dns.keytab",
"chown foreman-proxy:foreman-proxy /etc/foreman-proxy/dns.keytab",
"kinit -kt /etc/foreman-proxy/dns.keytab capsule/ [email protected]",
"grant capsule\\047 [email protected] wildcard * ANY;",
"grant capsule\\047 [email protected] wildcard * ANY;",
"satellite-installer --foreman-proxy-dns-managed=false --foreman-proxy-dns-provider=nsupdate_gss --foreman-proxy-dns-server=\" idm1.example.com \" --foreman-proxy-dns-tsig-keytab=/etc/foreman-proxy/dns.keytab --foreman-proxy-dns-tsig-principal=\"capsule/ [email protected] \" --foreman-proxy-dns=true",
"######################################################################## include \"/etc/rndc.key\"; controls { inet _IdM_Server_IP_Address_ port 953 allow { _Satellite_IP_Address_; } keys { \"rndc-key\"; }; }; ########################################################################",
"systemctl reload named",
"grant \"rndc-key\" zonesub ANY;",
"scp /etc/rndc.key root@ satellite.example.com :/etc/rndc.key",
"restorecon -v /etc/rndc.key chown -v root:named /etc/rndc.key chmod -v 640 /etc/rndc.key",
"usermod -a -G named foreman-proxy",
"satellite-installer --foreman-proxy-dns-managed=false --foreman-proxy-dns-provider=nsupdate --foreman-proxy-dns-server=\" IdM_Server_IP_Address \" --foreman-proxy-dns-ttl=86400 --foreman-proxy-dns=true --foreman-proxy-keyfile=/etc/rndc.key",
"key \"rndc-key\" { algorithm hmac-md5; secret \" secret-key ==\"; };",
"echo -e \"server 192.168.25.1\\n update add test.example.com 3600 IN A 192.168.25.20\\n send\\n\" | nsupdate -k /etc/rndc.key",
"nslookup test.example.com 192.168.25.1 Server: 192.168.25.1 Address: 192.168.25.1#53 Name: test.example.com Address: 192.168.25.20",
"echo -e \"server 192.168.25.1\\n update delete test.example.com 3600 IN A 192.168.25.20\\n send\\n\" | nsupdate -k /etc/rndc.key",
"nslookup test.example.com 192.168.25.1",
"satellite-installer",
"satellite-installer --foreman-proxy-dns-managed=true --foreman-proxy-dns-provider=nsupdate --foreman-proxy-dns-server=\"127.0.0.1\" --foreman-proxy-dns=true",
"dnf module list --enabled",
"dnf module list --enabled",
"dnf module reset ruby",
"dnf module list --enabled",
"dnf module reset postgresql",
"dnf module enable satellite:el8",
"dnf install postgresql-upgrade",
"postgresql-setup --upgrade",
"apache::server_tokens: Prod",
"apache::server_signature: Off",
"cp /etc/dhcp/dhcpd.conf /etc/dhcp/dhcpd.backup",
"journalctl -xe /Stage[main]/Dhcp/File[/etc/dhcp/dhcpd.conf]: Filebucketed /etc/dhcp/dhcpd.conf to puppet with sum 622d9820b8e764ab124367c68f5fa3a1",
"puppet filebucket restore --local --bucket /var/lib/puppet/clientbucket /etc/dhcp/dhcpd.conf \\ 622d9820b8e764ab124367c68f5fa3a1"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html-single/installing_satellite_server_in_a_connected_network_environment/index |
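After completing an installation such as the one outlined above, a quick health check helps confirm that the core services came up. The following commands are a general sketch drawn from standard Satellite tooling rather than from this procedure; available options and output vary between Satellite releases.
# Confirm that Satellite services are running
satellite-maintain service status
# Run the built-in health checks
satellite-maintain health check
# Verify that the backend services respond
hammer ping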
B.38.9. RHSA-2011:0883 - Important: kernel security and bug fix update | B.38.9. RHSA-2011:0883 - Important: kernel security and bug fix update Important This update has already been released as the security errata RHSA-2011:0883 Updated kernel packages that fix multiple security issues and three bugs are now available for Red Hat Enterprise Linux 6.0 Extended Update Support. The Red Hat Security Response Team has rated this update as having important security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links after each description below. The kernel packages contain the Linux kernel, the core of any Linux operating system. This update includes backported fixes for security issues. These issues, except for CVE-2011-1182 , only affected users of Red Hat Enterprise Linux 6.0 Extended Update Support as they have already been addressed for users of Red Hat Enterprise Linux 6 in the 6.1 update, RHSA-2011:0542 . Security fixes * Buffer overflow flaws were found in the Linux kernel's Management Module Support for Message Passing Technology (MPT) based controllers. A local, unprivileged user could use these flaws to cause a denial of service, an information leak, or escalate their privileges. ( CVE-2011-1494 , CVE-2011-1495 , Important) * A flaw was found in the Linux kernel's networking subsystem. If the number of packets received exceeded the receiver's buffer limit, they were queued in a backlog, consuming memory, instead of being discarded. A remote attacker could abuse this flaw to cause a denial of service (out-of-memory condition). ( CVE-2010-4251 , CVE-2010-4805 , Moderate) * A flaw was found in the Linux kernel's Transparent Huge Pages (THP) implementation. A local, unprivileged user could abuse this flaw to allow the user stack (when it is using huge pages) to grow and cause a denial of service. ( CVE-2011-0999 , Moderate) * A flaw in the Linux kernel's Event Poll (epoll) implementation could allow a local, unprivileged user to cause a denial of service. ( CVE-2011-1082 , Moderate) * An inconsistency was found in the interaction between the Linux kernel's method for allocating NFSv4 (Network File System version 4) ACL data and the method by which it was freed. This inconsistency led to a kernel panic which could be triggered by a local, unprivileged user with files owned by said user on an NFSv4 share. ( CVE-2011-1090 , Moderate) * It was found that some structure padding and reserved fields in certain data structures in KVM (Kernel-based Virtual Machine) were not initialized properly before being copied to user-space. A privileged host user with access to /dev/kvm could use this flaw to leak kernel stack memory to user-space. ( CVE-2010-3881 , Low) * A missing validation check was found in the Linux kernel's mac_partition() implementation, used for supporting file systems created on Mac OS operating systems. A local attacker could use this flaw to cause a denial of service by mounting a disk that contains specially-crafted partitions. ( CVE-2011-1010 , Low) * A buffer overflow flaw in the DEC Alpha OSF partition implementation in the Linux kernel could allow a local attacker to cause an information leak by mounting a disk that contains specially-crafted partition tables. 
( CVE-2011-1163 , Low) * Missing validations of null-terminated string data structure elements in the do_replace() , compat_do_replace() , do_ipt_get_ctl() , do_ip6t_get_ctl() , and do_arpt_get_ctl() functions could allow a local user who has the CAP_NET_ADMIN capability to cause an information leak. ( CVE-2011-1170 , CVE-2011-1171 , CVE-2011-1172 , Low) * A missing validation check was found in the Linux kernel's signals implementation. A local, unprivileged user could use this flaw to send signals via the sigqueueinfo system call, with the si_code set to SI_TKILL and with spoofed process and user IDs, to other processes. Note: This flaw does not allow existing permission checks to be bypassed; signals can only be sent if your privileges allow you to already do so. ( CVE-2011-1182 , Low) Red Hat would like to thank Dan Rosenberg for reporting CVE-2011-1494 and CVE-2011-1495; Nelson Elhage for reporting CVE-2011-1082; Vasiliy Kulikov for reporting CVE-2010-3881, CVE-2011-1170, CVE-2011-1171, and CVE-2011-1172; Timo Warns for reporting CVE-2011-1010 and CVE-2011-1163; and Julien Tinnes of the Google Security Team for reporting CVE-2011-1182. Bug fixes BZ# 590187 Previously, CPUs kept continuously locking up in the inet_csk_bind_conflict() function until the entire system became unreachable when all the CPUs were unresponsive due to a hash locking issue when using port redirection in the __inet_inherit_port() function. With this update, the underlying source code of the __inet_inherit_port() function has been modified to address this issue, and CPUs no longer lock up. BZ# 709380 A previously released patch for BZ# 625487 introduced a kABI (Kernel Application Binary Interface) workaround that extended struct sock (the network layer representation of sockets) by putting the extension structure in the memory right after the original structure. As a result, the prot->obj_size pointer had to be adjusted in the proto_register function. Prior to this update, the adjustment was done only if the alloc_slab parameter of the proto_register function was not 0 . When the alloc_slab parameter was 0 , drivers performed allocations themselves using sk_alloc and as the allocated memory was lower than needed, a memory corruption could occur. With this update, the underlying source code has been modified to address this issue, and a memory corruption no longer occurs. BZ# 706543 An IDX ACTIVATE timeout occurred during an online setting of an OSN device. This was because an incorrect function was provided on the IDX ACTIVATE . Because OSN devices use the same function level as OSD devices, this update adds OSN devices to the initialization function for the func_level ; thus, resolving this issue. Users should upgrade to these updated packages, which contain backported patches to correct these issues. The system must be rebooted for this update to take effect. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/rhsa-2011-0883 |
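To apply an erratum such as this one on Red Hat Enterprise Linux 6, the usual path is a yum update followed by a reboot into the new kernel. The commands below are a general sketch, not taken from the advisory itself; the --advisory option assumes the yum-plugin-security package is installed, and a plain yum update of the kernel packages also picks up the fix.
# Install the security plugin if it is not already present (assumes repository access is configured)
yum install yum-plugin-security
# Apply only this advisory ...
yum update --advisory=RHSA-2011:0883
# ... or simply update the kernel packages
yum update kernel
# Reboot so that the new kernel takes effect
reboot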
Chapter 11. Enabling encryption on a vSphere cluster | Chapter 11. Enabling encryption on a vSphere cluster You can encrypt your virtual machines after installing OpenShift Container Platform 4.15 on vSphere by draining and shutting down your nodes one at a time. While each virtual machine is shut down, you can enable encryption in the vCenter web interface. 11.1. Encrypting virtual machines You can encrypt your virtual machines with the following process: drain each node, power down its virtual machine, and enable encryption in the vCenter interface. Finally, create a storage class that uses the encrypted storage. Prerequisites You have configured a Standard key provider in vSphere. For more information, see Adding a KMS to vCenter Server . Important The Native key provider in vCenter is not supported. For more information, see vSphere Native Key Provider Overview . You have enabled host encryption mode on all of the ESXi hosts that are hosting the cluster. For more information, see Enabling host encryption mode . You have a vSphere account that has all cryptographic privileges enabled. For more information, see Cryptographic Operations Privileges . Procedure Drain and cordon one of your nodes. For detailed instructions on node management, see "Working with Nodes". Shut down the virtual machine associated with that node in the vCenter interface. Right-click the virtual machine in the vCenter interface and select VM Policies → Edit VM Storage Policies . Select an encrypted storage policy and select OK . Start the encrypted virtual machine in the vCenter interface. Repeat steps 1-5 for all nodes that you want to encrypt. Configure a storage class that uses the encrypted storage policy. For more information about configuring an encrypted storage class, see "VMware vSphere CSI Driver Operator". 11.2. Additional resources Working with nodes vSphere encryption Requirements for encrypting virtual machines | null | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_vsphere/vsphere-post-installation-encryption
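The last step of the procedure above, creating a storage class that uses the encrypted storage policy, can look roughly like the following sketch. The policy name ("openshift-encrypted-policy") and the storagepolicyname parameter key are assumptions for illustration only; check the VMware vSphere CSI Driver Operator documentation for the exact parameter spelling supported by your driver version.
# Create a storage class bound to the encrypted vCenter storage policy (names are assumed)
cat <<'EOF' | oc create -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: thin-csi-encrypted
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "openshift-encrypted-policy"  # name of the encrypted storage policy in vCenter (assumed)
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
EOF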
Chapter 12. Managing control plane machines | Chapter 12. Managing control plane machines 12.1. About control plane machine sets With control plane machine sets, you can automate management of the control plane machine resources within your OpenShift Container Platform cluster. Important Control plane machine sets cannot manage compute machines, and compute machine sets cannot manage control plane machines. Control plane machine sets provide for control plane machines similar management capabilities as compute machine sets provide for compute machines. However, these two types of machine sets are separate custom resources defined within the Machine API and have several fundamental differences in their architecture and functionality. 12.1.1. Control Plane Machine Set Operator overview The Control Plane Machine Set Operator uses the ControlPlaneMachineSet custom resource (CR) to automate management of the control plane machine resources within your OpenShift Container Platform cluster. When the state of the cluster control plane machine set is set to Active , the Operator ensures that the cluster has the correct number of control plane machines with the specified configuration. This allows the automated replacement of degraded control plane machines and rollout of changes to the control plane. A cluster has only one control plane machine set, and the Operator only manages objects in the openshift-machine-api namespace. 12.1.1.1. Control Plane Machine Set Operator limitations The Control Plane Machine Set Operator has the following limitations: Only Amazon Web Services (AWS), Google Cloud Platform (GCP), IBM Power(R) Virtual Server, Microsoft Azure, Nutanix, VMware vSphere, and Red Hat OpenStack Platform (RHOSP) clusters are supported. Clusters that do not have preexisting machines that represent the control plane nodes cannot use a control plane machine set or enable the use of a control plane machine set after installation. Generally, preexisting control plane machines are only present if a cluster was installed using infrastructure provisioned by the installation program. To determine if a cluster has the required preexisting control plane machines, run the following command as a user with administrator privileges: USD oc get machine \ -n openshift-machine-api \ -l machine.openshift.io/cluster-api-machine-role=master Example output showing preexisting control plane machines NAME PHASE TYPE REGION ZONE AGE <infrastructure_id>-master-0 Running m6i.xlarge us-west-1 us-west-1a 5h19m <infrastructure_id>-master-1 Running m6i.xlarge us-west-1 us-west-1b 5h19m <infrastructure_id>-master-2 Running m6i.xlarge us-west-1 us-west-1a 5h19m Example output missing preexisting control plane machines No resources found in openshift-machine-api namespace. The Operator requires the Machine API Operator to be operational and is therefore not supported on clusters with manually provisioned machines. When installing a OpenShift Container Platform cluster with manually provisioned machines for a platform that creates an active generated ControlPlaneMachineSet custom resource (CR), you must remove the Kubernetes manifest files that define the control plane machine set as instructed in the installation process. Only clusters with three control plane machines are supported. Horizontal scaling of the control plane is not supported. Deploying Azure control plane machines on Ephemeral OS disks increases risk for data loss and is not supported. 
Deploying control plane machines as AWS Spot Instances, GCP preemptible VMs, or Azure Spot VMs is not supported. Important Attempting to deploy control plane machines as AWS Spot Instances, GCP preemptible VMs, or Azure Spot VMs might cause the cluster to lose etcd quorum. A cluster that loses all control plane machines simultaneously is unrecoverable. Making changes to the control plane machine set during or prior to installation is not supported. You must make any changes to the control plane machine set only after installation. 12.1.2. Additional resources Control Plane Machine Set Operator reference ControlPlaneMachineSet custom resource 12.2. Getting started with control plane machine sets The process for getting started with control plane machine sets depends on the state of the ControlPlaneMachineSet custom resource (CR) in your cluster. Clusters with an active generated CR Clusters that have a generated CR with an active state use the control plane machine set by default. No administrator action is required. Clusters with an inactive generated CR For clusters that include an inactive generated CR, you must review the CR configuration and activate the CR . Clusters without a generated CR For clusters that do not include a generated CR, you must create and activate a CR with the appropriate configuration for your cluster. If you are uncertain about the state of the ControlPlaneMachineSet CR in your cluster, you can verify the CR status . 12.2.1. Supported cloud providers In OpenShift Container Platform 4.18, the control plane machine set is supported for Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, Nutanix, and VMware vSphere clusters. The status of the control plane machine set after installation depends on your cloud provider and the version of OpenShift Container Platform that you installed on your cluster. Table 12.1. Control plane machine set implementation for OpenShift Container Platform 4.18 Cloud provider Active by default Generated CR Manual CR required Amazon Web Services (AWS) X [1] X Google Cloud Platform (GCP) X [2] X Microsoft Azure X [2] X Nutanix X [3] X Red Hat OpenStack Platform (RHOSP) X [3] X VMware vSphere X [4] X AWS clusters that are upgraded from version 4.11 or earlier require CR activation . GCP and Azure clusters that are upgraded from version 4.12 or earlier require CR activation . Nutanix and RHOSP clusters that are upgraded from version 4.13 or earlier require CR activation . vSphere clusters that are upgraded from version 4.15 or earlier require CR activation . 12.2.2. Checking the control plane machine set custom resource state You can verify the existence and state of the ControlPlaneMachineSet custom resource (CR). Procedure Determine the state of the CR by running the following command: USD oc get controlplanemachineset.machine.openshift.io cluster \ --namespace openshift-machine-api A result of Active indicates that the ControlPlaneMachineSet CR exists and is activated. No administrator action is required. A result of Inactive indicates that a ControlPlaneMachineSet CR exists but is not activated. A result of NotFound indicates that there is no existing ControlPlaneMachineSet CR. steps To use the control plane machine set, you must ensure that a ControlPlaneMachineSet CR with the correct settings for your cluster exists. If your cluster has an existing CR, you must verify that the configuration in the CR is correct for your cluster. 
If your cluster does not have an existing CR, you must create one with the correct configuration for your cluster. 12.2.3. Activating the control plane machine set custom resource To use the control plane machine set, you must ensure that a ControlPlaneMachineSet custom resource (CR) with the correct settings for your cluster exists. On a cluster with a generated CR, you must verify that the configuration in the CR is correct for your cluster and activate it. Note For more information about the parameters in the CR, see "Control plane machine set configuration". Procedure View the configuration of the CR by running the following command: USD oc --namespace openshift-machine-api edit controlplanemachineset.machine.openshift.io cluster Change the values of any fields that are incorrect for your cluster configuration. When the configuration is correct, activate the CR by setting the .spec.state field to Active and saving your changes. Important To activate the CR, you must change the .spec.state field to Active in the same oc edit session that you use to update the CR configuration. If the CR is saved with the state left as Inactive , the control plane machine set generator resets the CR to its original settings. Additional resources Control plane machine set configuration 12.2.4. Creating a control plane machine set custom resource To use the control plane machine set, you must ensure that a ControlPlaneMachineSet custom resource (CR) with the correct settings for your cluster exists. On a cluster without a generated CR, you must create the CR manually and activate it. Note For more information about the structure and parameters of the CR, see "Control plane machine set configuration". Procedure Create a YAML file using the following template: Control plane machine set CR YAML file template apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: replicas: 3 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <cluster_id> 1 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master state: Active 2 strategy: type: RollingUpdate 3 template: machineType: machines_v1beta1_machine_openshift_io machines_v1beta1_machine_openshift_io: failureDomains: platform: <platform> 4 <platform_failure_domains> 5 metadata: labels: machine.openshift.io/cluster-api-cluster: <cluster_id> 6 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master spec: providerSpec: value: <platform_provider_spec> 7 1 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. You must specify this value when you create a ControlPlaneMachineSet CR. If you have the OpenShift CLI ( oc ) installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 Specify the state of the Operator. When the state is Inactive , the Operator is not operational. You can activate the Operator by setting the value to Active . Important Before you activate the CR, you must ensure that its configuration is correct for your cluster requirements. 3 Specify the update strategy for the cluster. Valid values are OnDelete and RollingUpdate . The default value is RollingUpdate . For more information about update strategies, see "Updating the control plane configuration". 4 Specify your cloud provider platform name. 
Valid values are AWS , Azure , GCP , Nutanix , VSphere , and OpenStack . 5 Add the <platform_failure_domains> configuration for the cluster. The format and values of this section are provider-specific. For more information, see the sample failure domain configuration for your cloud provider. 6 Specify the infrastructure ID. 7 Add the <platform_provider_spec> configuration for the cluster. The format and values of this section are provider-specific. For more information, see the sample provider specification for your cloud provider. Refer to the sample YAML for a control plane machine set CR and populate your file with values that are appropriate for your cluster configuration. Refer to the sample failure domain configuration and sample provider specification for your cloud provider and update those sections of your file with the appropriate values. When the configuration is correct, activate the CR by setting the .spec.state field to Active and saving your changes. Create the CR from your YAML file by running the following command: USD oc create -f <control_plane_machine_set>.yaml where <control_plane_machine_set> is the name of the YAML file that contains the CR configuration. Additional resources Updating the control plane configuration Control plane machine set configuration Provider-specific configuration options 12.3. Managing control plane machines with control plane machine sets Control plane machine sets automate several essential aspects of control plane management. 12.3.1. Updating the control plane configuration You can make changes to the configuration of the machines in the control plane by updating the specification in the control plane machine set custom resource (CR). The Control Plane Machine Set Operator monitors the control plane machines and compares their configuration with the specification in the control plane machine set CR. When there is a discrepancy between the specification in the CR and the configuration of a control plane machine, the Operator marks that control plane machine for replacement. Note For more information about the parameters in the CR, see "Control plane machine set configuration". Prerequisites Your cluster has an activated and functioning Control Plane Machine Set Operator. Procedure Edit your control plane machine set CR by running the following command: USD oc edit controlplanemachineset.machine.openshift.io cluster \ -n openshift-machine-api Change the values of any fields that you want to update in your cluster configuration. Save your changes. steps For clusters that use the default RollingUpdate update strategy, the control plane machine set propagates changes to your control plane configuration automatically. For clusters that are configured to use the OnDelete update strategy, you must replace your control plane machines manually. 12.3.1.1. Automatic updates to the control plane configuration The RollingUpdate update strategy automatically propagates changes to your control plane configuration. This update strategy is the default configuration for the control plane machine set. For clusters that use the RollingUpdate update strategy, the Operator creates a replacement control plane machine with the configuration that is specified in the CR. When the replacement control plane machine is ready, the Operator deletes the control plane machine that is marked for replacement. The replacement machine then joins the control plane. 
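While a rolling replacement is in progress, you can watch the control plane machines move through the Provisioning, Running, and Deleting phases. The following command is a sketch for observation only and does not change cluster state:
$ oc get machines \
  -n openshift-machine-api \
  -l machine.openshift.io/cluster-api-machine-role=master \
  -w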
If multiple control plane machines are marked for replacement, the Operator protects etcd health during replacement by repeating this replacement process one machine at a time until it has replaced each machine. 12.3.1.2. Manual updates to the control plane configuration You can use the OnDelete update strategy to propagate changes to your control plane configuration by replacing machines manually. Manually replacing machines allows you to test changes to your configuration on a single machine before applying the changes more broadly. For clusters that are configured to use the OnDelete update strategy, the Operator creates a replacement control plane machine when you delete an existing machine. When the replacement control plane machine is ready, the etcd Operator allows the existing machine to be deleted. The replacement machine then joins the control plane. If multiple control plane machines are deleted, the Operator creates all of the required replacement machines simultaneously. The Operator maintains etcd health by preventing more than one machine being removed from the control plane at once. 12.3.2. Replacing a control plane machine To replace a control plane machine in a cluster that has a control plane machine set, you delete the machine manually. The control plane machine set replaces the deleted machine with one using the specification in the control plane machine set custom resource (CR). Prerequisites If your cluster runs on Red Hat OpenStack Platform (RHOSP) and you need to evacuate a compute server, such as for an upgrade, you must disable the RHOSP compute node that the machine runs on by running the following command: USD openstack compute service set <target_node_host_name> nova-compute --disable For more information, see Preparing to migrate in the RHOSP documentation. Procedure List the control plane machines in your cluster by running the following command: USD oc get machines \ -l machine.openshift.io/cluster-api-machine-role==master \ -n openshift-machine-api Delete a control plane machine by running the following command: USD oc delete machine \ -n openshift-machine-api \ <control_plane_machine_name> 1 1 Specify the name of the control plane machine to delete. Note If you delete multiple control plane machines, the control plane machine set replaces them according to the configured update strategy: For clusters that use the default RollingUpdate update strategy, the Operator replaces one machine at a time until each machine is replaced. For clusters that are configured to use the OnDelete update strategy, the Operator creates all of the required replacement machines simultaneously. Both strategies maintain etcd health during control plane machine replacement. 12.3.3. Additional resources Control plane machine set configuration Provider-specific configuration options 12.4. Control plane machine set configuration This example YAML snippet shows the base structure for a control plane machine set custom resource (CR). 12.4.1. Sample YAML for a control plane machine set custom resource The base of the ControlPlaneMachineSet CR is structured the same way for all platforms. 
Sample ControlPlaneMachineSet CR YAML file apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster 1 namespace: openshift-machine-api spec: replicas: 3 2 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <cluster_id> 3 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master state: Active 4 strategy: type: RollingUpdate 5 template: machineType: machines_v1beta1_machine_openshift_io machines_v1beta1_machine_openshift_io: failureDomains: platform: <platform> 6 <platform_failure_domains> 7 metadata: labels: machine.openshift.io/cluster-api-cluster: <cluster_id> machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master spec: providerSpec: value: <platform_provider_spec> 8 1 Specifies the name of the ControlPlaneMachineSet CR, which is cluster . Do not change this value. 2 Specifies the number of control plane machines. Only clusters with three control plane machines are supported, so the replicas value is 3 . Horizontal scaling is not supported. Do not change this value. 3 Specifies the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. You must specify this value when you create a ControlPlaneMachineSet CR. If you have the OpenShift CLI ( oc ) installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 4 Specifies the state of the Operator. When the state is Inactive , the Operator is not operational. You can activate the Operator by setting the value to Active . Important Before you activate the Operator, you must ensure that the ControlPlaneMachineSet CR configuration is correct for your cluster requirements. For more information about activating the Control Plane Machine Set Operator, see "Getting started with control plane machine sets". 5 Specifies the update strategy for the cluster. The allowed values are OnDelete and RollingUpdate . The default value is RollingUpdate . For more information about update strategies, see "Updating the control plane configuration". 6 Specifies the cloud provider platform name. Do not change this value. 7 Specifies the <platform_failure_domains> configuration for the cluster. The format and values of this section are provider-specific. For more information, see the sample failure domain configuration for your cloud provider. 8 Specifies the <platform_provider_spec> configuration for the cluster. The format and values of this section are provider-specific. For more information, see the sample provider specification for your cloud provider. Additional resources Getting started with control plane machine sets Updating the control plane configuration 12.4.2. Provider-specific configuration options The <platform_provider_spec> and <platform_failure_domains> sections of the control plane machine set manifests are provider specific. For provider-specific configuration options for your cluster, see the following resources: Control plane configuration options for Amazon Web Services Control plane configuration options for Google Cloud Platform Control plane configuration options for Microsoft Azure Control plane configuration options for Nutanix Control plane configuration options for Red Hat OpenStack Platform (RHOSP) Control plane configuration options for VMware vSphere 12.5. Configuration options for control plane machines 12.5.1. 
Control plane configuration options for Amazon Web Services You can change the configuration of your Amazon Web Services (AWS) control plane machines and enable features by updating values in the control plane machine set. When you save an update to the control plane machine set, the Control Plane Machine Set Operator updates the control plane machines according to your configured update strategy . 12.5.1.1. Sample YAML for configuring Amazon Web Services clusters The following example YAML snippets show provider specification and failure domain configurations for an AWS cluster. 12.5.1.1.1. Sample AWS provider specification When you create a control plane machine set for an existing cluster, the provider specification must match the providerSpec configuration in the control plane machine custom resource (CR) that is created by the installation program. You can omit any field that is set in the failure domain section of the CR. In the following example, <cluster_id> is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster Sample AWS providerSpec values apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: # ... template: # ... spec: providerSpec: value: ami: id: ami-<ami_id_string> 1 apiVersion: machine.openshift.io/v1beta1 blockDevices: - ebs: 2 encrypted: true iops: 0 kmsKey: arn: "" volumeSize: 120 volumeType: gp3 credentialsSecret: name: aws-cloud-credentials 3 deviceIndex: 0 iamInstanceProfile: id: <cluster_id>-master-profile 4 instanceType: m6i.xlarge 5 kind: AWSMachineProviderConfig 6 loadBalancers: 7 - name: <cluster_id>-int type: network - name: <cluster_id>-ext type: network metadata: creationTimestamp: null metadataServiceOptions: {} placement: 8 region: <region> 9 availabilityZone: "" 10 tenancy: 11 securityGroups: - filters: - name: tag:Name values: - <cluster_id>-master-sg 12 subnet: {} 13 userDataSecret: name: master-user-data 14 1 Specifies the Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Images (AMI) ID for the cluster. The AMI must belong to the same region as the cluster. If you want to use an AWS Marketplace image, you must complete the OpenShift Container Platform subscription from the AWS Marketplace to obtain an AMI ID for your region. 2 Specifies the configuration of an encrypted EBS volume. 3 Specifies the secret name for the cluster. Do not change this value. 4 Specifies the AWS Identity and Access Management (IAM) instance profile. Do not change this value. 5 Specifies the AWS instance type for the control plane. 6 Specifies the cloud provider platform type. Do not change this value. 7 Specifies the internal ( int ) and external ( ext ) load balancers for the cluster. Note You can omit the external ( ext ) load balancer parameters on private OpenShift Container Platform clusters. 8 Specifies where to create the control plane instance in AWS. 9 Specifies the AWS region for the cluster. 10 This parameter is configured in the failure domain and is shown with an empty value here. If a value specified for this parameter differs from the value in the failure domain, the Control Plane Machine Set Operator overwrites it with the value in the failure domain. 11 Specifies the AWS Dedicated Instance configuration for the control plane. 
For more information, see AWS documentation about Dedicated Instances . The following values are valid: default : The Dedicated Instance runs on shared hardware. dedicated : The Dedicated Instance runs on single-tenant hardware. host : The Dedicated Instance runs on a Dedicated Host, which is an isolated server with configurations that you can control. 12 Specifies the security group for the control plane machines. 13 This parameter is configured in the failure domain and is shown with an empty value here. If a value specified for this parameter differs from the value in the failure domain, the Control Plane Machine Set Operator overwrites it with the value in the failure domain. Note If the failure domain configuration does not specify a value, the value in the provider specification is used. Configuring a subnet in the failure domain overwrites the subnet value in the provider specification. 14 Specifies the control plane user data secret. Do not change this value. 12.5.1.1.2. Sample AWS failure domain configuration The control plane machine set concept of a failure domain is analogous to the existing AWS concept of an Availability Zone (AZ). The ControlPlaneMachineSet CR spreads control plane machines across multiple failure domains when possible. When configuring AWS failure domains in the control plane machine set, you must specify the availability zone name and the subnet to use. Sample AWS failure domain values apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: # ... template: # ... machines_v1beta1_machine_openshift_io: failureDomains: aws: - placement: availabilityZone: <aws_zone_a> 1 subnet: 2 filters: - name: tag:Name values: - <cluster_id>-private-<aws_zone_a> 3 type: Filters 4 - placement: availabilityZone: <aws_zone_b> 5 subnet: filters: - name: tag:Name values: - <cluster_id>-private-<aws_zone_b> 6 type: Filters platform: AWS 7 # ... 1 Specifies an AWS availability zone for the first failure domain. 2 Specifies a subnet configuration. In this example, the subnet type is Filters , so there is a filters stanza. 3 Specifies the subnet name for the first failure domain, using the infrastructure ID and the AWS availability zone. 4 Specifies the subnet type. The allowed values are ARN , Filters , and ID . The default value is Filters . 5 Specifies an AWS availability zone for an additional failure domain. 6 Specifies the subnet name for the additional failure domain, using the infrastructure ID and the AWS availability zone. 7 Specifies the cloud provider platform name. Do not change this value. 12.5.1.2. Enabling Amazon Web Services features for control plane machines You can enable features by updating values in the control plane machine set. 12.5.1.2.1. Restricting the API server to private After you deploy a cluster to Amazon Web Services (AWS), you can reconfigure the API server to use only the private zone. Prerequisites Install the OpenShift CLI ( oc ). Have access to the web console as a user with admin privileges. Procedure In the web portal or console for your cloud provider, take the following actions: Locate and delete the appropriate load balancer component: For AWS, delete the external load balancer. The API DNS entry in the private zone already points to the internal load balancer, which uses an identical configuration, so you do not need to modify the internal load balancer. Delete the api.$clustername.$yourdomain DNS entry in the public zone.
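If your public zone is hosted in Amazon Route 53, you can confirm which record you are about to remove before you delete it. The hosted zone ID, cluster name, and base domain in this example are placeholders:
$ aws route53 list-resource-record-sets \
  --hosted-zone-id <public_zone_id> \
  --query "ResourceRecordSets[?Name=='api.<cluster_name>.<base_domain>.']"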
Remove the external load balancers by deleting the following indicated lines in the control plane machine set custom resource: # ... providerSpec: value: # ... loadBalancers: - name: lk4pj-ext 1 type: network 2 - name: lk4pj-int type: network # ... 1 Delete the name value for the external load balancer, which ends in -ext . 2 Delete the type value for the external load balancer. Additional resources Configuring the Ingress Controller endpoint publishing scope to Internal 12.5.1.2.2. Changing the Amazon Web Services instance type by using a control plane machine set You can change the Amazon Web Services (AWS) instance type that your control plane machines use by updating the specification in the control plane machine set custom resource (CR). Prerequisites Your AWS cluster uses a control plane machine set. Procedure Edit the following line under the providerSpec field: providerSpec: value: ... instanceType: <compatible_aws_instance_type> 1 1 Specify a larger AWS instance type with the same base as the selection. For example, you can change m6i.xlarge to m6i.2xlarge or m6i.4xlarge . Save your changes. 12.5.1.2.3. Assigning machines to placement groups for Elastic Fabric Adapter instances by using machine sets You can configure a machine set to deploy machines on Elastic Fabric Adapter (EFA) instances within an existing AWS placement group. EFA instances do not require placement groups, and you can use placement groups for purposes other than configuring an EFA. This example uses both to demonstrate a configuration that can improve network performance for machines within the specified placement group. Prerequisites You created a placement group in the AWS console. Note Ensure that the rules and limitations for the type of placement group that you create are compatible with your intended use case. The control plane machine set spreads the control plane machines across multiple failure domains when possible. To use placement groups for the control plane, you must use a placement group type that can span multiple Availability Zones. Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following lines under the providerSpec field: apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet # ... spec: template: spec: providerSpec: value: instanceType: <supported_instance_type> 1 networkInterfaceType: EFA 2 placement: availabilityZone: <zone> 3 region: <region> 4 placementGroupName: <placement_group> 5 placementGroupPartition: <placement_group_partition_number> 6 # ... 1 Specify an instance type that supports EFAs . 2 Specify the EFA network interface type. 3 Specify the zone, for example, us-east-1a . 4 Specify the region, for example, us-east-1 . 5 Specify the name of the existing AWS placement group to deploy machines in. 6 Optional: Specify the partition number of the existing AWS placement group to deploy machines in. Verification In the AWS console, find a machine that the machine set created and verify the following in the machine properties: The placement group field has the value that you specified for the placementGroupName parameter in the machine set. The partition number field has the value that you specified for the placementGroupPartition parameter in the machine set. The interface type field indicates that it uses an EFA. 12.5.1.2.4. Machine set options for the Amazon EC2 Instance Metadata Service You can use machine sets to create machines that use a specific version of the Amazon EC2 Instance Metadata Service (IMDS). 
Machine sets can create machines that allow the use of both IMDSv1 and IMDSv2 or machines that require the use of IMDSv2. Note Using IMDSv2 is only supported on AWS clusters that were created with OpenShift Container Platform version 4.7 or later. Important Before configuring a machine set to create machines that require IMDSv2, ensure that any workloads that interact with the AWS metadata service support IMDSv2. 12.5.1.2.4.1. Configuring IMDS by using machine sets You can specify whether to require the use of IMDSv2 by adding or editing the value of metadataServiceOptions.authentication in the machine set YAML file for your machines. Prerequisites To use IMDSv2, your AWS cluster must have been created with OpenShift Container Platform version 4.7 or later. Procedure Add or edit the following lines under the providerSpec field: providerSpec: value: metadataServiceOptions: authentication: Required 1 1 To require IMDSv2, set the parameter value to Required . To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional . If no value is specified, both IMDSv1 and IMDSv2 are allowed. 12.5.1.2.5. Machine sets that deploy machines as Dedicated Instances You can create a machine set running on AWS that deploys machines as Dedicated Instances. Dedicated Instances run in a virtual private cloud (VPC) on hardware that is dedicated to a single customer. These Amazon EC2 instances are physically isolated at the host hardware level. The isolation of Dedicated Instances occurs even if the instances belong to different AWS accounts that are linked to a single payer account. However, other instances that are not dedicated can share hardware with Dedicated Instances if they belong to the same AWS account. Instances with either public or dedicated tenancy are supported by the Machine API. Instances with public tenancy run on shared hardware. Public tenancy is the default tenancy. Instances with dedicated tenancy run on single-tenant hardware. 12.5.1.2.5.1. Creating Dedicated Instances by using machine sets You can run a machine that is backed by a Dedicated Instance by using Machine API integration. Set the tenancy field in your machine set YAML file to launch a Dedicated Instance on AWS. Procedure Specify a dedicated tenancy under the providerSpec field: providerSpec: placement: tenancy: dedicated 12.5.2. Control plane configuration options for Microsoft Azure You can change the configuration of your Microsoft Azure control plane machines and enable features by updating values in the control plane machine set. When you save an update to the control plane machine set, the Control Plane Machine Set Operator updates the control plane machines according to your configured update strategy . 12.5.2.1. Sample YAML for configuring Microsoft Azure clusters The following example YAML snippets show provider specification and failure domain configurations for an Azure cluster. 12.5.2.1.1. Sample Azure provider specification When you create a control plane machine set for an existing cluster, the provider specification must match the providerSpec configuration in the control plane Machine CR that is created by the installation program. You can omit any field that is set in the failure domain section of the CR. In the following example, <cluster_id> is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. 
If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster Sample Azure providerSpec values apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: # ... template: # ... spec: providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials 1 namespace: openshift-machine-api diagnostics: {} image: 2 offer: "" publisher: "" resourceID: /resourceGroups/<cluster_id>-rg/providers/Microsoft.Compute/galleries/gallery_<cluster_id>/images/<cluster_id>-gen2/versions/412.86.20220930 3 sku: "" version: "" internalLoadBalancer: <cluster_id>-internal 4 kind: AzureMachineProviderSpec 5 location: <region> 6 managedIdentity: <cluster_id>-identity metadata: creationTimestamp: null name: <cluster_id> networkResourceGroup: <cluster_id>-rg osDisk: 7 diskSettings: {} diskSizeGB: 1024 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: <cluster_id> 8 resourceGroup: <cluster_id>-rg subnet: <cluster_id>-master-subnet 9 userDataSecret: name: master-user-data 10 vmSize: Standard_D8s_v3 vnet: <cluster_id>-vnet zone: "1" 11 1 Specifies the secret name for the cluster. Do not change this value. 2 Specifies the image details for your control plane machine set. 3 Specifies an image that is compatible with your instance type. The Hyper-V generation V2 images created by the installation program have a -gen2 suffix, while V1 images have the same name without the suffix. 4 Specifies the internal load balancer for the control plane. This field might not be preconfigured but is required in both the ControlPlaneMachineSet and control plane Machine CRs. 5 Specifies the cloud provider platform type. Do not change this value. 6 Specifies the region to place control plane machines on. 7 Specifies the disk configuration for the control plane. 8 Specifies the public load balancer for the control plane. Note You can omit the publicLoadBalancer parameter on private OpenShift Container Platform clusters that have user-defined outbound routing. 9 Specifies the subnet for the control plane. 10 Specifies the control plane user data secret. Do not change this value. 11 Specifies the zone configuration for clusters that use a single zone for all failure domains. Note If the cluster is configured to use a different zone for each failure domain, this parameter is configured in the failure domain. If you specify this value in the provider specification when using different zones for each failure domain, the Control Plane Machine Set Operator ignores it. 12.5.2.1.2. Sample Azure failure domain configuration The control plane machine set concept of a failure domain is analogous to existing Azure concept of an Azure availability zone . The ControlPlaneMachineSet CR spreads control plane machines across multiple failure domains when possible. When configuring Azure failure domains in the control plane machine set, you must specify the availability zone name. An Azure cluster uses a single subnet that spans multiple zones. Sample Azure failure domain values apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: # ... template: # ... machines_v1beta1_machine_openshift_io: failureDomains: azure: - zone: "1" 1 - zone: "2" - zone: "3" platform: Azure 2 # ... 
1 Each instance of zone specifies an Azure availability zone for a failure domain. Note If the cluster is configured to use a single zone for all failure domains, the zone parameter is configured in the provider specification instead of in the failure domain configuration. 2 Specifies the cloud provider platform name. Do not change this value. 12.5.2.2. Enabling Microsoft Azure features for control plane machines You can enable features by updating values in the control plane machine set. 12.5.2.2.1. Restricting the API server to private After you deploy a cluster to Microsoft Azure, you can reconfigure the API server to use only the private zone. Prerequisites Install the OpenShift CLI ( oc ). Have access to the web console as a user with admin privileges. Procedure In the web portal or console for your cloud provider, take the following actions: Locate and delete the appropriate load balancer component: Delete the api.$clustername.$yourdomain DNS entry in the public zone. Remove the external load balancers by deleting the following indicated lines in the control plane machine set custom resource: # ... providerSpec: value: # ... loadBalancers: - name: lk4pj-ext 1 type: network 2 - name: lk4pj-int type: network # ... 1 Delete the name value for the external load balancer, which ends in -ext . 2 Delete the type value for the external load balancer. Additional resources Configuring the Ingress Controller endpoint publishing scope to Internal 12.5.2.2.2. Using the Azure Marketplace offering You can create a machine set running on Azure that deploys machines that use the Azure Marketplace offering. To use this offering, you must first obtain the Azure Marketplace image. When obtaining your image, consider the following: While the images are the same, the Azure Marketplace publisher is different depending on your region. If you are located in North America, specify redhat as the publisher. If you are located in EMEA, specify redhat-limited as the publisher. The offer includes a rh-ocp-worker SKU and a rh-ocp-worker-gen1 SKU. The rh-ocp-worker SKU represents a Hyper-V generation version 2 VM image. The default instance types used in OpenShift Container Platform are version 2 compatible. If you plan to use an instance type that is only version 1 compatible, use the image associated with the rh-ocp-worker-gen1 SKU. The rh-ocp-worker-gen1 SKU represents a Hyper-V version 1 VM image. Important Installing images with the Azure Marketplace is not supported on clusters with 64-bit ARM instances. Prerequisites You have installed the Azure CLI client (az) . Your Azure account is entitled for the offer and you have logged into this account with the Azure CLI client.
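For example, you can log in and select the subscription that holds the offer entitlement before you query for images. The subscription ID shown here is a placeholder:
$ az login
$ az account set --subscription <subscription_id>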
Procedure Display all of the available OpenShift Container Platform images by running one of the following commands: North America: USD az vm image list --all --offer rh-ocp-worker --publisher redhat -o table Example output Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409 EMEA: USD az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table Example output Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409 Note Use the latest image that is available for compute and control plane nodes. If required, your VMs are automatically upgraded as part of the installation process. Inspect the image for your offer by running one of the following commands: North America: USD az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Review the terms of the offer by running one of the following commands: North America: USD az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Accept the terms of the offering by running one of the following commands: North America: USD az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Record the image details of your offer, specifically the values for publisher , offer , sku , and version . Add the following parameters to the providerSpec section of your machine set YAML file using the image details for your offer: Sample providerSpec image values for Azure Marketplace machines providerSpec: value: image: offer: rh-ocp-worker publisher: redhat resourceID: "" sku: rh-ocp-worker type: MarketplaceWithPlan version: 413.92.2023101700 12.5.2.2.3. Enabling Azure boot diagnostics You can enable boot diagnostics on Azure machines that your machine set creates. Prerequisites Have an existing Microsoft Azure cluster. Procedure Add the diagnostics configuration that is applicable to your storage type to the providerSpec field in your machine set YAML file: For an Azure Managed storage account: providerSpec: diagnostics: boot: storageAccountType: AzureManaged 1 1 Specifies an Azure Managed storage account. For an Azure Unmanaged storage account: providerSpec: diagnostics: boot: storageAccountType: CustomerManaged 1 customerManaged: storageAccountURI: https://<storage-account>.blob.core.windows.net 2 1 Specifies an Azure Unmanaged storage account. 2 Replace <storage-account> with the name of your storage account. Note Only the Azure Blob Storage data service is supported. Verification On the Microsoft Azure portal, review the Boot diagnostics page for a machine deployed by the machine set, and verify that you can see the serial logs for the machine. 12.5.2.2.4. 
Machine sets that deploy machines with ultra disks as data disks You can create a machine set running on Azure that deploys machines with ultra disks. Ultra disks are high-performance storage that are intended for use with the most demanding data workloads. Additional resources Microsoft Azure ultra disks documentation 12.5.2.2.4.1. Creating machines with ultra disks by using machine sets You can deploy machines with ultra disks on Azure by editing your machine set YAML file. Prerequisites Have an existing Microsoft Azure cluster. Procedure Create a custom secret in the openshift-machine-api namespace using the master data secret by running the following command: USD oc -n openshift-machine-api \ get secret <role>-user-data \ 1 --template='{{index .data.userData | base64decode}}' | jq > userData.txt 2 1 Replace <role> with master . 2 Specify userData.txt as the name of the new custom secret. In a text editor, open the userData.txt file and locate the final } character in the file. On the immediately preceding line, add a , . Create a new line after the , and add the following configuration details: "storage": { "disks": [ 1 { "device": "/dev/disk/azure/scsi1/lun0", 2 "partitions": [ 3 { "label": "lun0p1", 4 "sizeMiB": 1024, 5 "startMiB": 0 } ] } ], "filesystems": [ 6 { "device": "/dev/disk/by-partlabel/lun0p1", "format": "xfs", "path": "/var/lib/lun0p1" } ] }, "systemd": { "units": [ 7 { "contents": "[Unit]\nBefore=local-fs.target\n[Mount]\nWhere=/var/lib/lun0p1\nWhat=/dev/disk/by-partlabel/lun0p1\nOptions=defaults,pquota\n[Install]\nWantedBy=local-fs.target\n", 8 "enabled": true, "name": "var-lib-lun0p1.mount" } ] } 1 The configuration details for the disk that you want to attach to a node as an ultra disk. 2 Specify the lun value that is defined in the dataDisks stanza of the machine set you are using. For example, if the machine set contains lun: 0 , specify lun0 . You can initialize multiple data disks by specifying multiple "disks" entries in this configuration file. If you specify multiple "disks" entries, ensure that the lun value for each matches the value in the machine set. 3 The configuration details for a new partition on the disk. 4 Specify a label for the partition. You might find it helpful to use hierarchical names, such as lun0p1 for the first partition of lun0 . 5 Specify the total size in MiB of the partition. 6 Specify the filesystem to use when formatting a partition. Use the partition label to specify the partition. 7 Specify a systemd unit to mount the partition at boot. Use the partition label to specify the partition. You can create multiple partitions by specifying multiple "partitions" entries in this configuration file. If you specify multiple "partitions" entries, you must specify a systemd unit for each. 8 For Where , specify the value of storage.filesystems.path . For What , specify the value of storage.filesystems.device . Extract the disabling template value to a file called disableTemplating.txt by running the following command: USD oc -n openshift-machine-api get secret <role>-user-data \ 1 --template='{{index .data.disableTemplating | base64decode}}' | jq > disableTemplating.txt 1 Replace <role> with master . Combine the userData.txt file and disableTemplating.txt file to create a data secret file by running the following command: USD oc -n openshift-machine-api create secret generic <role>-user-data-x5 \ 1 --from-file=userData=userData.txt \ --from-file=disableTemplating=disableTemplating.txt 1 For <role>-user-data-x5 , specify the name of the secret. 
Replace <role> with master . Edit your control plane machine set CR by running the following command: USD oc --namespace openshift-machine-api edit controlplanemachineset.machine.openshift.io cluster Add the following lines in the positions indicated: apiVersion: machine.openshift.io/v1beta1 kind: ControlPlaneMachineSet spec: template: spec: metadata: labels: disk: ultrassd 1 providerSpec: value: ultraSSDCapability: Enabled 2 dataDisks: 3 - nameSuffix: ultrassd lun: 0 diskSizeGB: 4 deletionPolicy: Delete cachingType: None managedDisk: storageAccountType: UltraSSD_LRS userDataSecret: name: <role>-user-data-x5 4 1 Specify a label to use to select a node that is created by this machine set. This procedure uses disk.ultrassd for this value. 2 3 These lines enable the use of ultra disks. For dataDisks , include the entire stanza. 4 Specify the user data secret created earlier. Replace <role> with master . Save your changes. For clusters that use the default RollingUpdate update strategy, the Operator automatically propagates the changes to your control plane configuration. For clusters that are configured to use the OnDelete update strategy, you must replace your control plane machines manually. Verification Validate that the machines are created by running the following command: USD oc get machines The machines should be in the Running state. For a machine that is running and has a node attached, validate the partition by running the following command: USD oc debug node/<node-name> -- chroot /host lsblk In this command, oc debug node/<node-name> starts a debugging shell on the node <node-name> and passes a command with -- . The passed command chroot /host provides access to the underlying host OS binaries, and lsblk shows the block devices that are attached to the host OS machine. steps To use an ultra disk on the control plane, reconfigure your workload to use the control plane's ultra disk mount point. 12.5.2.2.4.2. Troubleshooting resources for machine sets that enable ultra disks Use the information in this section to understand and recover from issues you might encounter. 12.5.2.2.4.2.1. Incorrect ultra disk configuration If an incorrect configuration of the ultraSSDCapability parameter is specified in the machine set, the machine provisioning fails. For example, if the ultraSSDCapability parameter is set to Disabled , but an ultra disk is specified in the dataDisks parameter, the following error message appears: StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set. To resolve this issue, verify that your machine set configuration is correct. 12.5.2.2.4.2.2. Unsupported disk parameters If a region, availability zone, or instance size that is not compatible with ultra disks is specified in the machine set, the machine provisioning fails. Check the logs for the following error message: failed to create vm <machine_name>: failure sending request for machine <machine_name>: cannot create vm: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="BadRequest" Message="Storage Account type 'UltraSSD_LRS' is not supported <more_information_about_why>." To resolve this issue, verify that you are using this feature in a supported environment and that your machine set configuration is correct. 12.5.2.2.4.2.3. Unable to delete disks If the deletion of ultra disks as data disks is not working as expected, the machines are deleted and the data disks are orphaned. 
You must delete the orphaned disks manually if desired. 12.5.2.2.5. Enabling customer-managed encryption keys for a machine set You can supply an encryption key to Azure to encrypt data on managed disks at rest. You can enable server-side encryption with customer-managed keys by using the Machine API. An Azure Key Vault, a disk encryption set, and an encryption key are required to use a customer-managed key. The disk encryption set must be in a resource group where the Cloud Credential Operator (CCO) has granted permissions. If not, an additional reader role is required to be granted on the disk encryption set. Prerequisites Create an Azure Key Vault instance . Create an instance of a disk encryption set . Grant the disk encryption set access to key vault . Procedure Configure the disk encryption set under the providerSpec field in your machine set YAML file. For example: providerSpec: value: osDisk: diskSizeGB: 128 managedDisk: diskEncryptionSet: id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name> storageAccountType: Premium_LRS Additional resources Azure documentation about customer-managed keys 12.5.2.2.6. Configuring trusted launch for Azure virtual machines by using machine sets Important Using trusted launch for Azure virtual machines is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenShift Container Platform 4.18 supports trusted launch for Azure virtual machines (VMs). By editing the machine set YAML file, you can configure the trusted launch options that a machine set uses for machines that it deploys. For example, you can configure these machines to use UEFI security features such as Secure Boot or a dedicated virtual Trusted Platform Module (vTPM) instance. Note Some feature combinations result in an invalid configuration. Table 12.2. UEFI feature combination compatibility Secure Boot [1] vTPM [2] Valid configuration Enabled Enabled Yes Enabled Disabled Yes Enabled Omitted Yes Disabled Enabled Yes Omitted Enabled Yes Disabled Disabled No Omitted Disabled No Omitted Omitted No Using the secureBoot field. Using the virtualizedTrustedPlatformModule field. For more information about related features and functionality, see the Microsoft Azure documentation about Trusted launch for Azure virtual machines . Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following section under the providerSpec field to provide a valid configuration: Sample valid configuration with UEFI Secure Boot and vTPM enabled apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet # ... spec: template: machines_v1beta1_machine_openshift_io: spec: providerSpec: value: securityProfile: settings: securityType: TrustedLaunch 1 trustedLaunch: uefiSettings: 2 secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 # ... 1 Enables the use of trusted launch for Azure virtual machines. This value is required for all valid configurations. 
2 Specifies which UEFI security features to use. This section is required for all valid configurations. 3 Enables UEFI Secure Boot. 4 Enables the use of a vTPM. Verification On the Azure portal, review the details for a machine deployed by the machine set and verify that the trusted launch options match the values that you configured. 12.5.2.2.7. Configuring Azure confidential virtual machines by using machine sets Important Using Azure confidential virtual machines is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenShift Container Platform 4.18 supports Azure confidential virtual machines (VMs). Note Confidential VMs are currently not supported on 64-bit ARM architectures. By editing the machine set YAML file, you can configure the confidential VM options that a machine set uses for machines that it deploys. For example, you can configure these machines to use UEFI security features such as Secure Boot or a dedicated virtual Trusted Platform Module (vTPM) instance. Warning Not all instance types support confidential VMs. Do not change the instance type for a control plane machine set that is configured to use confidential VMs to a type that is incompatible. Using an incompatible instance type can cause your cluster to become unstable. For more information about related features and functionality, see the Microsoft Azure documentation about Confidential virtual machines . Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following section under the providerSpec field: Sample configuration apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet # ... spec: template: spec: providerSpec: value: osDisk: # ... managedDisk: securityProfile: 1 securityEncryptionType: VMGuestStateOnly 2 # ... securityProfile: 3 settings: securityType: ConfidentialVM 4 confidentialVM: uefiSettings: 5 secureBoot: Disabled 6 virtualizedTrustedPlatformModule: Enabled 7 vmSize: Standard_DC16ads_v5 8 # ... 1 Specifies security profile settings for the managed disk when using a confidential VM. 2 Enables encryption of the Azure VM Guest State (VMGS) blob. This setting requires the use of vTPM. 3 Specifies security profile settings for the confidential VM. 4 Enables the use of confidential VMs. This value is required for all valid configurations. 5 Specifies which UEFI security features to use. This section is required for all valid configurations. 6 Disables UEFI Secure Boot. 7 Enables the use of a vTPM. 8 Specifies an instance type that supports confidential VMs. Verification On the Azure portal, review the details for a machine deployed by the machine set and verify that the confidential VM options match the values that you configured. 12.5.2.2.8. Accelerated Networking for Microsoft Azure VMs Accelerated Networking uses single root I/O virtualization (SR-IOV) to provide Microsoft Azure VMs with a more direct path to the switch. This enhances network performance. This feature can be enabled after installation. 12.5.2.2.8.1. 
Limitations Consider the following limitations when deciding whether to use Accelerated Networking: Accelerated Networking is only supported on clusters where the Machine API is operational. Accelerated Networking requires an Azure VM size that includes at least four vCPUs. To satisfy this requirement, you can change the value of vmSize in your machine set. For information about Azure VM sizes, see Microsoft Azure documentation . 12.5.2.2.9. Configuring Capacity Reservation by using machine sets OpenShift Container Platform version 4.18 and later supports on-demand Capacity Reservation with Capacity Reservation groups on Microsoft Azure clusters. You can configure a machine set to deploy machines on any available resources that match the parameters of a capacity request that you define. These parameters specify the VM size, region, and number of instances that you want to reserve. If your Azure subscription quota can accommodate the capacity request, the deployment succeeds. For more information, including limitations and suggested use cases for this Azure instance type, see the Microsoft Azure documentation about On-demand Capacity Reservation . Note You cannot change an existing Capacity Reservation configuration for a machine set. To use a different Capacity Reservation group, you must replace the machine set and the machines that the machine set deployed. Prerequisites You have access to the cluster with cluster-admin privileges. You installed the OpenShift CLI ( oc ). You created a Capacity Reservation group. For more information, see the Microsoft Azure documentation Create a Capacity Reservation . Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following section under the providerSpec field: Sample configuration apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet # ... spec: template: machines_v1beta1_machine_openshift_io: spec: providerSpec: value: capacityReservationGroupID: <capacity_reservation_group> 1 # ... 1 Specify the ID of the Capacity Reservation group that you want the machine set to deploy machines on. Verification To verify machine deployment, list the machines that the machine set created by running the following command: USD oc get machine \ -n openshift-machine-api \ -l machine.openshift.io/cluster-api-machine-role=master In the output, verify that the characteristics of the listed machines match the parameters of your Capacity Reservation. 12.5.2.2.9.1. Enabling Accelerated Networking on an existing Microsoft Azure cluster You can enable Accelerated Networking on Azure by adding acceleratedNetworking to your machine set YAML file. Prerequisites Have an existing Microsoft Azure cluster where the Machine API is operational. Procedure Add the following to the providerSpec field: providerSpec: value: acceleratedNetworking: true 1 vmSize: <azure-vm-size> 2 1 This line enables Accelerated Networking. 2 Specify an Azure VM size that includes at least four vCPUs. For information about VM sizes, see Microsoft Azure documentation . Verification On the Microsoft Azure portal, review the Networking settings page for a machine provisioned by the machine set, and verify that the Accelerated networking field is set to Enabled . 12.5.3. Control plane configuration options for Google Cloud Platform You can change the configuration of your Google Cloud Platform (GCP) control plane machines and enable features by updating values in the control plane machine set. 
When you save an update to the control plane machine set, the Control Plane Machine Set Operator updates the control plane machines according to your configured update strategy . 12.5.3.1. Sample YAML for configuring Google Cloud Platform clusters The following example YAML snippets show provider specification and failure domain configurations for a GCP cluster. 12.5.3.1.1. Sample GCP provider specification When you create a control plane machine set for an existing cluster, the provider specification must match the providerSpec configuration in the control plane machine custom resource (CR) that is created by the installation program. You can omit any field that is set in the failure domain section of the CR. Values obtained by using the OpenShift CLI In the following example, you can obtain some of the values for your cluster by using the OpenShift CLI. Infrastructure ID The <cluster_id> string is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster Image path The <path_to_image> string is the path to the image that was used to create the disk. If you have the OpenShift CLI installed, you can obtain the path to the image by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.machines_v1beta1_machine_openshift_io.spec.providerSpec.value.disks[0].image}{"\n"}' \ get ControlPlaneMachineSet/cluster Sample GCP providerSpec values apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: # ... template: # ... spec: providerSpec: value: apiVersion: machine.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials 1 deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 2 labels: null sizeGb: 200 type: pd-ssd kind: GCPMachineProviderSpec 3 machineType: e2-standard-4 metadata: creationTimestamp: null metadataServiceOptions: {} networkInterfaces: - network: <cluster_id>-network subnetwork: <cluster_id>-master-subnet projectID: <project_name> 4 region: <region> 5 serviceAccounts: 6 - email: <cluster_id>-m@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform shieldedInstanceConfig: {} tags: - <cluster_id>-master targetPools: - <cluster_id>-api userDataSecret: name: master-user-data 7 zone: "" 8 1 Specifies the secret name for the cluster. Do not change this value. 2 Specifies the path to the image that was used to create the disk. To use a GCP Marketplace image, specify the offer to use: OpenShift Container Platform: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-413-x86-64-202305021736 OpenShift Platform Plus: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-opp-413-x86-64-202305021736 OpenShift Kubernetes Engine: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-oke-413-x86-64-202305021736 3 Specifies the cloud provider platform type. Do not change this value. 4 Specifies the name of the GCP project that you use for your cluster. 5 Specifies the GCP region for the cluster. 6 Specifies a single service account. Multiple service accounts are not supported. 
7 Specifies the control plane user data secret. Do not change this value. 8 This parameter is configured in the failure domain, and is shown with an empty value here. If a value specified for this parameter differs from the value in the failure domain, the Operator overwrites it with the value in the failure domain. 12.5.3.1.2. Sample GCP failure domain configuration The control plane machine set concept of a failure domain is analogous to the existing GCP concept of a zone . The ControlPlaneMachineSet CR spreads control plane machines across multiple failure domains when possible. When configuring GCP failure domains in the control plane machine set, you must specify the zone name to use. Sample GCP failure domain values apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: # ... template: # ... machines_v1beta1_machine_openshift_io: failureDomains: gcp: - zone: <gcp_zone_a> 1 - zone: <gcp_zone_b> 2 - zone: <gcp_zone_c> - zone: <gcp_zone_d> platform: GCP 3 # ... 1 Specifies a GCP zone for the first failure domain. 2 Specifies an additional failure domain. Further failure domains are added the same way. 3 Specifies the cloud provider platform name. Do not change this value. 12.5.3.2. Enabling Google Cloud Platform features for control plane machines You can enable features by updating values in the control plane machine set. 12.5.3.2.1. Configuring persistent disk types by using machine sets You can configure the type of persistent disk that a machine set deploys machines on by editing the machine set YAML file. For more information about persistent disk types, compatibility, regional availability, and limitations, see the GCP Compute Engine documentation about persistent disks . Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following line under the providerSpec field: apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet ... spec: template: spec: providerSpec: value: disks: type: pd-ssd 1 1 Control plane nodes must use the pd-ssd disk type. Verification Using the Google Cloud console, review the details for a machine deployed by the machine set and verify that the Type field matches the configured disk type. 12.5.3.2.2. Configuring Confidential VM by using machine sets By editing the machine set YAML file, you can configure the Confidential VM options that a machine set uses for machines that it deploys. For more information about Confidential VM features, functions, and compatibility, see the GCP Compute Engine documentation about Confidential VM . Note Confidential VMs are currently not supported on 64-bit ARM architectures. Important OpenShift Container Platform 4.18 does not support some Confidential Compute features, such as Confidential VMs with AMD Secure Encrypted Virtualization Secure Nested Paging (SEV-SNP). Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following section under the providerSpec field: apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet ... spec: template: spec: providerSpec: value: confidentialCompute: Enabled 1 onHostMaintenance: Terminate 2 machineType: n2d-standard-8 3 ... 1 Specify whether Confidential VM is enabled. Valid values are Disabled or Enabled . 2 Specify the behavior of the VM during a host maintenance event, such as a hardware or software update. 
For a machine that uses Confidential VM, this value must be set to Terminate , which stops the VM. Confidential VM does not support live VM migration. 3 Specify a machine type that supports Confidential VM. Confidential VM supports the N2D and C2D series of machine types. Verification On the Google Cloud console, review the details for a machine deployed by the machine set and verify that the Confidential VM options match the values that you configured. 12.5.3.2.3. Configuring Shielded VM options by using machine sets By editing the machine set YAML file, you can configure the Shielded VM options that a machine set uses for machines that it deploys. For more information about Shielded VM features and functionality, see the GCP Compute Engine documentation about Shielded VM . Procedure In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following section under the providerSpec field: apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet # ... spec: template: spec: providerSpec: value: shieldedInstanceConfig: 1 integrityMonitoring: Enabled 2 secureBoot: Disabled 3 virtualizedTrustedPlatformModule: Enabled 4 # ... 1 In this section, specify any Shielded VM options that you want. 2 Specify whether integrity monitoring is enabled. Valid values are Disabled or Enabled . Note When integrity monitoring is enabled, you must not disable virtual trusted platform module (vTPM). 3 Specify whether UEFI Secure Boot is enabled. Valid values are Disabled or Enabled . 4 Specify whether vTPM is enabled. Valid values are Disabled or Enabled . Verification Using the Google Cloud console, review the details for a machine deployed by the machine set and verify that the Shielded VM options match the values that you configured. Additional resources What is Shielded VM? Secure Boot Virtual Trusted Platform Module (vTPM) Integrity monitoring 12.5.3.2.4. Enabling customer-managed encryption keys for a machine set Google Cloud Platform (GCP) Compute Engine allows users to supply an encryption key to encrypt data on disks at rest. The key is used to encrypt the data encryption key, not to encrypt the customer's data. By default, Compute Engine encrypts this data by using Compute Engine keys. You can enable encryption with a customer-managed key in clusters that use the Machine API. You must first create a KMS key and assign the correct permissions to a service account. The KMS key name, key ring name, and location are required to allow a service account to use your key. Note If you do not want to use a dedicated service account for the KMS encryption, the Compute Engine default service account is used instead. You must grant the default service account permission to access the keys if you do not use a dedicated service account. The Compute Engine default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com pattern. Procedure To allow a specific service account to use your KMS key and to grant the service account the correct IAM role, run the following command with your KMS key name, key ring name, and location: USD gcloud kms keys add-iam-policy-binding <key_name> \ --keyring <key_ring_name> \ --location <key_ring_location> \ --member "serviceAccount:service-<project_number>@compute-system.iam.gserviceaccount.com" \ --role roles/cloudkms.cryptoKeyEncrypterDecrypter Configure the encryption key under the providerSpec field in your machine set YAML file. 
For example: apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet ... spec: template: spec: providerSpec: value: disks: - type: encryptionKey: kmsKey: name: machine-encryption-key 1 keyRing: openshift-encrpytion-ring 2 location: global 3 projectID: openshift-gcp-project 4 kmsKeyServiceAccount: openshift-service-account@openshift-gcp-project.iam.gserviceaccount.com 5 1 The name of the customer-managed encryption key that is used for the disk encryption. 2 The name of the KMS key ring that the KMS key belongs to. 3 The GCP location in which the KMS key ring exists. 4 Optional: The ID of the project in which the KMS key ring exists. If a project ID is not set, the machine set projectID in which the machine set was created is used. 5 Optional: The service account that is used for the encryption request for the given KMS key. If a service account is not set, the Compute Engine default service account is used. When a new machine is created by using the updated providerSpec object configuration, the disk encryption key is encrypted with the KMS key. 12.5.4. Control plane configuration options for Nutanix You can change the configuration of your Nutanix control plane machines by updating values in the control plane machine set. When you save an update to the control plane machine set, the Control Plane Machine Set Operator updates the control plane machines according to your configured update strategy . 12.5.4.1. Sample YAML for configuring Nutanix clusters The following example YAML snippet shows a provider specification configuration for a Nutanix cluster. 12.5.4.1.1. Sample Nutanix provider specification When you create a control plane machine set for an existing cluster, the provider specification must match the providerSpec configuration in the control plane machine custom resource (CR) that is created by the installation program. Values obtained by using the OpenShift CLI In the following example, you can obtain some of the values for your cluster by using the OpenShift CLI. Infrastructure ID The <cluster_id> string is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster Sample Nutanix providerSpec values apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: # ... template: # ... spec: providerSpec: value: apiVersion: machine.openshift.io/v1 bootType: "" 1 categories: 2 - key: <category_name> value: <category_value> cluster: 3 type: uuid uuid: <cluster_uuid> credentialsSecret: name: nutanix-credentials 4 image: 5 name: <cluster_id>-rhcos type: name kind: NutanixMachineProviderConfig 6 memorySize: 16Gi 7 metadata: creationTimestamp: null project: 8 type: name name: <project_name> subnets: 9 - type: uuid uuid: <subnet_uuid> systemDiskSize: 120Gi 10 userDataSecret: name: master-user-data 11 vcpuSockets: 8 12 vcpusPerSocket: 1 13 1 Specifies the boot type that the control plane machines use. For more information about boot types, see Understanding UEFI, Secure Boot, and TPM in the Virtualized Environment . Valid values are Legacy , SecureBoot , or UEFI . The default is Legacy . Note You must use the Legacy boot type in OpenShift Container Platform 4.18. 2 Specifies one or more Nutanix Prism categories to apply to control plane machines. 
This stanza requires key and value parameters for a category key-value pair that exists in Prism Central. For more information about categories, see Category management . 3 Specifies a Nutanix Prism Element cluster configuration. In this example, the cluster type is uuid , so there is a uuid stanza. Note Clusters that use OpenShift Container Platform version 4.15 or later can use failure domain configurations. If the cluster uses a failure domain, configure this parameter in the failure domain. If you specify this value in the provider specification when using failure domains, the Control Plane Machine Set Operator ignores it. 4 Specifies the secret name for the cluster. Do not change this value. 5 Specifies the image that was used to create the disk. 6 Specifies the cloud provider platform type. Do not change this value. 7 Specifies the memory allocated for the control plane machines. 8 Specifies the Nutanix project that you use for your cluster. In this example, the project type is name , so there is a name stanza. 9 Specify one or more Prism Element subnet objects. In this example, the subnet type is uuid , so there is a uuid stanza. A maximum of 32 subnets for each Prism Element failure domain in the cluster is supported. Important The following known issues with configuring multiple subnets for an existing Nutanix cluster by using a control plane machine set exist in OpenShift Container Platform version 4.18: Adding subnets above the existing subnet in the subnets stanza causes a control plane node to become stuck in the Deleting state. As a workaround, only add subnets below the existing subnet in the subnets stanza. Sometimes, after adding a subnet, the updated control plane machines appear in the Nutanix console but the OpenShift Container Platform cluster is unreachable. There is no workaround for this issue. These issues occur on clusters that use a control plane machine set to configure subnets regardless of whether subnets are specified in a failure domain or the provider specification. For more information, see OCPBUGS-50904 . The CIDR IP address prefix for one of the specified subnets must contain the virtual IP addresses that the OpenShift Container Platform cluster uses. All subnet UUID values must be unique. Note Clusters that use OpenShift Container Platform version 4.15 or later can use failure domain configurations. If the cluster uses a failure domain, configure this parameter in the failure domain. If you specify this value in the provider specification when using failure domains, the Control Plane Machine Set Operator ignores it. 10 Specifies the VM disk size for the control plane machines. 11 Specifies the control plane user data secret. Do not change this value. 12 Specifies the number of vCPU sockets allocated for the control plane machines. 13 Specifies the number of vCPUs for each control plane vCPU socket. 12.5.4.1.2. Failure domains for Nutanix clusters To add or update the failure domain configuration on a Nutanix cluster, you must make coordinated changes to several resources. The following actions are required: Modify the cluster infrastructure custom resource (CR). Modify the cluster control plane machine set CR. Modify or replace the compute machine set CRs. For more information, see "Adding failure domains to an existing Nutanix cluster" in the Post-installation configuration content. Additional resources Adding failure domains to an existing Nutanix cluster 12.5.5. 
Control plane configuration options for Red Hat OpenStack Platform You can change the configuration of your Red Hat OpenStack Platform (RHOSP) control plane machines and enable features by updating values in the control plane machine set. When you save an update to the control plane machine set, the Control Plane Machine Set Operator updates the control plane machines according to your configured update strategy . 12.5.5.1. Sample YAML for configuring Red Hat OpenStack Platform (RHOSP) clusters The following example YAML snippets show provider specification and failure domain configurations for an RHOSP cluster. 12.5.5.1.1. Sample RHOSP provider specification When you create a control plane machine set for an existing cluster, the provider specification must match the providerSpec configuration in the control plane machine custom resource (CR) that is created by the installation program. Sample OpenStack providerSpec values apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: # ... template: # ... spec: providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials 1 namespace: openshift-machine-api flavor: m1.xlarge 2 image: ocp1-2g2xs-rhcos kind: OpenstackProviderSpec 3 metadata: creationTimestamp: null networks: - filter: {} subnets: - filter: name: ocp1-2g2xs-nodes tags: openshiftClusterID=ocp1-2g2xs securityGroups: - filter: {} name: ocp1-2g2xs-master 4 serverGroupName: ocp1-2g2xs-master serverMetadata: Name: ocp1-2g2xs-master openshiftClusterID: ocp1-2g2xs tags: - openshiftClusterID=ocp1-2g2xs trunk: true userDataSecret: name: master-user-data 1 The secret name for the cluster. Do not change this value. 2 The RHOSP flavor type for the control plane. 3 The RHOSP cloud provider platform type. Do not change this value. 4 The control plane machines security group. 12.5.5.1.2. Sample RHOSP failure domain configuration The control plane machine set concept of a failure domain is analogous to the existing Red Hat OpenStack Platform (RHOSP) concept of an availability zone . The ControlPlaneMachineSet CR spreads control plane machines across multiple failure domains when possible. The following example demonstrates the use of multiple Nova availability zones as well as Cinder availability zones. Sample OpenStack failure domain values apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: # ... template: # ... machines_v1beta1_machine_openshift_io: failureDomains: platform: OpenStack openstack: - availabilityZone: nova-az0 rootVolume: availabilityZone: cinder-az0 - availabilityZone: nova-az1 rootVolume: availabilityZone: cinder-az1 - availabilityZone: nova-az2 rootVolume: availabilityZone: cinder-az2 # ... 12.5.5.2. Enabling Red Hat OpenStack Platform (RHOSP) features for control plane machines You can enable features by updating values in the control plane machine set. 12.5.5.2.1. Changing the RHOSP compute flavor by using a control plane machine set You can change the Red Hat OpenStack Platform (RHOSP) compute service (Nova) flavor that your control plane machines use by updating the specification in the control plane machine set custom resource. In RHOSP, flavors define the compute, memory, and storage capacity of computing instances. By increasing or decreasing the flavor size, you can scale your control plane vertically. 
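Before you change the flavor, you can confirm the value that the control plane machine set currently specifies. The following check is a sketch rather than part of the official procedure; it assumes the default resource name cluster and reuses the providerSpec path shown in the sample values earlier in this document:

oc -n openshift-machine-api get controlplanemachineset.machine.openshift.io cluster \
  -o jsonpath='{.spec.template.machines_v1beta1_machine_openshift_io.spec.providerSpec.value.flavor}{"\n"}'

The output is the currently configured flavor, for example m1.xlarge.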
Prerequisites Your RHOSP cluster uses a control plane machine set. Procedure Edit the following line under the providerSpec field: providerSpec: value: # ... flavor: m1.xlarge 1 1 Specify an RHOSP flavor that is available in your environment and meets the resource requirements of your control plane. For example, you can change m1.xlarge to a larger flavor to scale up or to a smaller flavor to scale down, depending on your vertical scaling needs. Save your changes. After you save your changes, machines are replaced with ones that use the flavor you chose. 12.5.6. Control plane configuration options for VMware vSphere You can change the configuration of your VMware vSphere control plane machines by updating values in the control plane machine set. When you save an update to the control plane machine set, the Control Plane Machine Set Operator updates the control plane machines according to your configured update strategy . 12.5.6.1. Sample YAML for configuring VMware vSphere clusters The following example YAML snippets show provider specification and failure domain configurations for a vSphere cluster. 12.5.6.1.1. Sample VMware vSphere provider specification When you create a control plane machine set for an existing cluster, the provider specification must match the providerSpec configuration in the control plane machine custom resource (CR) that is created by the installation program. Sample vSphere providerSpec values apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: # ... template: # ... spec: providerSpec: value: apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials 1 diskGiB: 120 2 kind: VSphereMachineProviderSpec 3 memoryMiB: 16384 4 metadata: creationTimestamp: null network: 5 devices: - networkName: <vm_network_name> numCPUs: 4 6 numCoresPerSocket: 4 7 snapshot: "" template: <vm_template_name> 8 userDataSecret: name: master-user-data 9 workspace: 10 datacenter: <vcenter_data_center_name> 11 datastore: <vcenter_datastore_name> 12 folder: <path_to_vcenter_vm_folder> 13 resourcePool: <vsphere_resource_pool> 14 server: <vcenter_server_ip> 15 1 Specifies the secret name for the cluster. Do not change this value. 2 Specifies the VM disk size for the control plane machines. 3 Specifies the cloud provider platform type. Do not change this value. 4 Specifies the memory allocated for the control plane machines. 5 Specifies the network on which the control plane is deployed. Note If the cluster is configured to use a failure domain, this parameter is configured in the failure domain. If you specify this value in the provider specification when using failure domains, the Control Plane Machine Set Operator ignores it. 6 Specifies the number of CPUs allocated for the control plane machines. 7 Specifies the number of cores for each control plane CPU. 8 Specifies the vSphere VM template to use, such as user-5ddjd-rhcos . Note If the cluster is configured to use a failure domain, this parameter is configured in the failure domain. If you specify this value in the provider specification when using failure domains, the Control Plane Machine Set Operator ignores it. 9 Specifies the control plane user data secret. Do not change this value. 10 Specifies the workspace details for the control plane. Note If the cluster is configured to use a failure domain, these parameters are configured in the failure domain.
If you specify these values in the provider specification when using failure domains, the Control Plane Machine Set Operator ignores them. 11 Specifies the vCenter data center for the control plane. 12 Specifies the vCenter datastore for the control plane. 13 Specifies the path to the vSphere VM folder in vCenter, such as /dc1/vm/user-inst-5ddjd . 14 Specifies the vSphere resource pool for your VMs. 15 Specifies the vCenter server IP or fully qualified domain name. 12.5.6.1.2. Sample VMware vSphere failure domain configuration On VMware vSphere infrastructure, the cluster-wide infrastructure Custom Resource Definition (CRD), infrastructures.config.openshift.io , defines failure domains for your cluster. The providerSpec in the ControlPlaneMachineSet custom resource (CR) specifies names for failure domains that the control plane machine set uses to ensure control plane nodes are deployed to the appropriate failure domain. A failure domain is an infrastructure resource made up of a control plane machine set, a vCenter data center, vCenter datastore, and a network. By using a failure domain resource, you can use a control plane machine set to deploy control plane machines on separate clusters or data centers. A control plane machine set also balances control plane machines across defined failure domains to provide fault tolerance capabilities to your infrastructure. Note If you modify the ProviderSpec configuration in the ControlPlaneMachineSet CR, the control plane machine set updates all control plane machines deployed on the primary infrastructure and each failure domain infrastructure. Sample VMware vSphere failure domain values apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: # ... template: # ... machines_v1beta1_machine_openshift_io: failureDomains: 1 platform: VSphere vsphere: 2 - name: <failure_domain_name_1> - name: <failure_domain_name_2> # ... 1 Specifies the vCenter location for OpenShift Container Platform cluster nodes. 2 Specifies failure domains by name for the control plane machine set. Important Each name field value in this section must match the corresponding value in the failureDomains.name field of the cluster-wide infrastructure CRD. You can find the value of the failureDomains.name field by running the following command: USD oc get infrastructure cluster -o=jsonpath={.spec.platformSpec.vsphere.failureDomains[0].name} The name field is the only supported failure domain field that you can specify in the ControlPlaneMachineSet CR. For an example of a cluster-wide infrastructure CRD that defines resources for each failure domain, see "Specifying multiple regions and zones for your cluster on vSphere." Additional resources Specifying multiple regions and zones for your cluster on vSphere 12.5.6.2. Enabling VMware vSphere features for control plane machines You can enable features by updating values in the control plane machine set. 12.5.6.2.1. Adding tags to machines by using machine sets OpenShift Container Platform adds a cluster-specific tag to each virtual machine (VM) that it creates. The installation program uses these tags to select the VMs to delete when uninstalling a cluster. In addition to the cluster-specific tags assigned to VMs, you can configure a machine set to add up to 10 additional vSphere tags to the VMs it provisions. Prerequisites You have access to an OpenShift Container Platform cluster installed on vSphere using an account with cluster-admin permissions. 
You have access to the VMware vCenter console associated with your cluster. You have created a tag in the vCenter console. You have installed the OpenShift CLI ( oc ). Procedure Use the vCenter console to find the tag ID for any tag that you want to add to your machines: Log in to the vCenter console. From the Home menu, click Tags & Custom Attributes . Select a tag that you want to add to your machines. Use the browser URL for the tag that you select to identify the tag ID. Example tag URL https://vcenter.example.com/ui/app/tags/tag/urn:vmomi:InventoryServiceTag:208e713c-cae3-4b7f-918e-4051ca7d1f97:GLOBAL/permissions Example tag ID urn:vmomi:InventoryServiceTag:208e713c-cae3-4b7f-918e-4051ca7d1f97:GLOBAL In a text editor, open the YAML file for an existing machine set or create a new one. Edit the following lines under the providerSpec field: apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet # ... spec: template: spec: providerSpec: value: tagIDs: 1 - <tag_id_value> 2 # ... 1 Specify a list of up to 10 tags to add to the machines that this machine set provisions. 2 Specify the value of the tag that you want to add to your machines. For example, urn:vmomi:InventoryServiceTag:208e713c-cae3-4b7f-918e-4051ca7d1f97:GLOBAL . 12.6. Control plane resiliency and recovery You can use the control plane machine set to improve the resiliency of the control plane for your OpenShift Container Platform cluster. 12.6.1. High availability and fault tolerance with failure domains When possible, the control plane machine set spreads the control plane machines across multiple failure domains. This configuration provides high availability and fault tolerance within the control plane. This strategy can help protect the control plane when issues arise within the infrastructure provider. 12.6.1.1. Failure domain platform support and configuration The control plane machine set concept of a failure domain is analogous to existing concepts on cloud providers. Not all platforms support the use of failure domains. Table 12.3. Failure domain support matrix Cloud provider Support for failure domains Provider nomenclature Amazon Web Services (AWS) X Availability Zone (AZ) Google Cloud Platform (GCP) X zone Microsoft Azure X Azure availability zone Nutanix X failure domain Red Hat OpenStack Platform (RHOSP) X OpenStack Nova availability zones and OpenStack Cinder availability zones VMware vSphere X failure domain mapped to a vSphere Zone [1] For more information, see "Regions and zones for a VMware vCenter". The failure domain configuration in the control plane machine set custom resource (CR) is platform-specific. For more information about failure domain parameters in the CR, see the sample failure domain configuration for your provider. Additional resources Sample Amazon Web Services failure domain configuration Sample Google Cloud Platform failure domain configuration Sample Microsoft Azure failure domain configuration Adding failure domains to an existing Nutanix cluster Sample Red Hat OpenStack Platform (RHOSP) failure domain configuration Sample VMware vSphere failure domain configuration Regions and zones for a VMware vCenter 12.6.1.2. Balancing control plane machines The control plane machine set balances control plane machines across the failure domains that are specified in the custom resource (CR). When possible, the control plane machine set uses each failure domain equally to ensure appropriate fault tolerance. 
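To see how your existing control plane machines are spread across failure domains, you can list them with their placement details. This is an optional check, shown here as a sketch; the REGION and ZONE values that appear depend on your infrastructure provider:

oc get machines -n openshift-machine-api \
  -l machine.openshift.io/cluster-api-machine-role==master

The default output includes REGION and ZONE columns that correspond to the failure domains in use.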
If there are fewer failure domains than control plane machines, failure domains are selected for reuse alphabetically by name. For clusters with no failure domains specified, all control plane machines are placed within a single failure domain. Some changes to the failure domain configuration cause the control plane machine set to rebalance the control plane machines. For example, if you add failure domains to a cluster with fewer failure domains than control plane machines, the control plane machine set rebalances the machines across all available failure domains. 12.6.2. Recovery of failed control plane machines The Control Plane Machine Set Operator automates the recovery of control plane machines. When a control plane machine is deleted, the Operator creates a replacement with the configuration that is specified in the ControlPlaneMachineSet custom resource (CR). For clusters that use control plane machine sets, you can configure a machine health check. The machine health check deletes unhealthy control plane machines so that they are replaced. Important If you configure a MachineHealthCheck resource for the control plane, set the value of maxUnhealthy to 1 . This configuration ensures that the machine health check takes no action when multiple control plane machines appear to be unhealthy. Multiple unhealthy control plane machines can indicate that the etcd cluster is degraded or that a scaling operation to replace a failed machine is in progress. If the etcd cluster is degraded, manual intervention might be required. If a scaling operation is in progress, the machine health check should allow it to finish. Additional resources Deploying machine health checks 12.6.3. Quorum protection with machine lifecycle hooks For OpenShift Container Platform clusters that use the Machine API Operator, the etcd Operator uses lifecycle hooks for the machine deletion phase to implement a quorum protection mechanism. By using a preDrain lifecycle hook, the etcd Operator can control when the pods on a control plane machine are drained and removed. To protect etcd quorum, the etcd Operator prevents the removal of an etcd member until it migrates that member onto a new node within the cluster. This mechanism allows the etcd Operator precise control over the members of the etcd quorum and allows the Machine API Operator to safely create and remove control plane machines without specific operational knowledge of the etcd cluster. 12.6.3.1. Control plane deletion with quorum protection processing order When a control plane machine is replaced on a cluster that uses a control plane machine set, the cluster temporarily has four control plane machines. When the fourth control plane node joins the cluster, the etcd Operator starts a new etcd member on the replacement node. When the etcd Operator observes that the old control plane machine is marked for deletion, it stops the etcd member on the old node and promotes the replacement etcd member to join the quorum of the cluster. The control plane machine Deleting phase proceeds in the following order: A control plane machine is slated for deletion. The control plane machine enters the Deleting phase. To satisfy the preDrain lifecycle hook, the etcd Operator takes the following actions: The etcd Operator waits until a fourth control plane machine is added to the cluster as an etcd member. This new etcd member has a state of Running but not ready until it receives the full database update from the etcd leader. 
When the new etcd member receives the full database update, the etcd Operator promotes the new etcd member to a voting member and removes the old etcd member from the cluster. After this transition is complete, it is safe for the old etcd pod and its data to be removed, so the preDrain lifecycle hook is removed. The control plane machine status condition Drainable is set to True . The machine controller attempts to drain the node that is backed by the control plane machine. If draining fails, Drained is set to False and the machine controller attempts to drain the node again. If draining succeeds, Drained is set to True . The control plane machine status condition Drained is set to True . If no other Operators have added a preTerminate lifecycle hook, the control plane machine status condition Terminable is set to True . The machine controller removes the instance from the infrastructure provider. The machine controller deletes the Node object. YAML snippet demonstrating the etcd quorum protection preDrain lifecycle hook apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: ... spec: lifecycleHooks: preDrain: - name: EtcdQuorumOperator 1 owner: clusteroperator/etcd 2 ... 1 The name of the preDrain lifecycle hook. 2 The hook-implementing controller that manages the preDrain lifecycle hook. Additional resources Lifecycle hooks for the machine deletion phase 12.7. Troubleshooting the control plane machine set Use the information in this section to understand and recover from issues you might encounter. 12.7.1. Checking the control plane machine set custom resource state You can verify the existence and state of the ControlPlaneMachineSet custom resource (CR). Procedure Determine the state of the CR by running the following command: USD oc get controlplanemachineset.machine.openshift.io cluster \ --namespace openshift-machine-api A result of Active indicates that the ControlPlaneMachineSet CR exists and is activated. No administrator action is required. A result of Inactive indicates that a ControlPlaneMachineSet CR exists but is not activated. A result of NotFound indicates that there is no existing ControlPlaneMachineSet CR. Next steps To use the control plane machine set, you must ensure that a ControlPlaneMachineSet CR with the correct settings for your cluster exists. If your cluster has an existing CR, you must verify that the configuration in the CR is correct for your cluster. If your cluster does not have an existing CR, you must create one with the correct configuration for your cluster. Additional resources Activating the control plane machine set custom resource Creating a control plane machine set custom resource 12.7.2. Adding a missing Azure internal load balancer The internalLoadBalancer parameter is required in both the ControlPlaneMachineSet and control plane Machine custom resources (CRs) for Azure. If this parameter is not preconfigured on your cluster, you must add it to both CRs. For more information about where this parameter is located in the Azure provider specification, see the sample Azure provider specification. The placement in the control plane Machine CR is similar.
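For orientation, the parameter belongs directly in the value stanza of the provider specification, alongside the public load balancer entry. The following is a minimal sketch with placeholder values that omits unrelated fields:

providerSpec:
  value:
    # ...
    internalLoadBalancer: <cluster_id>-internal
    publicLoadBalancer: <cluster_id>
    # ...

Use the resource names that your cluster actually has, as shown in the sample Azure provider specification.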
Procedure List the control plane machines in your cluster by running the following command: USD oc get machines \ -l machine.openshift.io/cluster-api-machine-role==master \ -n openshift-machine-api For each control plane machine, edit the CR by running the following command: USD oc edit machine <control_plane_machine_name> Add the internalLoadBalancer parameter with the correct details for your cluster and save your changes. Edit your control plane machine set CR by running the following command: USD oc edit controlplanemachineset.machine.openshift.io cluster \ -n openshift-machine-api Add the internalLoadBalancer parameter with the correct details for your cluster and save your changes. Next steps For clusters that use the default RollingUpdate update strategy, the Operator automatically propagates the changes to your control plane configuration. For clusters that are configured to use the OnDelete update strategy, you must replace your control plane machines manually. Additional resources Sample Microsoft Azure provider specification 12.7.3. Recovering a degraded etcd Operator Certain situations can cause the etcd Operator to become degraded. For example, while performing remediation, the machine health check might delete a control plane machine that is hosting etcd. If the etcd member is not reachable at that time, the etcd Operator becomes degraded. When the etcd Operator is degraded, manual intervention is required to force the Operator to remove the failed member and restore the cluster state. Procedure List the control plane machines in your cluster by running the following command: USD oc get machines \ -l machine.openshift.io/cluster-api-machine-role==master \ -n openshift-machine-api \ -o wide Any of the following conditions might indicate a failed control plane machine: The STATE value is stopped . The PHASE value is Failed . The PHASE value is Deleting for more than ten minutes. Important Before continuing, ensure that your cluster has two healthy control plane machines. Performing the actions in this procedure on more than one control plane machine risks losing etcd quorum and can cause data loss. If you have lost the majority of your control plane hosts, leading to etcd quorum loss, then you must follow the disaster recovery procedure "Restoring to a cluster state" instead of this procedure. Edit the machine CR for the failed control plane machine by running the following command: USD oc edit machine <control_plane_machine_name> Remove the contents of the lifecycleHooks parameter from the failed control plane machine and save your changes. The etcd Operator removes the failed machine from the cluster and can then safely add new etcd members. Additional resources Restoring to a cluster state 12.7.4. Upgrading clusters that run on RHOSP For clusters that run on Red Hat OpenStack Platform (RHOSP) that were created with OpenShift Container Platform 4.13 or earlier, you might have to perform post-upgrade tasks before you can use control plane machine sets. 12.7.4.1. Configuring RHOSP clusters that have machines with root volume availability zones after an upgrade For some clusters that run on Red Hat OpenStack Platform (RHOSP) that you upgrade, you must manually update machine resources before you can use control plane machine sets if the following configurations are true: The upgraded cluster was created with OpenShift Container Platform 4.13 or earlier. The cluster infrastructure is installer-provisioned. Machines were distributed across multiple availability zones.
Machines were configured to use root volumes for which block storage availability zones were not defined. To understand why this procedure is necessary, see Solution #7024383 . Procedure For each control plane machine that matches the environment, edit the provider spec. For example, to edit the machine master-0 , enter the following command: USD oc edit machine/<cluster_id>-master-0 -n openshift-machine-api where: <cluster_id> Specifies the ID of the upgraded cluster. In the provider spec, set the value of the rootVolume.availabilityZone property to the block storage availability zone that you want the root volume to use. An example RHOSP provider spec providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 availabilityZone: az0 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: m1.xlarge image: rhcos-4.14 kind: OpenstackProviderSpec metadata: creationTimestamp: null networks: - filter: {} subnets: - filter: name: refarch-lv7q9-nodes tags: openshiftClusterID=refarch-lv7q9 rootVolume: availabilityZone: nova 1 diskSize: 30 sourceUUID: rhcos-4.12 volumeType: fast-0 securityGroups: - filter: {} name: refarch-lv7q9-master serverGroupName: refarch-lv7q9-master serverMetadata: Name: refarch-lv7q9-master openshiftClusterID: refarch-lv7q9 tags: - openshiftClusterID=refarch-lv7q9 trunk: true userDataSecret: name: master-user-data 1 Set the zone name as this value. Note If you edited or recreated machine resources after your initial cluster deployment, you might have to adapt these steps for your configuration. In your RHOSP cluster, find the availability zone of the root volumes for your machines and use that as the value. Run the following command to retrieve information about the control plane machine set resource: USD oc describe controlplanemachineset.machine.openshift.io/cluster --namespace openshift-machine-api Run the following command to edit the resource: USD oc edit controlplanemachineset.machine.openshift.io/cluster --namespace openshift-machine-api For that resource, set the value of the spec.state property to Active to activate control plane machine sets for your cluster. Your control plane is ready to be managed by the Cluster Control Plane Machine Set Operator. 12.7.4.2. Configuring RHOSP clusters that have control plane machines with availability zones after an upgrade For some clusters that run on Red Hat OpenStack Platform (RHOSP) that you upgrade, you must manually update machine resources before you can use control plane machine sets if the following configurations are true: The upgraded cluster was created with OpenShift Container Platform 4.13 or earlier. The cluster infrastructure is installer-provisioned. Control plane machines were distributed across multiple compute availability zones. To understand why this procedure is necessary, see Solution #7013893 . Procedure For the master-1 and master-2 control plane machines, open the provider specs for editing. For example, to edit the first machine, enter the following command: USD oc edit machine/<cluster_id>-master-1 -n openshift-machine-api where: <cluster_id> Specifies the ID of the upgraded cluster. For the master-1 and master-2 control plane machines, edit the value of the serverGroupName property in their provider specs to match that of the machine master-0.
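To find the value to match, you can read the server group name from the master-0 machine before you edit the other machines. This is an optional sketch, not part of the official steps; it assumes the default machine naming and that your client can query the embedded provider spec with jsonpath:

oc -n openshift-machine-api get machine/<cluster_id>-master-0 \
  -o jsonpath='{.spec.providerSpec.value.serverGroupName}{"\n"}'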
An example RHOSP provider spec providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 availabilityZone: az0 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: m1.xlarge image: rhcos-4.18 kind: OpenstackProviderSpec metadata: creationTimestamp: null networks: - filter: {} subnets: - filter: name: refarch-lv7q9-nodes tags: openshiftClusterID=refarch-lv7q9 securityGroups: - filter: {} name: refarch-lv7q9-master serverGroupName: refarch-lv7q9-master-az0 1 serverMetadata: Name: refarch-lv7q9-master openshiftClusterID: refarch-lv7q9 tags: - openshiftClusterID=refarch-lv7q9 trunk: true userDataSecret: name: master-user-data 1 This value must match for machines master-0 , master-1 , and master-2 . Note If you edited or recreated machine resources after your initial cluster deployment, you might have to adapt these steps for your configuration. In your RHOSP cluster, find the server group that your control plane instances are in and use that as the value. Run the following command to retrieve information about the control plane machine set resource: USD oc describe controlplanemachineset.machine.openshift.io/cluster --namespace openshift-machine-api Run the following command to edit the resource: USD oc edit controlplanemachineset.machine.openshift.io/cluster --namespace openshift-machine-api For that resource, set the value of the spec.state property to Active to activate control plane machine sets for your cluster. Your control plane is ready to be managed by the Cluster Control Plane Machine Set Operator. 12.8. Disabling the control plane machine set The .spec.state field in an activated ControlPlaneMachineSet custom resource (CR) cannot be changed from Active to Inactive . To disable the control plane machine set, you must delete the CR so that it is removed from the cluster. When you delete the CR, the Control Plane Machine Set Operator performs cleanup operations and disables the control plane machine set. The Operator then removes the CR from the cluster and creates an inactive control plane machine set with default settings. 12.8.1. Deleting the control plane machine set To stop managing control plane machines with the control plane machine set on your cluster, you must delete the ControlPlaneMachineSet custom resource (CR). Procedure Delete the control plane machine set CR by running the following command: USD oc delete controlplanemachineset.machine.openshift.io cluster \ -n openshift-machine-api Verification Check the control plane machine set custom resource state. A result of Inactive indicates that the removal and replacement process is successful. A ControlPlaneMachineSet CR exists but is not activated. 12.8.2. Checking the control plane machine set custom resource state You can verify the existence and state of the ControlPlaneMachineSet custom resource (CR). Procedure Determine the state of the CR by running the following command: USD oc get controlplanemachineset.machine.openshift.io cluster \ --namespace openshift-machine-api A result of Active indicates that the ControlPlaneMachineSet CR exists and is activated. No administrator action is required. A result of Inactive indicates that a ControlPlaneMachineSet CR exists but is not activated. A result of NotFound indicates that there is no existing ControlPlaneMachineSet CR. 12.8.3. Re-enabling the control plane machine set To re-enable the control plane machine set, you must ensure that the configuration in the CR is correct for your cluster and activate it.
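As a minimal sketch of that activation step, assuming the recreated CR uses the default name cluster: after you verify the provider specification for your platform, edit the resource and set the state field:

oc edit controlplanemachineset.machine.openshift.io cluster \
  -n openshift-machine-api

In the editor, set spec.state to Active and save the change. The Control Plane Machine Set Operator then begins managing your control plane machines.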
Additional resources Activating the control plane machine set custom resource | [
"oc get machine -n openshift-machine-api -l machine.openshift.io/cluster-api-machine-role=master",
"NAME PHASE TYPE REGION ZONE AGE <infrastructure_id>-master-0 Running m6i.xlarge us-west-1 us-west-1a 5h19m <infrastructure_id>-master-1 Running m6i.xlarge us-west-1 us-west-1b 5h19m <infrastructure_id>-master-2 Running m6i.xlarge us-west-1 us-west-1a 5h19m",
"No resources found in openshift-machine-api namespace.",
"oc get controlplanemachineset.machine.openshift.io cluster --namespace openshift-machine-api",
"oc --namespace openshift-machine-api edit controlplanemachineset.machine.openshift.io cluster",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: replicas: 3 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <cluster_id> 1 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master state: Active 2 strategy: type: RollingUpdate 3 template: machineType: machines_v1beta1_machine_openshift_io machines_v1beta1_machine_openshift_io: failureDomains: platform: <platform> 4 <platform_failure_domains> 5 metadata: labels: machine.openshift.io/cluster-api-cluster: <cluster_id> 6 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master spec: providerSpec: value: <platform_provider_spec> 7",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc create -f <control_plane_machine_set>.yaml",
"oc edit controlplanemachineset.machine.openshift.io cluster -n openshift-machine-api",
"openstack compute service set <target_node_host_name> nova-compute --disable",
"oc get machines -l machine.openshift.io/cluster-api-machine-role==master -n openshift-machine-api",
"oc delete machine -n openshift-machine-api <control_plane_machine_name> 1",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster 1 namespace: openshift-machine-api spec: replicas: 3 2 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <cluster_id> 3 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master state: Active 4 strategy: type: RollingUpdate 5 template: machineType: machines_v1beta1_machine_openshift_io machines_v1beta1_machine_openshift_io: failureDomains: platform: <platform> 6 <platform_failure_domains> 7 metadata: labels: machine.openshift.io/cluster-api-cluster: <cluster_id> machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master spec: providerSpec: value: <platform_provider_spec> 8",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: spec: providerSpec: value: ami: id: ami-<ami_id_string> 1 apiVersion: machine.openshift.io/v1beta1 blockDevices: - ebs: 2 encrypted: true iops: 0 kmsKey: arn: \"\" volumeSize: 120 volumeType: gp3 credentialsSecret: name: aws-cloud-credentials 3 deviceIndex: 0 iamInstanceProfile: id: <cluster_id>-master-profile 4 instanceType: m6i.xlarge 5 kind: AWSMachineProviderConfig 6 loadBalancers: 7 - name: <cluster_id>-int type: network - name: <cluster_id>-ext type: network metadata: creationTimestamp: null metadataServiceOptions: {} placement: 8 region: <region> 9 availabilityZone: \"\" 10 tenancy: 11 securityGroups: - filters: - name: tag:Name values: - <cluster_id>-master-sg 12 subnet: {} 13 userDataSecret: name: master-user-data 14",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: machines_v1beta1_machine_openshift_io: failureDomains: aws: - placement: availabilityZone: <aws_zone_a> 1 subnet: 2 filters: - name: tag:Name values: - <cluster_id>-private-<aws_zone_a> 3 type: Filters 4 - placement: availabilityZone: <aws_zone_b> 5 subnet: filters: - name: tag:Name values: - <cluster_id>-private-<aws_zone_b> 6 type: Filters platform: AWS 7",
"providerSpec: value: loadBalancers: - name: lk4pj-ext 1 type: network 2 - name: lk4pj-int type: network",
"providerSpec: value: instanceType: <compatible_aws_instance_type> 1",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: spec: providerSpec: value: instanceType: <supported_instance_type> 1 networkInterfaceType: EFA 2 placement: availabilityZone: <zone> 3 region: <region> 4 placementGroupName: <placement_group> 5 placementGroupPartition: <placement_group_partition_number> 6",
"providerSpec: value: metadataServiceOptions: authentication: Required 1",
"providerSpec: placement: tenancy: dedicated",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: spec: providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials 1 namespace: openshift-machine-api diagnostics: {} image: 2 offer: \"\" publisher: \"\" resourceID: /resourceGroups/<cluster_id>-rg/providers/Microsoft.Compute/galleries/gallery_<cluster_id>/images/<cluster_id>-gen2/versions/412.86.20220930 3 sku: \"\" version: \"\" internalLoadBalancer: <cluster_id>-internal 4 kind: AzureMachineProviderSpec 5 location: <region> 6 managedIdentity: <cluster_id>-identity metadata: creationTimestamp: null name: <cluster_id> networkResourceGroup: <cluster_id>-rg osDisk: 7 diskSettings: {} diskSizeGB: 1024 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: <cluster_id> 8 resourceGroup: <cluster_id>-rg subnet: <cluster_id>-master-subnet 9 userDataSecret: name: master-user-data 10 vmSize: Standard_D8s_v3 vnet: <cluster_id>-vnet zone: \"1\" 11",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: machines_v1beta1_machine_openshift_io: failureDomains: azure: - zone: \"1\" 1 - zone: \"2\" - zone: \"3\" platform: Azure 2",
"providerSpec: value: loadBalancers: - name: lk4pj-ext 1 type: network 2 - name: lk4pj-int type: network",
"az vm image list --all --offer rh-ocp-worker --publisher redhat -o table",
"Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409",
"az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table",
"Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.15.2024072409 4.15.2024072409 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.15.2024072409 4.15.2024072409",
"az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"providerSpec: value: image: offer: rh-ocp-worker publisher: redhat resourceID: \"\" sku: rh-ocp-worker type: MarketplaceWithPlan version: 413.92.2023101700",
"providerSpec: diagnostics: boot: storageAccountType: AzureManaged 1",
"providerSpec: diagnostics: boot: storageAccountType: CustomerManaged 1 customerManaged: storageAccountURI: https://<storage-account>.blob.core.windows.net 2",
"oc -n openshift-machine-api get secret <role>-user-data \\ 1 --template='{{index .data.userData | base64decode}}' | jq > userData.txt 2",
"\"storage\": { \"disks\": [ 1 { \"device\": \"/dev/disk/azure/scsi1/lun0\", 2 \"partitions\": [ 3 { \"label\": \"lun0p1\", 4 \"sizeMiB\": 1024, 5 \"startMiB\": 0 } ] } ], \"filesystems\": [ 6 { \"device\": \"/dev/disk/by-partlabel/lun0p1\", \"format\": \"xfs\", \"path\": \"/var/lib/lun0p1\" } ] }, \"systemd\": { \"units\": [ 7 { \"contents\": \"[Unit]\\nBefore=local-fs.target\\n[Mount]\\nWhere=/var/lib/lun0p1\\nWhat=/dev/disk/by-partlabel/lun0p1\\nOptions=defaults,pquota\\n[Install]\\nWantedBy=local-fs.target\\n\", 8 \"enabled\": true, \"name\": \"var-lib-lun0p1.mount\" } ] }",
"oc -n openshift-machine-api get secret <role>-user-data \\ 1 --template='{{index .data.disableTemplating | base64decode}}' | jq > disableTemplating.txt",
"oc -n openshift-machine-api create secret generic <role>-user-data-x5 \\ 1 --from-file=userData=userData.txt --from-file=disableTemplating=disableTemplating.txt",
"oc --namespace openshift-machine-api edit controlplanemachineset.machine.openshift.io cluster",
"apiVersion: machine.openshift.io/v1beta1 kind: ControlPlaneMachineSet spec: template: spec: metadata: labels: disk: ultrassd 1 providerSpec: value: ultraSSDCapability: Enabled 2 dataDisks: 3 - nameSuffix: ultrassd lun: 0 diskSizeGB: 4 deletionPolicy: Delete cachingType: None managedDisk: storageAccountType: UltraSSD_LRS userDataSecret: name: <role>-user-data-x5 4",
"oc get machines",
"oc debug node/<node-name> -- chroot /host lsblk",
"StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set.",
"failed to create vm <machine_name>: failure sending request for machine <machine_name>: cannot create vm: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code=\"BadRequest\" Message=\"Storage Account type 'UltraSSD_LRS' is not supported <more_information_about_why>.\"",
"providerSpec: value: osDisk: diskSizeGB: 128 managedDisk: diskEncryptionSet: id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name> storageAccountType: Premium_LRS",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: machines_v1beta1_machine_openshift_io: spec: providerSpec: value: securityProfile: settings: securityType: TrustedLaunch 1 trustedLaunch: uefiSettings: 2 secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: spec: providerSpec: value: osDisk: # managedDisk: securityProfile: 1 securityEncryptionType: VMGuestStateOnly 2 # securityProfile: 3 settings: securityType: ConfidentialVM 4 confidentialVM: uefiSettings: 5 secureBoot: Disabled 6 virtualizedTrustedPlatformModule: Enabled 7 vmSize: Standard_DC16ads_v5 8",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: machines_v1beta1_machine_openshift_io: spec: providerSpec: value: capacityReservationGroupID: <capacity_reservation_group> 1",
"oc get machine -n openshift-machine-api -l machine.openshift.io/cluster-api-machine-role=master",
"providerSpec: value: acceleratedNetworking: true 1 vmSize: <azure-vm-size> 2",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.machines_v1beta1_machine_openshift_io.spec.providerSpec.value.disks[0].image}{\"\\n\"}' get ControlPlaneMachineSet/cluster",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: spec: providerSpec: value: apiVersion: machine.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials 1 deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 2 labels: null sizeGb: 200 type: pd-ssd kind: GCPMachineProviderSpec 3 machineType: e2-standard-4 metadata: creationTimestamp: null metadataServiceOptions: {} networkInterfaces: - network: <cluster_id>-network subnetwork: <cluster_id>-master-subnet projectID: <project_name> 4 region: <region> 5 serviceAccounts: 6 - email: <cluster_id>-m@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform shieldedInstanceConfig: {} tags: - <cluster_id>-master targetPools: - <cluster_id>-api userDataSecret: name: master-user-data 7 zone: \"\" 8",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: machines_v1beta1_machine_openshift_io: failureDomains: gcp: - zone: <gcp_zone_a> 1 - zone: <gcp_zone_b> 2 - zone: <gcp_zone_c> - zone: <gcp_zone_d> platform: GCP 3",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: spec: providerSpec: value: disks: type: pd-ssd 1",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: spec: providerSpec: value: confidentialCompute: Enabled 1 onHostMaintenance: Terminate 2 machineType: n2d-standard-8 3",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: spec: providerSpec: value: shieldedInstanceConfig: 1 integrityMonitoring: Enabled 2 secureBoot: Disabled 3 virtualizedTrustedPlatformModule: Enabled 4",
"gcloud kms keys add-iam-policy-binding <key_name> --keyring <key_ring_name> --location <key_ring_location> --member \"serviceAccount:service-<project_number>@compute-system.iam.gserviceaccount.com\" --role roles/cloudkms.cryptoKeyEncrypterDecrypter",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: spec: providerSpec: value: disks: - type: encryptionKey: kmsKey: name: machine-encryption-key 1 keyRing: openshift-encrpytion-ring 2 location: global 3 projectID: openshift-gcp-project 4 kmsKeyServiceAccount: openshift-service-account@openshift-gcp-project.iam.gserviceaccount.com 5",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: spec: providerSpec: value: apiVersion: machine.openshift.io/v1 bootType: \"\" 1 categories: 2 - key: <category_name> value: <category_value> cluster: 3 type: uuid uuid: <cluster_uuid> credentialsSecret: name: nutanix-credentials 4 image: 5 name: <cluster_id>-rhcos type: name kind: NutanixMachineProviderConfig 6 memorySize: 16Gi 7 metadata: creationTimestamp: null project: 8 type: name name: <project_name> subnets: 9 - type: uuid uuid: <subnet_uuid> systemDiskSize: 120Gi 10 userDataSecret: name: master-user-data 11 vcpuSockets: 8 12 vcpusPerSocket: 1 13",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: spec: providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials 1 namespace: openshift-machine-api flavor: m1.xlarge 2 image: ocp1-2g2xs-rhcos kind: OpenstackProviderSpec 3 metadata: creationTimestamp: null networks: - filter: {} subnets: - filter: name: ocp1-2g2xs-nodes tags: openshiftClusterID=ocp1-2g2xs securityGroups: - filter: {} name: ocp1-2g2xs-master 4 serverGroupName: ocp1-2g2xs-master serverMetadata: Name: ocp1-2g2xs-master openshiftClusterID: ocp1-2g2xs tags: - openshiftClusterID=ocp1-2g2xs trunk: true userDataSecret: name: master-user-data",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: machines_v1beta1_machine_openshift_io: failureDomains: platform: OpenStack openstack: - availabilityZone: nova-az0 rootVolume: availabilityZone: cinder-az0 - availabilityZone: nova-az1 rootVolume: availabilityZone: cinder-az1 - availabilityZone: nova-az2 rootVolume: availabilityZone: cinder-az2",
"providerSpec: value: flavor: m1.xlarge 1",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: spec: providerSpec: value: apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials 1 diskGiB: 120 2 kind: VSphereMachineProviderSpec 3 memoryMiB: 16384 4 metadata: creationTimestamp: null network: 5 devices: - networkName: <vm_network_name> numCPUs: 4 6 numCoresPerSocket: 4 7 snapshot: \"\" template: <vm_template_name> 8 userDataSecret: name: master-user-data 9 workspace: 10 datacenter: <vcenter_data_center_name> 11 datastore: <vcenter_datastore_name> 12 folder: <path_to_vcenter_vm_folder> 13 resourcePool: <vsphere_resource_pool> 14 server: <vcenter_server_ip> 15",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: machines_v1beta1_machine_openshift_io: failureDomains: 1 platform: VSphere vsphere: 2 - name: <failure_domain_name_1> - name: <failure_domain_name_2>",
"oc get infrastructure cluster -o=jsonpath={.spec.platformSpec.vsphere.failureDomains[0].name}",
"https://vcenter.example.com/ui/app/tags/tag/urn:vmomi:InventoryServiceTag:208e713c-cae3-4b7f-918e-4051ca7d1f97:GLOBAL/permissions",
"urn:vmomi:InventoryServiceTag:208e713c-cae3-4b7f-918e-4051ca7d1f97:GLOBAL",
"apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: spec: providerSpec: value: tagIDs: 1 - <tag_id_value> 2",
"apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: spec: lifecycleHooks: preDrain: - name: EtcdQuorumOperator 1 owner: clusteroperator/etcd 2",
"oc get controlplanemachineset.machine.openshift.io cluster --namespace openshift-machine-api",
"oc get machines -l machine.openshift.io/cluster-api-machine-role==master -n openshift-machine-api",
"oc edit machine <control_plane_machine_name>",
"oc edit controlplanemachineset.machine.openshift.io cluster -n openshift-machine-api",
"oc get machines -l machine.openshift.io/cluster-api-machine-role==master -n openshift-machine-api -o wide",
"oc edit machine <control_plane_machine_name>",
"oc edit machine/<cluster_id>-master-0 -n openshift-machine-api",
"providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 availabilityZone: az0 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: m1.xlarge image: rhcos-4.14 kind: OpenstackProviderSpec metadata: creationTimestamp: null networks: - filter: {} subnets: - filter: name: refarch-lv7q9-nodes tags: openshiftClusterID=refarch-lv7q9 rootVolume: availabilityZone: nova 1 diskSize: 30 sourceUUID: rhcos-4.12 volumeType: fast-0 securityGroups: - filter: {} name: refarch-lv7q9-master serverGroupName: refarch-lv7q9-master serverMetadata: Name: refarch-lv7q9-master openshiftClusterID: refarch-lv7q9 tags: - openshiftClusterID=refarch-lv7q9 trunk: true userDataSecret: name: master-user-data",
"oc describe controlplanemachineset.machine.openshift.io/cluster --namespace openshift-machine-api",
"oc edit controlplanemachineset.machine.openshift.io/cluster --namespace openshift-machine-api",
"oc edit machine/<cluster_id>-master-1 -n openshift-machine-api",
"providerSpec: value: apiVersion: machine.openshift.io/v1alpha1 availabilityZone: az0 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: m1.xlarge image: rhcos-4.18 kind: OpenstackProviderSpec metadata: creationTimestamp: null networks: - filter: {} subnets: - filter: name: refarch-lv7q9-nodes tags: openshiftClusterID=refarch-lv7q9 securityGroups: - filter: {} name: refarch-lv7q9-master serverGroupName: refarch-lv7q9-master-az0 1 serverMetadata: Name: refarch-lv7q9-master openshiftClusterID: refarch-lv7q9 tags: - openshiftClusterID=refarch-lv7q9 trunk: true userDataSecret: name: master-user-data",
"oc describe controlplanemachineset.machine.openshift.io/cluster --namespace openshift-machine-api",
"oc edit controlplanemachineset.machine.openshift.io/cluster --namespace openshift-machine-api",
"oc delete controlplanemachineset.machine.openshift.io cluster -n openshift-machine-api",
"oc get controlplanemachineset.machine.openshift.io cluster --namespace openshift-machine-api"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/machine_management/managing-control-plane-machines |
Chapter 16. File and Print Servers | Chapter 16. File and Print Servers This chapter guides you through the installation and configuration of Samba , an open source implementation of the Server Message Block ( SMB ) and common Internet file system ( CIFS ) protocol, and vsftpd , the primary FTP server shipped with Red Hat Enterprise Linux. Additionally, it explains how to use the Print Settings tool to configure printers. 16.1. Samba Samba implements the Server Message Block (SMB) protocol in Red Hat Enterprise Linux. The SMB protocol is used to access resources on a server, such as file shares and shared printers. Additionally, Samba implements the Distributed Computing Environment Remote Procedure Call (DCE RPC) protocol used by Microsoft Windows. You can run Samba as: An Active Directory (AD) or NT4 domain member A standalone server An NT4 Primary Domain Controller (PDC) or Backup Domain Controller (BDC) Note Red Hat supports these modes only in existing installations with Windows versions which support NT4 domains. Red Hat recommends not setting up a new Samba NT4 domain, because Microsoft operating systems later than Windows 7 and Windows Server 2008 R2 do not support NT4 domains. Independently of the installation mode, you can optionally share directories and printers. This enables Samba to act as a file and print server. Note Red Hat does not support running Samba as an AD domain controller (DC). 16.1.1. The Samba Services Samba provides the following services: smbd This service provides file sharing and printing services using the SMB protocol. Additionally, the service is responsible for resource locking and for authenticating connecting users. The smb systemd service starts and stops the smbd daemon. To use the smbd service, install the samba package. nmbd This service provides host name and IP resolution using the NetBIOS over IPv4 protocol. Additionally to the name resolution, the nmbd service enables browsing the SMB network to locate domains, work groups, hosts, file shares, and printers. For this, the service either reports this information directly to the broadcasting client or forwards it to a local or master browser. The nmb systemd service starts and stops the nmbd daemon. Note that modern SMB networks use DNS to resolve clients and IP addresses. To use the nmbd service, install the samba package. winbindd The winbindd service provides an interface for the Name Service Switch (NSS) to use AD or NT4 domain users and groups on the local system. This enables, for example, domain users to authenticate to services hosted on a Samba server or to other local services. The winbind systemd service starts and stops the winbindd daemon. If you set up Samba as a domain member, winbindd must be started before the smbd service. Otherwise, domain users and groups are not available to the local system. To use the winbindd service, install the samba-winbind package. Important Red Hat only supports running Samba as a server with the winbindd service to provide domain users and groups to the local system. Due to certain limitations, such as missing Windows access control list (ACL) support and NT LAN Manager (NTLM) fallback, use of the System Security Services Daemon (SSSD) with Samba is currently not supported for these use cases. For further details, see the Red Hat Knowledgebase article What is the support status for Samba file server running on IdM clients or directly enrolled AD clients where SSSD is used as the client daemon . 16.1.2. 
Verifying the smb.conf File by Using the testparm Utility The testparm utility verifies that the Samba configuration in the /etc/samba/smb.conf file is correct. The utility detects invalid parameters and values, but also incorrect settings, such as for ID mapping. If testparm reports no problem, the Samba services will successfully load the /etc/samba/smb.conf file. Note that testparm cannot verify that the configured services will be available or work as expected. Important Red Hat recommends that you verify the /etc/samba/smb.conf file by using testparm after each modification of this file. To verify the /etc/samba/smb.conf file, run the testparm utility as the root user. If testparm reports incorrect parameters, values, or other errors in the configuration, fix the problem and run the utility again. Example 16.1. Using testparm The following output reports a non-existent parameter and an incorrect ID mapping configuration: 16.1.3. Understanding the Samba Security Modes The security parameter in the [global] section in the /etc/samba/smb.conf file manages how Samba authenticates users that are connecting to the service. Depending on the mode you install Samba in, the parameter must be set to different values: On an AD domain member, set security = ads . In this mode, Samba uses Kerberos to authenticate AD users. For details about setting up Samba as a domain member, see Section 16.1.5, "Setting up Samba as a Domain Member" . On a standalone server, set security = user . In this mode, Samba uses a local database to authenticate connecting users. For details about setting up Samba as a standalone server, see Section 16.1.4, "Setting up Samba as a Standalone Server" . On an NT4 PDC or BDC, set security = user . In this mode, Samba authenticates users to a local or LDAP database. On an NT4 domain member, set security = domain . In this mode, Samba authenticates connecting users to an NT4 PDC or BDC. You cannot use this mode on AD domain members. For details about setting up Samba as a domain member, see Section 16.1.5, "Setting up Samba as a Domain Member" . For further details, see the description of the security parameter in the smb.conf (5) man page. 16.1.4. Setting up Samba as a Standalone Server In certain situations, administrators want to set up a Samba server that is not a member of a domain. In this installation mode, Samba authenticates users to a local database instead of to a central DC. Additionally, you can enable guest access to allow users to connect to one or multiple services without authentication. 16.1.4.1. Setting up the Server Configuration for the Standalone Server To set up Samba as a standalone server: Setting up Samba as a Standalone Server Install the samba package: Edit the /etc/samba/smb.conf file and set the following parameters: This configuration defines a standalone server named Server within the Example-WG work group. Additionally, this configuration enables logging on a minimal level ( 1 ) and log files will be stored in the /var/log/samba/ directory. Samba will expand the %m macro in the log file parameter to the NetBIOS name of connecting clients. This enables individual log files for each client. For further details, see the parameter descriptions in the smb.conf (5) man page. Configure file or printer sharing. See: Section 16.1.6, "Configuring File Shares on a Samba Server" Section 16.1.7, "Setting up a Samba Print Server" Verify the /etc/samba/smb.conf file: For details, see Section 16.1.2, "Verifying the smb.conf File by Using the testparm Utility" . 
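To make the standalone setup above concrete, the following is a minimal sketch of the installation, the [global] section, and the verification step; the work group, server name, and log settings mirror the values mentioned in the text, while the rest of the file is left at its defaults and the package manager invocation is an assumption for a yum-based system.

yum install samba
# /etc/samba/smb.conf (sketch)
[global]
workgroup = Example-WG
netbios name = Server
security = user
log file = /var/log/samba/%m.log
log level = 1
# run as root after every change to smb.conf
testparm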
If you set up shares that require authentication, create the user accounts. For details, see Section 16.1.4.2, "Creating and Enabling Local User Accounts" . Open the required ports and reload the firewall configuration by using the firewall-cmd utility: Start the smb service: Optionally, enable the smb service to start automatically when the system boots: 16.1.4.2. Creating and Enabling Local User Accounts To enable users to authenticate when they connect to a share, you must create the accounts on the Samba host both in the operating system and in the Samba database. Samba requires the operating system account to validate the Access Control Lists (ACL) on file system objects and the Samba account to authenticate connecting users. If you use the passdb backend = tdbsam default setting, Samba stores user accounts in the /var/lib/samba/private/passdb.tdb database. For example, to create the example Samba user: Creating a Samba User Create the operating system account: The command adds the example account without creating a home directory. If the account is only used to authenticate to Samba, assign the /sbin/nologin command as shell to prevent the account from logging in locally. Set a password to the operating system account to enable it: Samba does not use the password set on the operating system account to authenticate. However, you need to set a password to enable the account. If an account is disabled, Samba denies access if this user connects. Add the user to the Samba database and set a password to the account: Use this password to authenticate when using this account to connect to a Samba share. Enable the Samba account: 16.1.5. Setting up Samba as a Domain Member Administrators running an AD or NT4 domain often want to use Samba to join their Red Hat Enterprise Linux server as a member to the domain. This enables you to: Access domain resources on other domain members Authenticate domain users to local services, such as sshd Share directories and printers hosted on the server to act as a file and print server 16.1.5.1. Joining a Domain To join a Red Hat Enterprise Linux system to a domain: Joining a Red Hat Enterprise Linux System to a Domain Install the following packages: To share directories or printers on the domain member, install the samba package: If you join an AD, additionally install the samba-winbind-krb5-locator package: This plug-in enables Kerberos to locate the Key Distribution Center (KDC) based on AD sites using DNS service records. Optionally, rename the existing /etc/samba/smb.conf Samba configuration file: Join the domain. For example, to join a domain named ad.example.com Using the command, the realm utility automatically: Creates a /etc/samba/smb.conf file for a membership in the ad.example.com domain Adds the winbind module for user and group lookups to the /etc/nsswitch.conf file Updates the Pluggable Authentication Module (PAM) configuration files in the /etc/pam.d/ directory Starts the winbind service and enables the service to start when the system boots For further details about the realm utility, see the realm (8) man page and the corresponding section in the Red Hat Windows Integration Guide . Optionally, set an alternative ID mapping back end or customized ID mapping settings in the /etc/samba/smb.conf file. For details, see Section 16.1.5.3, "Understanding ID Mapping" . Optionally, verify the configuration. See Section 16.1.5.2, "Verifying That Samba Was Correctly Joined As a Domain Member" . 
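The steps above reference several shell commands that are not reproduced in this text; the sequence below is an illustrative reconstruction for the example account and the ad.example.com domain, and the realm options and package set are assumptions rather than verbatim commands from this guide.

# standalone server: open the firewall and start Samba
firewall-cmd --permanent --add-service=samba
firewall-cmd --reload
systemctl start smb
systemctl enable smb
# create and enable the local Samba user 'example'
useradd -M -s /sbin/nologin example
passwd example
smbpasswd -a example
smbpasswd -e example
# domain member: join an AD domain with winbind as the client software
yum install realmd samba samba-winbind samba-winbind-krb5-locator
realm join --membership-software=samba --client-software=winbind ad.example.com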
Verify that the winbindd is running: Important To enable Samba to query domain user and group information, the winbindd service must be running before you start smbd . If you installed the samba package to share directories and printers, start the smbd service: 16.1.5.2. Verifying That Samba Was Correctly Joined As a Domain Member After you joined a Red Hat Enterprise Linux as a domain member, you can run different tests to verify that the join succeeded. See: the section called "Verifying That the Operating System Can Retrieve Domain User Accounts and Groups" the section called "Verifying If AD Domain Users Can Obtain a Kerberos Ticket" the section called "Listing the Available Domains" Verifying That the Operating System Can Retrieve Domain User Accounts and Groups Use the getent utility to verify that the operating system can retrieve domain users and groups. For example: To query the administrator account in the AD domain: To query the members of the Domain Users group in the AD domain: If the command works correctly, verify that you can use domain users and groups when you set permissions on files and directories. For example, to set the owner of the /srv/samba/example.txt file to AD\administrator and the group to AD\Domain Users : Verifying If AD Domain Users Can Obtain a Kerberos Ticket In an AD environment, users can obtain a Kerberos ticket from the DC. For example, to verify if the administrator user can obtain a Kerberos ticket: Note To use the kinit and klist utilities, install the krb5-workstation package on the Samba domain member. Obtaining a Kerberos Ticket Obtain a ticket for the [email protected] principal: Display the cached Kerberos ticket: Listing the Available Domains To list all domains available through the winbindd service, enter: If Samba was successfully joined as a domain member, the command displays the built-in and local host name, as well as the domain Samba is a member of including trusted domains. Example 16.2. Displaying the Available Domains 16.1.5.3. Understanding ID Mapping Windows domains distinguish users and groups by unique Security Identifiers (SID). However, Linux requires unique UIDs and GIDs for each user and group. If you run Samba as a domain member, the winbindd service is responsible for providing information about domain users and groups to the operating system. To enable the winbindd service to provide unique IDs for users and groups to Linux, you must configure ID mapping in the /etc/samba/smb.conf file for: The local database (default domain) The AD or NT4 domain the Samba server is a member of Each trusted domain from which users must be able to access resources on this Samba server 16.1.5.3.1. Planning ID Ranges Regardless of whether you store the Linux UIDs and GIDs in AD or if you configure Samba to generate them, each domain configuration requires a unique ID range that must not overlap with any of the other domains. Warning If you set overlapping ID ranges, Samba fails to work correctly. Example 16.3. Unique ID Ranges The following shows non-overlapping ID mapping ranges for the default ( * ), AD-DOM , and the TRUST-DOM domains. Important You can only assign one range per domain. Therefore, leave enough space between the domains ranges. This enables you to extend the range later if your domain grows. If you later assign a different range to a domain, the ownership of files and directories previously created by these users and groups will be lost. 16.1.5.3.2. 
The * Default Domain In a domain environment, you add one ID mapping configuration for each of the following: The domain the Samba server is a member of Each trusted domain that should be able to access the Samba server However, for all other objects, Samba assigns IDs from the default domain. This includes: Local Samba users and groups Samba built-in accounts and groups, such as BUILTIN\Administrators Important You must configure the default domain as described in this section to enable Samba to operate correctly. The default domain back end must be writable to permanently store the assigned IDs. For the default domain, you can use one of the following back ends: tdb When you configure the default domain to use the tdb back end, set an ID range that is big enough to include objects that will be created in the future and that are not part of a defined domain ID mapping configuration. For example, set the following in the [global] section in the /etc/samba/smb.conf file: For further details, see Section 16.1.5.4.1, "Using the tdb ID Mapping Back End" . autorid When you configure the default domain to use the autorid back end, adding additional ID mapping configurations for domains is optional. For example, set the following in the [global] section in the /etc/samba/smb.conf file: For further details, see Configuring the autorid Back End . 16.1.5.4. The Different ID Mapping Back Ends Samba provides different ID mapping back ends for specific configurations. The most frequently used back ends are: Table 16.1. Frequently Used ID Mapping Back Ends Back End Use Case tdb The * default domain only ad AD domains only rid AD and NT4 domains autorid AD, NT4, and the * default domain The following sections describe the benefits, recommended scenarios where to use the back end, and how to configure it. 16.1.5.4.1. Using the tdb ID Mapping Back End The winbindd service uses the writable tdb ID mapping back end by default to store Security Identifier (SID), UID, and GID mapping tables. This includes local users, groups, and built-in principals. Use this back end only for the * default domain. For example: For further details about the * default domain, see Section 16.1.5.3.2, "The * Default Domain" . 16.1.5.4.2. Using the ad ID Mapping Back End The ad ID mapping back end implements a read-only API to read account and group information from AD. This provides the following benefits: All user and group settings are stored centrally in AD. User and group IDs are consistent on all Samba servers that use this back end. The IDs are not stored in a local database which can corrupt, and therefore file ownerships cannot be lost. The ad back end reads the following attributes from AD: Table 16.2. Attributes the ad Back End Reads from User and Group Objects AD Attribute Name Object Type Mapped to sAMAccountName User and group User or group name, depending on the object uidNumber User User ID (UID) gidNumber Group Group ID (GID) loginShell [a] User Path to the shell of the user unixHomeDirectory User Path to the home directory of the user primaryGroupID [b] User Primary group ID [a] Samba only reads this attribute if you set idmap config DOMAIN :unix_nss_info = yes . [b] Samba only reads this attribute if you set idmap config DOMAIN :unix_primary_group = yes . Prerequisites of the ad Back End To use the ad ID mapping back end: Both users and groups must have unique IDs set in AD, and the IDs must be within the range configured in the /etc/samba/smb.conf file. 
Objects whose IDs are outside of the range will not be available on the Samba server. Users and groups must have all required attributes set in AD. If required attributes are missing, the user or group will not be available on the Samba server. The required attributes depend on your configuration. See Table 16.2, "Attributes the ad Back End Reads from User and Group Objects" . Configuring the ad Back End To configure a Samba AD member to use the ad ID mapping back end: Configuring the ad Back End on a Domain Member Edit the [global] section in the /etc/samba/smb.conf file: Add an ID mapping configuration for the default domain ( * ) if it does not exist. For example: For further details about the default domain configuration, see Section 16.1.5.3.2, "The * Default Domain" . Enable the ad ID mapping back end for the AD domain: Set the range of IDs that is assigned to users and groups in the AD domain. For example: Important The range must not overlap with any other domain configuration on this server. Additionally, the range must be set big enough to include all IDs assigned in the future. For further details, see Section 16.1.5.3.1, "Planning ID Ranges" . Set that Samba uses the RFC 2307 schema when reading attributes from AD: To enable Samba to read the login shell and the path to the users home directory from the corresponding AD attribute, set: Alternatively, you can set a uniform domain-wide home directory path and login shell that is applied to all users. For example: For details about variable substitution, see the VARIABLE SUBSTITUTIONS section in the smb.conf (5) man page. By default, Samba uses the primaryGroupID attribute of a user object as the user's primary group on Linux. Alternatively, you can configure Samba to use the value set in the gidNumber attribute instead: Verify the /etc/samba/smb.conf file: For details, see Section 16.1.2, "Verifying the smb.conf File by Using the testparm Utility" . Reload the Samba configuration: Verify that the settings work as expected. See the section called "Verifying That the Operating System Can Retrieve Domain User Accounts and Groups" . For further details, see the smb.conf (5) and idmap_ad (8) man pages. 16.1.5.4.3. Using the rid ID Mapping Back End Samba can use the relative identifier (RID) of a Windows SID to generate an ID on Red Hat Enterprise Linux. Note The RID is the last part of a SID. For example, if the SID of a user is S-1-5-21-5421822485-1151247151-421485315-30014 , then 30014 is the corresponding RID. For details, how Samba calculates the local ID, see the idmap_rid (8) man page. The rid ID mapping back end implements a read-only API to calculate account and group information based on an algorithmic mapping scheme for AD and NT4 domains. When you configure the back end, you must set the lowest and highest RID in the idmap config DOMAIN : range parameter. Samba will not map users or groups with a lower or higher RID than set in this parameter. Important As a read-only back end, rid cannot assign new IDs, such as for BUILTIN groups. Therefore, do not use this back end for the * default domain. Benefits All domain users and groups that have an RID within the configured range are automatically available on the domain member. You do not need to manually assign IDs, home directories, and login shells. Drawbacks All domain users get the same login shell and home directory assigned. However, you can use variables. 
User and group IDs are only the same across Samba domain members if all use the rid back end with the same ID range settings. You cannot exclude individual users or groups from being available on the domain member. Only users and groups outside of the configured range are excluded. Based on the formulas the winbindd service uses to calculate the IDs, duplicate IDs can occur in multi-domain environments if objects in different domains have the same RID. Configuring the rid Back End To configure a Samba domain member to use the rid ID mapping back end: Configuring the rid Back End on a Domain Member Edit the [global] section in the /etc/samba/smb.conf file: Add an ID mapping configuration for the default domain ( * ) if it does not exist. For example: For further details about the default domain configuration, see Section 16.1.5.3.2, "The * Default Domain" . Enable the rid ID mapping back end for the domain: Set a range that is big enough to include all RIDs that will be assigned in the future. For example: Samba ignores users and groups whose RIDs in this domain are not within the range. Important The range must not overlap with any other domain configuration on this server. For further details, see Section 16.1.5.3.1, "Planning ID Ranges" . Set a shell and home directory path that will be assigned to all mapped users. For example: For details about variable substitution, see the VARIABLE SUBSTITUTIONS section in the smb.conf (5) man page. Verify the /etc/samba/smb.conf file: For details, see Section 16.1.2, "Verifying the smb.conf File by Using the testparm Utility" . Reload the Samba configuration: Verify that the settings work as expected. See the section called "Verifying That the Operating System Can Retrieve Domain User Accounts and Groups" . 16.1.5.4.4. Using the autorid ID Mapping Back End The autorid back end works similar to the rid ID mapping back end, but can automatically assign IDs for different domains. This enables you to use the autorid back end in the following situations: Only for the * default domain. For the * default domain and additional domains, without the need to create ID mapping configurations for each of the additional domains. Only for specific domains. Benefits All domain users and groups whose calculated UID and GID is within the configured range are automatically available on the domain member. You do not need to manually assign IDs, home directories, and login shells. No duplicate IDs, even if multiple objects in a multi-domain environment have the same RID. Drawbacks User and group IDs are not the same across Samba domain members. All domain users get the same login shell and home directory assigned. However, you can use variables. You cannot exclude individual users or groups from being available on the domain member. Only users and groups whose calculated UID or GID is outside of the configured range are excluded. Configuring the autorid Back End To configure a Samba domain member to use the autorid ID mapping back end for the * default domain: Note If you use autorid for the default domain, adding additional ID mapping configuration for domains is optional. Configuring the autorid Back End on a Domain Member Edit the [global] section in the /etc/samba/smb.conf file: Enable the autorid ID mapping back end for the * default domain: Set a range that is big enough to assign IDs for all existing and future objects. For example: Samba ignores users and groups whose calculated IDs in this domain are not within the range. 
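Pulling the autorid steps above together, a minimal [global] sketch might look as follows; the range, range size, and template values are illustrative only and must be sized for your own environment, as the guide stresses.

# /etc/samba/smb.conf (sketch)
[global]
idmap config * : backend = autorid
idmap config * : range = 10000-999999
idmap config * : rangesize = 100000
# optional: uniform shell and home directory for all mapped users
template shell = /bin/bash
template homedir = /home/%U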
For details about how the back end calculates IDs, see the THE MAPPING FORMULAS section in the idmap_autorid (8) man page. Warning After you set the range and Samba starts using it, you can only increase the upper limit of the range. Any other change to the range can result in new ID assignments, and thus in losing file ownerships. Optionally, set a range size. For example: Samba assigns this number of continuous IDs for each domain's object until all IDs from the range set in the idmap config * : range parameter are taken. For further details, see the rangesize parameter description in the idmap_autorid (8) man page. Set a shell and home directory path that will be assigned to all mapped users. For example: For details about variable substitution, see the VARIABLE SUBSTITUTIONS section in the smb.conf (5) man page. Optionally, add additional ID mapping configuration for domains. If no configuration for an individual domain is available, Samba calculates the ID using the autorid back end settings in the previously configured * default domain. Important If you configure additional back ends for individual domains, the ranges for all ID mapping configurations must not overlap. For further details, see Section 16.1.5.3.1, "Planning ID Ranges" . Verify the /etc/samba/smb.conf file: For details, see Section 16.1.2, "Verifying the smb.conf File by Using the testparm Utility" . Reload the Samba configuration: Verify that the settings work as expected. See the section called "Verifying That the Operating System Can Retrieve Domain User Accounts and Groups" . 16.1.6. Configuring File Shares on a Samba Server To use Samba as a file server, add shares to the /etc/samba/smb.conf file of your standalone or domain member configuration. You can add shares that use either: POSIX ACLs. See Section 16.1.6.1, "Setting up a Share That Uses POSIX ACLs" . Fine-granular Windows ACLs. See Section 16.1.6.2, "Setting up a Share That Uses Windows ACLs" . 16.1.6.1. Setting up a Share That Uses POSIX ACLs As a Linux service, Samba supports shares with POSIX ACLs. They enable you to manage permissions locally on the Samba server using utilities, such as chmod . If the share is stored on a file system that supports extended attributes, you can define ACLs with multiple users and groups. Note If you need to use fine-granular Windows ACLs instead, see Section 16.1.6.2, "Setting up a Share That Uses Windows ACLs" . Before you can add a share, set up Samba. See: Section 16.1.4, "Setting up Samba as a Standalone Server" Section 16.1.5, "Setting up Samba as a Domain Member" 16.1.6.1.1. Adding a Share That Uses POSIX ACLs To create a share named example that provides the content of the /srv/samba/example/ directory and uses POSIX ACLs: Adding a Share That Uses POSIX ACLs Optionally, create the folder if it does not exist. For example: If you run SELinux in enforcing mode, set the samba_share_t context on the directory: Set file system ACLs on the directory. For details, see Section 16.1.6.1.2, "Setting ACLs" . Add the example share to the /etc/samba/smb.conf file. For example, to add the share write-enabled: Note Regardless of the file system ACLs, if you do not set read only = no , Samba shares the directory in read-only mode. Verify the /etc/samba/smb.conf file: For details, see Section 16.1.2, "Verifying the smb.conf File by Using the testparm Utility" .
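As an illustration of the share-creation steps just listed, the commands and share section below create the /srv/samba/example/ directory, label it for SELinux, and publish it write-enabled; the semanage/restorecon invocation is one common way to set the samba_share_t context and is an assumption, not a quotation from this guide.

mkdir -p /srv/samba/example/
semanage fcontext -a -t samba_share_t "/srv/samba/example(/.*)?"
restorecon -Rv /srv/samba/example/
# /etc/samba/smb.conf (sketch)
[example]
path = /srv/samba/example/
read only = no
# verify the configuration
testparm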
Open the required ports and reload the firewall configuration using the firewall-cmd utility: Restart the smb service: Optionally, enable the smb service to start automatically at boot time: 16.1.6.1.2. Setting ACLs Shares that use POSIX ACLs support: Standard Linux ACLs. For details, see Setting Standard Linux ACLs . Extended ACLs. For details, see Setting Extended ACLs . Setting Standard Linux ACLs The standard ACLs on Linux support setting permissions for one owner, one group, and for all other undefined users. You can use the chown , chgrp , and chmod utility to update the ACLs. If you require precise control, then you use the more complex POSIX ACLs, see Setting Extended ACLs . For example, to set the owner of the /srv/samba/example/ directory to the root user, grant read and write permissions to the Domain Users group, and deny access to all other users: Note Enabling the set-group-ID (SGID) bit on a directory automatically sets the default group for all new files and subdirectories to that of the directory group, instead of the usual behavior of setting it to the primary group of the user who created the new directory entry. For further details about permissions, see the chown (1) and chmod (1) man pages. Setting Extended ACLs If the file system the shared directory is stored on supports extended ACLs, you can use them to set complex permissions. Extended ACLs can contain permissions for multiple users and groups. Extended POSIX ACLs enable you to configure complex ACLs with multiple users and groups. However, you can only set the following permissions: No access Read access Write access Full control If you require the fine-granular Windows permissions, such as Create folder / append data , configure the share to use Windows ACLs. See Section 16.1.6.2, "Setting up a Share That Uses Windows ACLs" . To use extended POSIX ACLs on a share: Enabling Extended POSIX ACLs on a Share Enable the following parameter in the share's section in the /etc/samba/smb.conf file to enable ACL inheritance of extended ACLs: For details, see the parameter description in the smb.conf (5) man page. Restart the smb service: Optionally, enable the smb service to start automatically at boot time: Set the ACLs on the directory. For details about using extended ACLs, see Chapter 5, Access Control Lists . Example 16.4. Setting Extended ACLs The following procedure sets read, write, and execute permissions for the Domain Admins group, read, and execute permissions for the Domain Users group, and deny access to everyone else on the /srv/samba/example/ directory: Setting Extended ACLs Disable auto-granting permissions to the primary group of user accounts: The primary group of the directory is additionally mapped to the dynamic CREATOR GROUP principal. When you use extended POSIX ACLs on a Samba share, this principal is automatically added and you cannot remove it. Set the permissions on the directory: Grant read, write, and execute permissions to the Domain Admins group: Grant read and execute permissions to the Domain Users group: Set permissions for the other ACL entry to deny access to users that do not match the other ACL entries: These settings apply only to this directory. In Windows, these ACLs are mapped to the This folder only mode. To enable the permissions set in the step to be inherited by new file system objects created in this directory: With these settings, the This folder only mode for the principals is now set to This folder, subfolders, and files . 
Samba maps the previously set permissions to the following Windows ACLs: Principal Access Applies to DOMAIN \Domain Admins Full control This folder, subfolders, and files DOMAIN \Domain Users Read & execute This folder, subfolders, and files Everyone [a] None This folder, subfolders, and files owner ( Unix User\owner ) [b] Full control This folder only primary_group ( Unix User\primary_group ) [c] None This folder only CREATOR OWNER [d] [e] Full control Subfolders and files only CREATOR GROUP [f] None Subfolders and files only [a] Samba maps the permissions for this principal from the other ACL entry. [b] Samba maps the owner of the directory to this entry. [c] Samba maps the primary group of the directory to this entry. [d] On new file system objects, the creator automatically inherits the permissions of this principal. [e] Configuring or removing these principals from the ACLs is not supported on shares that use POSIX ACLs. [f] On new file system objects, the creator's primary group automatically inherits the permissions of this principal. 16.1.6.1.3. Setting Permissions on a Share Optionally, to limit or grant access to a Samba share, you can set certain parameters in the share's section in the /etc/samba/smb.conf file. Note Share-based permissions manage whether a user, group, or host is able to access a share. These settings do not affect file system ACLs. Use share-based settings to restrict access to shares, for example, to deny access from specific hosts. Configuring User and Group-based Share Access User and group-based access control enables you to grant or deny access to a share for certain users and groups. For example, to enable all members of the Domain Users group to access a share while access is denied for the user account, add the following parameters to the share's configuration: The invalid users parameter has a higher priority than the valid users parameter. For example, if the user account is a member of the Domain Users group, access is denied to this account when you use the example. For further details, see the parameter descriptions in the smb.conf (5) man page. Configuring Host-based Share Access Host-based access control enables you to grant or deny access to a share based on clients' host names, IP addresses, or IP ranges. For example, to enable the 127.0.0.1 IP address, the 192.0.2.0/24 IP range, and the client1.example.com host to access a share, and additionally deny access for the client2.example.com host: Configuring Host-based Share Access Add the following parameters to the configuration of the share in the /etc/samba/smb.conf file: Reload the Samba configuration: The hosts deny parameter has a higher priority than hosts allow . For example, if client1.example.com resolves to an IP address that is listed in the hosts allow parameter, access for this host is denied. For further details, see the parameter description in the smb.conf (5) man page. 16.1.6.2. Setting up a Share That Uses Windows ACLs Samba supports setting Windows ACLs on shares and file system objects. This enables you to: Use the fine-granular Windows ACLs Manage share permissions and file system ACLs using Windows Alternatively, you can configure a share to use POSIX ACLs. For details, see Section 16.1.6.1, "Setting up a Share That Uses POSIX ACLs" . 16.1.6.2.1. Granting the SeDiskOperatorPrivilege Privilege Only users and groups having the SeDiskOperatorPrivilege privilege granted can configure permissions on shares that use Windows ACLs.
For example, to grant the privilege to the DOMAIN \Domain Admins group: Note In a domain environment, grant SeDiskOperatorPrivilege to a domain group. This enables you to centrally manage the privilege by updating a user's group membership. To list all users and groups having SeDiskOperatorPrivilege granted: 16.1.6.2.2. Enabling Windows ACL Support To configure shares that support Windows ACLs, you must enable this feature in Samba. To enable it globally for all shares, add the following settings to the [global] section of the /etc/samba/smb.conf file: Alternatively, you can enable Windows ACL support for individual shares by adding the same parameters to a share's section instead. 16.1.6.2.3. Adding a Share That Uses Windows ACLs To create a share named example that shares the content of the /srv/samba/example/ directory and uses Windows ACLs: Adding a Share That Uses Windows ACLs Optionally, create the folder if it does not exist. For example: If you run SELinux in enforcing mode, set the samba_share_t context on the directory: Add the example share to the /etc/samba/smb.conf file. For example, to add the share write-enabled: Note Regardless of the file system ACLs, if you do not set read only = no , Samba shares the directory in read-only mode. If you have not enabled Windows ACL support in the [global] section for all shares, add the following parameters to the [example] section to enable this feature for this share: Verify the /etc/samba/smb.conf file: For details, see Section 16.1.2, "Verifying the smb.conf File by Using the testparm Utility" . Open the required ports and reload the firewall configuration using the firewall-cmd utility: Restart the smb service: Optionally, enable the smb service to start automatically at boot time: 16.1.6.2.4. Managing Share Permissions and File System ACLs of a Share That Uses Windows ACLs To manage share and file system ACLs on a Samba share that uses Windows ACLs, use a Windows application, such as Computer Management . For details, see your Windows documentation. Alternatively, use the smbcacls utility to manage ACLs. For details, see Section 16.1.6.3, "Managing ACLs on an SMB Share Using smbcacls " . Note To modify the file system permissions from Windows, you must use an account that has the SeDiskOperatorPrivilege privilege granted. See Section 16.1.6.2.1, "Granting the SeDiskOperatorPrivilege Privilege" . 16.1.6.3. Managing ACLs on an SMB Share Using smbcacls The smbcacls utility can list, set, and delete ACLs of files and directories stored on an SMB share. You can use smbcacls to manage file system ACLs: On a local or remote Samba server that uses advanced Windows ACLs or POSIX ACLs. On Red Hat Enterprise Linux to remotely manage ACLs on a share hosted on Windows. 16.1.6.3.1. Understanding Access Control Entries Each ACL of a file system object contains Access Control Entries (ACEs) in the following format: Example 16.5. Access Control Entries If the AD\Domain Users group has Modify permissions that apply to This folder, subfolders, and files on Windows, the ACL contains the following ACEs: The following describes the individual ACEs: Security principal The security principal is the user, group, or SID the permissions in the ACL are applied to. Access right Defines whether access to an object is granted or denied. The value can be ALLOWED or DENIED . Inheritance information The following values exist: Table 16.3.
Inheritance Settings Value Description Maps to OI Object Inherit This folder and files CI Container Inherit This folder and subfolders IO Inherit Only The ACE does not apply to the current file or directory. ID Inherited The ACE was inherited from the parent directory. Additionally, the values can be combined as follows: Table 16.4. Inheritance Settings Combinations Value Combinations Maps to the Windows Applies to Setting OI/CI This folder, subfolders, and files OI/CI/IO Subfolders and files only CI/IO Subfolders only OI/IO Files only Permissions This value can be either a hex value that represents one or more Windows permissions or an smbcacls alias: A hex value that represents one or more Windows permissions. The following table displays the advanced Windows permissions and their corresponding value in hex format: Table 16.5. Windows Permissions and Their Corresponding smbcacls Value in Hex Format Windows Permissions Hex Values Full control 0x001F01FF Traverse folder / execute file 0x00100020 List folder / read data 0x00100001 Read attributes 0x00100080 Read extended attributes 0x00100008 Create files / write data 0x00100002 Create folders / append data 0x00100004 Write attributes 0x00100100 Write extended attributes 0x00100010 Delete subfolders and files 0x00100040 Delete 0x00110000 Read permissions 0x00120000 Change permissions 0x00140000 Take ownership 0x00180000 Multiple permissions can be combined as a single hex value using the bit-wise OR operation. For details, see Section 16.1.6.3.3, "Calculating an ACE Mask" . An smbcacls alias. The following table displays the available aliases: Table 16.6. Existing smbcacls Aliases and Their Corresponding Windows Permission smbcacls Alias Maps to Windows Permission R Read READ Read & execute W Special Create files / write data Create folders / append data Write attributes Write extended attributes Read permissions D Delete P Change permissions O Take ownership X Traverse / execute CHANGE Modify FULL Full control Note You can combine single-letter aliases when you set permissions. For example, you can set RD to apply the Windows permission Read and Delete . However, you can neither combine multiple non-single-letter aliases nor combine aliases and hex values. 16.1.6.3.2. Displaying ACLs Using smbcacls If you run smbcacls without any operation parameter, such as --add , the utility displays the ACLs of a file system object. For example, to list the ACLs of the root directory of the //server/example share: The output of the command displays: REVISION : The internal Windows NT ACL revision of the security descriptor CONTROL : Security descriptor control OWNER : Name or SID of the security descriptor's owner GROUP : Name or SID of the security descriptor's group ACL entries. For details, see Section 16.1.6.3.1, "Understanding Access Control Entries" . 16.1.6.3.3. Calculating an ACE Mask In most situations, when you add or update an ACE, you use the smbcacls aliases listed in Table 16.6, "Existing smbcacls Aliases and Their Corresponding Windows Permission" . However, if you want to set advanced Windows permissions as listed in Table 16.5, "Windows Permissions and Their Corresponding smbcacls Value in Hex Format" , you must use the bit-wise OR operation to calculate the correct value. You can use the following shell command to calculate the value: Example 16.6. 
Calculating an ACE Mask You want to set the following permissions: Traverse folder / execute file ( 0x00100020 ) List folder / read data ( 0x00100001 ) Read attributes ( 0x00100080 ) To calculate the hex value for the permissions, enter: Use the returned value when you set or update an ACE. 16.1.6.3.4. Adding, Updating, and Removing an ACL Using smbcacls Depending on the parameter you pass to the smbcacls utility, you can add, update, and remove ACLs from a file or directory. Adding an ACL To add an ACL to the root of the //server/example share that grants CHANGE permissions for This folder, subfolders, and files to the AD\Domain Users group: Updating an ACL Updating an ACL is similar to adding a new ACL. You update an ACL by overriding the ACL using the --modify parameter with an existing security principal. If smbcacls finds the security principal in the ACL list, the utility updates the permissions. Otherwise the command fails with an error: For example, to update the permissions of the AD\Domain Users group and set them to READ for This folder, subfolders, and files : Deleting an ACL To delete an ACL, pass the --delete parameter with the exact ACL to the smbcacls utility. For example: 16.1.6.4. Enabling Users to Share Directories on a Samba Server On a Samba server, you can enable users to share directories without root permissions. 16.1.6.4.1. Enabling the User Shares Feature Before users can share directories, the administrator must enable user shares in Samba. For example, to enable only members of the local example group to create user shares: Enabling User Shares Create the local example group, if it does not exist: Prepare the directory for Samba to store the user share definitions and set its permissions properly. For example: Create the directory: Set write permissions for the example group: Set the sticky bit to prevent users from renaming or deleting files stored by other users in this directory. Edit the /etc/samba/smb.conf file and add the following to the [global] section: Set the path to the directory you configured to store the user share definitions. For example: Set how many user shares Samba allows to be created on this server. For example: If you use the default of 0 for the usershare max shares parameter, user shares are disabled. Optionally, set a list of absolute directory paths. For example, to configure Samba to allow only subdirectories of the /data and /srv directories to be shared, set: For a list of further user share-related parameters you can set, see the USERSHARES section in the smb.conf (5) man page. Verify the /etc/samba/smb.conf file: For details, see Section 16.1.2, "Verifying the smb.conf File by Using the testparm Utility" . Reload the Samba configuration: Users are now able to create user shares. For details, see Section 16.1.6.4.2, "Adding a User Share" . 16.1.6.4.2. Adding a User Share After you configured Samba according to Section 16.1.6.4.1, "Enabling the User Shares Feature" , users can share directories on the Samba server without root permissions by running the net usershare add command. Synopsis of the net usershare add command: net usershare add share_name path comment ACLs guest_ok=y|n Important If you set ACLs when you create a user share, you must specify the comment parameter prior to the ACLs. To set an empty comment, use an empty string in double quotes. Note that users can only enable guest access on a user share if the administrator set usershare allow guests = yes in the [global] section in the /etc/samba/smb.conf file.
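The user share setup described above can be sketched as follows; the /var/lib/samba/usershares/ path, the limit of 100 shares, and the allowed prefixes are illustrative values, not requirements of this guide.

# create the group that may define user shares
groupadd example
# directory that stores the user share definitions (group-writable, sticky bit set)
mkdir -p /var/lib/samba/usershares/
chgrp example /var/lib/samba/usershares/
chmod 1770 /var/lib/samba/usershares/
# /etc/samba/smb.conf (sketch), [global] section
usershare path = /var/lib/samba/usershares/
usershare max shares = 100
usershare prefix allow list = /data /srv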
Example 16.7. Adding a User Share A user wants to share the /srv/samba/ directory on a Samba server. The share should be named example , have no comment set, and should be accessible by guest users. Additionally, the share permissions should be set to full access for the AD\Domain Users group and read permissions for other users. To add this share, run as the user: 16.1.6.4.3. Updating Settings of a User Share If you want to update the settings of a user share, override the share by using the net usershare add command with the same share name and the new settings. See Section 16.1.6.4.2, "Adding a User Share" . 16.1.6.4.4. Displaying Information About Existing User Shares Users can enter the net usershare info command on a Samba server to display user shares and their settings. To display all user shares created by any user: To list only shares created by the user who runs the command, omit the -l parameter. To display only the information about specific shares, pass the share name or wild cards to the command. For example, to display the information about shares whose name starts with share_ : 16.1.6.4.5. Listing User Shares If you want to list only the available user shares without their settings on a Samba server, use the net usershare list command. To list the shares created by any user: To list only shares created by the user who runs the command, omit the -l parameter. To list only specific shares, pass the share name or wild cards to the command. For example, to list only shares whose name starts with share_ : 16.1.6.4.6. Deleting a User Share To delete a user share, enter as the user who created the share or as the root user: 16.1.6.5. Enabling Guest Access to a Share In certain situations, you want to share a directory to which users can connect without authentication. To configure this, enable guest access on a share. Warning Shares that do not require authentication can be a security risk. If guest access is enabled on a share, Samba maps guest connections to the operating system account set in the guest account parameter. Guest users can access these files if at least one of the following conditions is satisfied: The account is listed in file system ACLs The POSIX permissions for other users allow it Example 16.8. Guest Share Permissions If you configured Samba to map the guest account to nobody , which is the default, the ACLs in the following example: Allow guest users to read file1.txt Allow guest users to read and modify file2.txt . Prevent guest users from reading or modifying file3.txt For example, to enable guest access for the existing [example] share: Setting up a Guest Share Edit the /etc/samba/smb.conf file: If this is the first guest share you set up on this server: Set map to guest = Bad User in the [global] section: With this setting, Samba rejects login attempts that use an incorrect password unless the user name does not exist. If the specified user name does not exist and guest access is enabled on a share, Samba treats the connection as a guest login. By default, Samba maps the guest account to the nobody account on Red Hat Enterprise Linux. Optionally, you can set a different account. For example: The account set in this parameter must exist locally on the Samba server. For security reasons, Red Hat recommends using an account that does not have a valid shell assigned. Add the guest ok = yes setting to the [example] section: Verify the /etc/samba/smb.conf file: For details, see Section 16.1.2, "Verifying the smb.conf File by Using the testparm Utility" .
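For the guest access procedure above, a minimal configuration sketch could look like this; the share path is carried over from the earlier example share, and the reload command is one possible way to apply the change.

# /etc/samba/smb.conf (sketch)
[global]
map to guest = Bad User
# optional; nobody is the default guest account
guest account = nobody
[example]
path = /srv/samba/example/
guest ok = yes
# apply the configuration without restarting the service
smbcontrol all reload-config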
Reload the Samba configuration: 16.1.7. Setting up a Samba Print Server If you set up Samba as a print server, clients in your network can use Samba to print. Additionally, Windows clients can, if configured, download the driver from the Samba server. Before you can share a printer, set up Samba: Section 16.1.4, "Setting up Samba as a Standalone Server" Section 16.1.5, "Setting up Samba as a Domain Member" 16.1.7.1. The Samba spoolssd Service The Samba spoolssd is a service that is integrated into the smbd service. Enable spoolssd in the Samba configuration to significantly increase the performance on print servers with a high number of jobs or printers. Without spoolssd , Samba forks the smbd process and initializes the printcap cache for each print job. In case of a large number of printers, the smbd service can become unresponsive for multiple seconds while the cache is initialized. The spoolssd service enables you to start pre-forked smbd processes that are processing print jobs without any delays. The main spoolssd smbd process uses a low amount of memory, and forks and terminates child processes. To enable the spoolssd service: Enabling the spoolssd Service Edit the [global] section in the /etc/samba/smb.conf file: Add the following parameters: Optionally, you can set the following parameters: Parameter Default Description spoolssd:prefork_min_children 5 Minimum number of child processes spoolssd:prefork_max_children 25 Maximum number of child processes spoolssd:prefork_spawn_rate 5 Samba forks the number of new child processes set in this parameter, up to the value set in spoolssd:prefork_max_children , if a new connection is established spoolssd:prefork_max_allowed_clients 100 Number of clients, a child process serves spoolssd:prefork_child_min_life 60 Minimum lifetime of a child process in seconds. 60 seconds is the minimum. Verify the /etc/samba/smb.conf file: For details, see Section 16.1.2, "Verifying the smb.conf File by Using the testparm Utility" . Restart the smb service: After you restarted the service, Samba automatically starts smbd child processes: 16.1.7.2. Enabling Print Server Support in Samba To enable the print server support: Enabling Print Server Support in Samba On the Samba server, set up CUPS and add the printer to the CUPS back end. For details, see Section 16.3, "Print Settings" . Note Samba can only forward the print jobs to CUPS if CUPS is installed locally on the Samba print server. Edit the /etc/samba/smb.conf file: If you want to enable the spoolssd service, add the following parameters to the [global] section: For further details, see Section 16.1.7.1, "The Samba spoolssd Service" . To configure the printing back end, add the [printers] section: Important The printers share name is hard-coded and cannot be changed. Verify the /etc/samba/smb.conf file: For details, see Section 16.1.2, "Verifying the smb.conf File by Using the testparm Utility" . Open the required ports and reload the firewall configuration using the firewall-cmd utility: Restart the smb service: After restarting the service, Samba automatically shares all printers that are configured in the CUPS back end. If you want to manually share only specific printers, see Section 16.1.7.3, "Manually Sharing Specific Printers" . 16.1.7.3. Manually Sharing Specific Printers If you configured Samba as a print server, by default, Samba shares all printers that are configured in the CUPS back end. 
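To check which printers the Samba server currently exports, you might list its shares with the smbclient utility; the server name and account below are placeholders:
        # smbclient -L server -U "DOMAIN\user"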
To share only specific printers: Manually Sharing a Specific Printer Edit the /etc/samba/smb.conf file: In the [global] section, disable automatic printer sharing by setting: Add a section for each printer you want to share. For example, to share the printer named example in the CUPS back end as Example-Printer in Samba, add the following section: You do not need individual spool directories for each printer. You can set the same spool directory in the path parameter for the printer as you set in the [printers] section. Verify the /etc/samba/smb.conf file: For details, see Section 16.1.2, "Verifying the smb.conf File by Using the testparm Utility" . Reload the Samba configuration: 16.1.7.4. Setting up Automatic Printer Driver Downloads for Windows Clients If you are running a Samba print server for Windows clients, you can upload drivers and preconfigure printers. If a user connects to a printer, Windows automatically downloads and installs the driver locally on the client. The user does not require local administrator permissions for the installation. Additionally, Windows applies preconfigured driver settings, such as the number of trays. Note Before setting up automatic printer driver download, must configure Samba as a print server and share a printer. For details, see Section 16.1.7, "Setting up a Samba Print Server" . 16.1.7.4.1. Basic Information about Printer Drivers This section provides general information about printer drivers. Supported Driver Model Version Samba only supports the printer driver model version 3 which is supported in Windows 2000 and later, and Windows Server 2000 and later. Samba does not support the driver model version 4, introduced in Windows 8 and Windows Server 2012. However, these and later Windows versions also support version 3 drivers. Package-aware Drivers Samba does not support package-aware drivers. Preparing a Printer Driver for Being Uploaded Before you can upload a driver to a Samba print server: Unpack the driver if it is provided in a compressed format. Some drivers require to start a setup application that installs the driver locally on a Windows host. In certain situations, the installer extracts the individual files into the operating system's temporary folder during the setup runs. To use the driver files for uploading: Start the installer. Copy the files from the temporary folder to a new location. Cancel the installation. Ask your printer manufacturer for drivers that support uploading to a print server. Providing 32-bit and 64-bit Drivers for a Printer to a Client To provide the driver for a printer for both 32-bit and 64-bit Windows clients, you must upload a driver with exactly the same name for both architectures. For example, if you are uploading the 32-bit driver named Example PostScript and the 64-bit driver named Example PostScript (v1.0) , the names do not match. Consequently, you can only assign one of the drivers to a printer and the driver will not be available for both architectures. 16.1.7.4.2. Enabling Users to Upload and Preconfigure Drivers To be able to upload and preconfigure printer drivers, a user or a group needs to have the SePrintOperatorPrivilege privilege granted. A user must be added into the printadmin group. Red Hat Enterprise Linux creates this group automatically when you install the samba package. The printadmin group gets assigned the lowest available dynamic system GID that is lower than 1000. 
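For example, adding an existing local user to the printadmin group might look like the following; the user name is purely illustrative:
        # gpasswd -a exampleuser printadmin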
To grant the SePrintOperatorPrivilege privilege to the printadmin group: Note In a domain environment, grant SePrintOperatorPrivilege to a domain group. This enables you to centrally manage the privilege by updating a user's group membership. To list all users and groups having SePrintOperatorPrivilege granted: 16.1.7.4.3. Setting up the printUSD Share Windows operating systems download printer drivers from a share named printUSD from a print server. This share name is hard-coded in Windows and cannot be changed. To share the /var/lib/samba/drivers/ directory as printUSD , and enable members of the local printadmin group to upload printer drivers: Setting up the printUSD Share Add the [printUSD] section to the /etc/samba/smb.conf file: Using these settings: Only members of the printadmin group can upload printer drivers to the share. The group of new created files and directories will be set to printadmin . The permissions of new files will be set to 664 . The permissions of new directories will be set to 2775 . To upload only 64-bit drivers for a printer, include this setting in the [global] section in the /etc/samba/smb.conf file: Without this setting, Windows only displays drivers for which you have uploaded at least the 32-bit version. Verify the /etc/samba/smb.conf file: For details, see Section 16.1.2, "Verifying the smb.conf File by Using the testparm Utility" . Reload the Samba configuration Create the printadmin group if it does not exists: Grant the SePrintOperatorPrivilege privilege to the printadmin group. For further details, see Section 16.1.7.4.2, "Enabling Users to Upload and Preconfigure Drivers" . If you run SELinux in enforcing mode, set the samba_share_t context on the directory: Set the permissions on the /var/lib/samba/drivers/ directory: If you use POSIX ACLs, set: If you use Windows ACLs, set: Principal Access Apply to CREATOR OWNER Full control Subfolders and files only Authenticated Users Read & execute, List folder contents, Read This folder, subfolders and files printadmin Full control This folder, subfolders and files For details about setting ACLs on Windows, see your Windows documentation. 16.1.7.4.4. Creating a GPO to Enable Clients to Trust the Samba Print Server For security reasons, recent Windows operating systems prevent clients from downloading non-package-aware printer drivers from an untrusted server. If your print server is a member in an AD, you can create a Group Policy Object (GPO) in your domain to trust the Samba server. To create GPOs, the Windows computer you are using must have the Windows Remote Server Administration Tools (RSAT) installed. For details, see your Windows documentation. Creating a GPO to Enable Clients to Trust the Samba Print Server Log into a Windows computer using an account that is allowed to edit group policies, such as the AD domain Administrator user. Open the Group Policy Management Console. Right-click to your AD domain and select Create a GPO in this domain, and Link it here Enter a name for the GPO, such as Legacy printer Driver Policy and click OK . The new GPO will be displayed under the domain entry. Right-click to the newly-created GPO and select Edit to open the Group Policy Management Editor . Navigate to Computer Configuration Policies Administrative Templates Printers . 
On the right side of the window, double-click Point and Print Restriction to edit the policy: Enable the policy and set the following options: Select Users can only point and print to these servers and enter the fully-qualified domain name (FQDN) of the Samba print server to the field to this option. In both check boxes under Security Prompts , select Do not show warning or elevation prompt . Click OK . Double-click Package Point and Print - Approved servers to edit the policy: Enable the policy and click the Show button. Enter the FQDN of the Samba print server. Close both the Show Contents and policy properties window by clicking OK . Close the Group Policy Management Editor . Close the Group Policy Management Console. After the Windows domain members applied the group policy, printer drivers are automatically downloaded from the Samba server when a user connects to a printer. For further details about using group policies, see your Windows documentation. 16.1.7.4.5. Uploading Drivers and Preconfiguring Printers Use the Print Management application on a Windows client to upload drivers and preconfigure printers hosted on the Samba print server. For further details, see your Windows documentation. 16.1.8. Tuning the Performance of a Samba Server This section describes what settings can improve the performance of Samba in certain situations, and which settings can have a negative performance impact. 16.1.8.1. Setting the SMB Protocol Version Each new SMB version adds features and improves the performance of the protocol. The recent Windows and Windows Server operating systems always supports the latest protocol version. If Samba also uses the latest protocol version, Windows clients connecting to Samba benefit from the performance improvements. In Samba, the default value of the server max protocol is set to the latest supported stable SMB protocol version. To always have the latest stable SMB protocol version enabled, do not set the server max protocol parameter. If you set the parameter manually, you will need to modify the setting with each new version of the SMB protocol, to have the latest protocol version enabled. To unset, remove the server max protocol parameter from the [global] section in the /etc/samba/smb.conf file. 16.1.8.2. Tuning Shares with Directories That Contain a Large Number of Files To improve the performance of shares that contain directories with more than 100.000 files: Tuning Shares with Directories That Contain a Large Number of Files Rename all files on the share to lowercase. Note Using the settings in this procedure, files with names other than in lowercase will no longer be displayed. Set the following parameters in the share's section: For details about the parameters, see their descriptions in the smb.conf (5) man page. Reload the Samba configuration: After you applied these settings, the names of all newly created files on this share use lowercase. Because of these settings, Samba no longer needs to scan the directory for uppercase and lowercase, which improves the performance. 16.1.8.3. Settings That Can Have a Negative Performance Impact By default, the kernel in Red Hat Enterprise Linux is tuned for high network performance. For example, the kernel uses an auto-tuning mechanism for buffer sizes. Setting the socket options parameter in the /etc/samba/smb.conf file overrides these kernel settings. As a result, setting this parameter decreases the Samba network performance in most cases. 
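To check whether the parameter is currently set, a quick search of the configuration file is enough; this is only a convenience sketch:
        # grep -i "socket options" /etc/samba/smb.conf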
To use the optimized settings from the kernel, remove the socket options parameter from the [global] section in the /etc/samba/smb.conf . 16.1.9. Frequently Used Samba Command-line Utilities This section describes frequently used commands when working with a Samba server. 16.1.9.1. Using the net Utility The net utility enables you to perform several administration tasks on a Samba server. This section describes the most frequently used subcommands of the net utility. For further details, see the net (8) man page. 16.1.9.1.1. Using the net ads join and net rpc join Commands Using the join subcommand of the net utility, you can join Samba to an AD or NT4 domain. To join the domain, you must create the /etc/samba/smb.conf file manually, and optionally update additional configurations, such as PAM. Important Red Hat recommends using the realm utility to join a domain. The realm utility automatically updates all involved configuration files. For details, see Section 16.1.5.1, "Joining a Domain" . To join a domain using the net command: Joining a Domain Using the net Command Manually create the /etc/samba/smb.conf file with the following settings: For an AD domain member: For an NT4 domain member: Add an ID mapping configuration for the * default domain and for the domain you want to join to the [global] section in the /etc/samba/smb.conf . For details, see Section 16.1.5.3, "Understanding ID Mapping" . Verify the /etc/samba/smb.conf file: For details, see Section 16.1.2, "Verifying the smb.conf File by Using the testparm Utility" . Join the domain as the domain administrator: To join an AD domain: To join an NT4 domain: Append the winbind source to the passwd and group database entry in the /etc/nsswitch.conf file: Enable and start the winbind service: Optionally, configure PAM using the authconfig utility. For details, see the Using Pluggable Authentication Modules (PAM) section in the Red Hat System-Level Authentication Guide . Optionally for AD environments, configure the Kerberos client. For details, see the Configuring a Kerberos Client section in the Red Hat System-Level Authentication Guide . 16.1.9.1.2. Using the net rpc rights Command In Windows, you can assign privileges to accounts and groups to perform special operations, such as setting ACLs on a share or uploading printer drivers. On a Samba server, you can use the net rpc rights command to manage privileges. Listing Privileges To list all available privileges and their owners, use the net rpc rights list command. For example: Granting Privileges To grant a privilege to an account or group, use the net rpc rights grant command. For example, to grant the SePrintOperatorPrivilege privilege to the DOMAIN \printadmin group: Revoking Privileges To revoke a privilege from an account or group, use the net rpc rights revoke command. For example, to revoke the SePrintOperatorPrivilege privilege from the DOMAIN \printadmin group: 16.1.9.1.3. Using the net rpc share Command The net rpc share command provides the capability to list, add, and remove shares on a local or remote Samba or Windows server. Listing Shares To list the shares on an SMB server, use the net rpc share list command. Optionally, pass the -S server_name parameter to the command to list the shares of a remote server. For example: Note Shares hosted on a Samba server that have browseable = no set in their section in the /etc/samba/smb.conf file are not displayed in the output. Adding a Share The net rpc share add command enables you to add a share to an SMB server; a general form of the command is sketched below.
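As a sketch of the general form, with the share name, path, credentials, and server name as placeholders (see the net (8) man page for the authoritative syntax):
        # net rpc share add myshare="C:\data" -U "DOMAIN\administrator" -S fileserver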
For example, to add a share named example on a remote Windows server that shares the C:\example\ directory: Note You must omit the trailing backslash in the path when specifying a Windows directory name. To use the command to add a share to a Samba server: The user specified in the -U parameter must have the SeDiskOperatorPrivilege privilege granted. You must write a script that adds a share section to the /etc/samba/smb.conf file and reloads Samba. The script must be set in the add share command parameter in the [global] section in /etc/samba/smb.conf . For further details, see the add share command description in the smb.conf (5) man page. Removing a Share The net rpc share delete command enables you to remove a share from an SMB server. For example, to remove the share named example from a remote Windows server: To use the command to remove a share from a Samba server: The user specified in the -U parameter must have the SeDiskOperatorPrivilege privilege granted. You must write a script that removes the share's section from the /etc/samba/smb.conf file and reloads Samba. The script must be set in the delete share command parameter in the [global] section in /etc/samba/smb.conf . For further details, see the delete share command description in the smb.conf (5) man page. 16.1.9.1.4. Using the net user Command The net user command enables you to perform the following actions on an AD DC or NT4 PDC: List all user accounts Add users Remove Users Note Specifying a connection method, such as ads for AD domains or rpc for NT4 domains, is only required when you list domain user accounts. Other user-related subcommands can auto-detect the connection method. Pass the -U user_name parameter to the command to specify a user that is allowed to perform the requested action. Listing Domain User Accounts To list all users in an AD domain: To list all users in an NT4 domain: Adding a User Account to the Domain On a Samba domain member, you can use the net user add command to add a user account to the domain. For example, add the user account to the domain: Adding a User Account to the Domain Add the account: Optionally, use the remote procedure call (RPC) shell to enable the account on the AD DC or NT4 PDC. For example: Deleting a User Account from the Domain On a Samba domain member, you can use the net user delete command to remove a user account from the domain. For example, to remove the user account from the domain: 16.1.9.1.5. Using the net usershare Command See Section 16.1.6.4, "Enabling Users to Share Directories on a Samba Server" . 16.1.9.2. Using the rpcclient Utility The rpcclient utility enables you to manually execute client-side Microsoft Remote Procedure Call (MS-RPC) functions on a local or remote SMB server. However, most of the features are integrated into separate utilities provided by Samba. Use rpcclient only for testing MS-PRC functions. For example, you can use the utility to: Manage the printer Spool Subsystem (SPOOLSS). Example 16.9. Assigning a Driver to a Printer Retrieve information about an SMB server. Example 16.10. Listing all File Shares and Shared Printers Perform actions using the Security Account Manager Remote (SAMR) protocol. Example 16.11. Listing Users on an SMB Server If you run the command against a standalone server or a domain member, it lists the users in the local database. Running the command against an AD DC or NT4 PDC lists the domain users. For a complete list of supported subcommands, see COMMANDS section in the rpcclient (1) man page. 16.1.9.3. 
Using the samba-regedit Application Certain settings, such as printer configurations, are stored in the registry on the Samba server. You can use the ncurses-based samba-regedit application to edit the registry of a Samba server. To start the application, enter: Use the following keys: Cursor up and cursor down: Navigate through the registry tree and the values. Enter : Opens a key or edits a value. Tab : Switches between the Key and Value pane. Ctrl + C : Closes the application. 16.1.9.4. Using the smbcacls Utility See Section 16.1.6.3, "Managing ACLs on an SMB Share Using smbcacls " . 16.1.9.5. Using the smbclient Utility The smbclient utility enables you to access file shares on an SMB server, similarly to a command-line FTP client. You can use it, for example, to upload and download files to and from a share. For example, to authenticate to the example share hosted on server using the DOMAIN\user account: After smbclient connected successfully to the share, the utility enters the interactive mode and shows the following prompt: To display all available commands in the interactive shell, enter: To display the help for a specific command, enter: For further details and descriptions of the commands available in the interactive shell, see the smbclient (1) man page. 16.1.9.5.1. Using smbclient in Interactive Mode If you use smbclient without the -c parameter, the utility enters the interactive mode. The following procedure shows how to connect to an SMB share and download a file from a subdirectory: Downloading a File from an SMB Share Using smbclient Connect to the share: Change into the /example/ directory: List the files in the directory: Download the example.txt file: Disconnect from the share: 16.1.9.5.2. Using smbclient in Scripting Mode If you pass the -c commands parameter to smbclient , you can automatically execute the commands on the remote SMB share. This enables you to use smbclient in scripts. The following command shows how to connect to an SMB share and download a file from a subdirectory: 16.1.9.6. Using the smbcontrol Utility The smbcontrol utility enables you to send command messages to the smbd , nmbd , winbindd , or all of these services. These control messages instruct the service, for example, to reload its configuration. Example 16.12. Reloading the Configuration of the smbd , nmbd , and winbindd Service For example, to reload the configuration of the smbd , nmbd , winbindd , send the reload-config message-type to the all destination: For further details and a list of available command message types, see the smbcontrol (1) man page. 16.1.9.7. Using the smbpasswd Utility The smbpasswd utility manages user accounts and passwords in the local Samba database. If you run the command as a user, smbpasswd changes the Samba password of the user. For example: If you run smbpasswd as the root user, you can use the utility, for example, to: Create a new user: Note Before you can add a user to the Samba database, you must create the account in the local operating system. See Section 4.3.1, "Adding a New User" Enable a Samba user: Disable a Samba user: Delete a user: For further details, see the smbpasswd (8) man page. 16.1.9.8. Using the smbstatus Utility The smbstatus utility reports on: Connections per PID of each smbd daemon to the Samba server. This report includes the user name, primary group, SMB protocol version, encryption, and signing information. Connections per Samba share. 
This report includes the PID of the smbd daemon, the IP address of the connecting machine, the time stamp when the connection was established, encryption, and signing information. A list of locked files. The report entries include further details, such as opportunistic lock (oplock) types. Example 16.13. Output of the smbstatus Utility For further details, see the smbstatus (1) man page. 16.1.9.9. Using the smbtar Utility The smbtar utility backs up the content of an SMB share or a subdirectory of it and stores the content in a tar archive. Alternatively, you can write the content to a tape device. For example, to back up the content of the demo directory on the //server/example/ share and store the content in the /root/example.tar archive: For further details, see the smbtar (1) man page. 16.1.9.10. Using the testparm Utility See Section 16.1.2, "Verifying the smb.conf File by Using the testparm Utility" . 16.1.9.11. Using the wbinfo Utility The wbinfo utility queries and returns information created and used by the winbindd service. Note The winbindd service must be configured and running to use wbinfo . You can use wbinfo , for example, to: List domain users: List domain groups: Display the SID of a user: Display information about domains and trusts: For further details, see the wbinfo (1) man page. 16.1.10. Additional Resources The Red Hat Samba packages include manual pages for all Samba commands and configuration files the package installs. For example, to display the man page of the /etc/samba/smb.conf file that explains all configuration parameters you can set in this file: /usr/share/docs/samba- version / : Contains general documentation, example scripts, and LDAP schema files, provided by the Samba project. Red Hat Gluster Storage Administration Guide : Provides information about setting up Samba and the Clustered Trivial Database (CTDB) to share directories stored on a GlusterFS volume. The An active/active Samba Server in a Red Hat High Availability Cluster chapter in the Red Hat Enterprise Linux High Availability Add-on Administration guide describes how to set up a Samba high-availability installation. For details about mounting an SMB share on Red Hat Enterprise Linux, see the corresponding section in the Red Hat Storage Administration Guide . 16.2. FTP The File Transfer Protocol ( FTP ) is one of the oldest and most commonly used protocols found on the Internet today. Its purpose is to reliably transfer files between computer hosts on a network without requiring the user to log directly in to the remote host or to have knowledge of how to use the remote system. It allows users to access files on remote systems using a standard set of simple commands. This section outlines the basics of the FTP protocol and introduces vsftpd , which is the preferred FTP server in Red Hat Enterprise Linux. 16.2.1. The File Transfer Protocol FTP uses a client-server architecture to transfer files using the TCP network protocol. Because FTP is a rather old protocol, it uses unencrypted user name and password authentication. For this reason, it is considered an insecure protocol and should not be used unless absolutely necessary. However, because FTP is so prevalent on the Internet, it is often required for sharing files with the public. System administrators, therefore, should be aware of FTP 's unique characteristics. This section describes how to configure vsftpd to establish connections secured by TLS and how to secure an FTP server with the help of SELinux .
A good substitute for FTP is sftp from the OpenSSH suite of tools. For information about configuring OpenSSH and about the SSH protocol in general, refer to Chapter 12, OpenSSH . Unlike most protocols used on the Internet, FTP requires multiple network ports to work properly. When an FTP client application initiates a connection to an FTP server, it opens port 21 on the server - known as the command port . This port is used to issue all commands to the server. Any data requested from the server is returned to the client via a data port . The port number for data connections, and the way in which data connections are initialized, vary depending upon whether the client requests the data in active or passive mode. The following defines these modes: active mode Active mode is the original method used by the FTP protocol for transferring data to the client application. When an active-mode data transfer is initiated by the FTP client, the server opens a connection from port 20 on the server to the IP address and a random, unprivileged port (greater than 1024) specified by the client. This arrangement means that the client machine must be allowed to accept connections over any port above 1024. With the growth of insecure networks, such as the Internet, the use of firewalls for protecting client machines is now prevalent. Because these client-side firewalls often deny incoming connections from active-mode FTP servers, passive mode was devised. passive mode Passive mode, like active mode, is initiated by the FTP client application. When requesting data from the server, the FTP client indicates it wants to access the data in passive mode and the server provides the IP address and a random, unprivileged port (greater than 1024) on the server. The client then connects to that port on the server to download the requested information. While passive mode does resolve issues for client-side firewall interference with data connections, it can complicate administration of the server-side firewall. You can reduce the number of open ports on a server by limiting the range of unprivileged ports on the FTP server. This also simplifies the process of configuring firewall rules for the server. 16.2.2. The vsftpd Server The Very Secure FTP Daemon ( vsftpd ) is designed from the ground up to be fast, stable, and, most importantly, secure. vsftpd is the only stand-alone FTP server distributed with Red Hat Enterprise Linux, due to its ability to handle large numbers of connections efficiently and securely. The security model used by vsftpd has three primary aspects: Strong separation of privileged and non-privileged processes - Separate processes handle different tasks, and each of these processes runs with the minimal privileges required for the task. Tasks requiring elevated privileges are handled by processes with the minimal privilege necessary - By taking advantage of compatibilities found in the libcap library, tasks that usually require full root privileges can be executed more safely from a less privileged process. Most processes run in a chroot jail - Whenever possible, processes are change-rooted to the directory being shared; this directory is then considered a chroot jail. For example, if the /var/ftp/ directory is the primary shared directory, vsftpd reassigns /var/ftp/ to the new root directory, known as / . This disallows any potential malicious hacker activities for any directories not contained in the new root directory. 
Use of these security practices has the following effect on how vsftpd deals with requests: The parent process runs with the least privileges required - The parent process dynamically calculates the level of privileges it requires to minimize the level of risk. Child processes handle direct interaction with the FTP clients and run with as close to no privileges as possible. All operations requiring elevated privileges are handled by a small parent process - Much like the Apache HTTP Server , vsftpd launches unprivileged child processes to handle incoming connections. This allows the privileged, parent process to be as small as possible and handle relatively few tasks. All requests from unprivileged child processes are distrusted by the parent process - Communication with child processes is received over a socket, and the validity of any information from child processes is checked before being acted on. Most interactions with FTP clients are handled by unprivileged child processes in a chroot jail - Because these child processes are unprivileged and only have access to the directory being shared, any crashed processes only allow the attacker access to the shared files. 16.2.2.1. Starting and Stopping vsftpd To start the vsftpd service in the current session, type the following at a shell prompt as root : To stop the service in the current session, type as root : To restart the vsftpd service, run the following command as root : This command stops and immediately starts the vsftpd service, which is the most efficient way to make configuration changes take effect after editing the configuration file for this FTP server. Alternatively, you can use the following command to restart the vsftpd service only if it is already running: By default, the vsftpd service does not start automatically at boot time. To configure the vsftpd service to start at boot time, type the following at a shell prompt as root : For more information on how to manage system services in Red Hat Enterprise Linux 7, see Chapter 10, Managing Services with systemd . 16.2.2.2. Starting Multiple Copies of vsftpd Sometimes, one computer is used to serve multiple FTP domains. This is a technique called multihoming . One way to multihome using vsftpd is by running multiple copies of the daemon, each with its own configuration file. To do this, first assign all relevant IP addresses to network devices or alias network devices on the system. For more information about configuring network devices, device aliases, and additional information about network configuration scripts, see the Red Hat Enterprise Linux 7 Networking Guide . , the DNS server for the FTP domains must be configured to reference the correct machine. For information about BIND , the DNS protocol implementation used in Red Hat Enterprise Linux, and its configuration files, see the Red Hat Enterprise Linux 7 Networking Guide . For vsftpd to answer requests on different IP addresses, multiple copies of the daemon must be running. To facilitate launching multiple instances of the vsftpd daemon, a special systemd service unit ( [email protected] ) for launching vsftpd as an instantiated service is supplied in the vsftpd package. In order to make use of this service unit, a separate vsftpd configuration file for each required instance of the FTP server must be created and placed in the /etc/vsftpd/ directory. 
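For instance, the configuration for a second instance could start as a copy of the default file saved under a new name and restricted to root; the file name matches the example used below:
        # cp /etc/vsftpd/vsftpd.conf /etc/vsftpd/vsftpd-site-2.conf
        # chmod 600 /etc/vsftpd/vsftpd-site-2.conf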
Note that each of these configuration files must have a unique name (such as /etc/vsftpd/ vsftpd-site-2 .conf ) and must be readable and writable only by the root user. Within each configuration file for each FTP server listening on an IPv4 network, the following directive must be unique: Replace N.N.N.N with a unique IP address for the FTP site being served. If the site is using IPv6 , use the listen_address6 directive instead. Once there are multiple configuration files present in the /etc/vsftpd/ directory, individual instances of the vsftpd daemon can be started by executing the following command as root : In the above command, replace configuration-file-name with the unique name of the requested server's configuration file, such as vsftpd-site-2 . Note that the configuration file's .conf extension should not be included in the command. If you want to start several instances of the vsftpd daemon at once, you can make use of a systemd target unit file ( vsftpd.target ), which is supplied in the vsftpd package. This systemd target causes an independent vsftpd daemon to be launched for each available vsftpd configuration file in the /etc/vsftpd/ directory. Execute the following command as root to enable the target: The above command configures the systemd service manager to launch the vsftpd service (along with the configured vsftpd server instances) at boot time. To start the service immediately, without rebooting the system, execute the following command as root : See Section 10.3, "Working with systemd Targets" for more information on how to use systemd targets to manage services. Other directives to consider altering on a per-server basis are: anon_root local_root vsftpd_log_file xferlog_file 16.2.2.3. Encrypting vsftpd Connections Using TLS In order to counter the inherently insecure nature of FTP , which transmits user names, passwords, and data without encryption by default, the vsftpd daemon can be configured to utilize the TLS protocol to authenticate connections and encrypt all transfers. Note that an FTP client that supports TLS is needed to communicate with vsftpd with TLS enabled. Note SSL (Secure Sockets Layer) is the name of an older implementation of the security protocol. The new versions are called TLS (Transport Layer Security). Only the newer versions ( TLS ) should be used as SSL suffers from serious security vulnerabilities. The documentation included with the vsftpd server, as well as the configuration directives used in the vsftpd.conf file, use the SSL name when referring to security-related matters, but TLS is supported and used by default when the ssl_enable directive is set to YES . Set the ssl_enable configuration directive in the vsftpd.conf file to YES to turn on TLS support. The default settings of other TLS -related directives that become automatically active when the ssl_enable option is enabled provide for a reasonably well-configured TLS set up. This includes, among other things, the requirement to only use the TLS v1 protocol for all connections (the use of the insecure SSL protocol versions is disabled by default) or forcing all non-anonymous logins to use TLS for sending passwords and data transfers. Example 16.14. 
Configuring vsftpd to Use TLS In this example, the configuration directives explicitly disable the older SSL versions of the security protocol in the vsftpd.conf file: Restart the vsftpd service after you modify its configuration: See the vsftpd.conf (5) manual page for other TLS -related configuration directives for fine-tuning the use of TLS by vsftpd . 16.2.2.4. SELinux Policy for vsftpd The SELinux policy governing the vsftpd daemon (as well as other ftpd processes), defines a mandatory access control, which, by default, is based on least access required. In order to allow the FTP daemon to access specific files or directories, appropriate labels need to be assigned to them. For example, in order to be able to share files anonymously, the public_content_t label must be assigned to the files and directories to be shared. You can do this using the chcon command as root : In the above command, replace /path/to/directory with the path to the directory to which you want to assign the label. Similarly, if you want to set up a directory for uploading files, you need to assign that particular directory the public_content_rw_t label. In addition to that, the allow_ftpd_anon_write SELinux Boolean option must be set to 1 . Use the setsebool command as root to do that: If you want local users to be able to access their home directories through FTP , which is the default setting on Red Hat Enterprise Linux 7, the ftp_home_dir Boolean option needs to be set to 1 . If vsftpd is to be allowed to run in standalone mode, which is also enabled by default on Red Hat Enterprise Linux 7, the ftpd_is_daemon option needs to be set to 1 as well. See the ftpd_selinux (8) manual page for more information, including examples of other useful labels and Boolean options, on how to configure the SELinux policy pertaining to FTP . Also, see the Red Hat Enterprise Linux 7 SELinux User's and Administrator's Guide for more detailed information about SELinux in general. 16.2.3. Additional Resources For more information about vsftpd , see the following resources. 16.2.3.1. Installed Documentation The /usr/share/doc/vsftpd- version-number / directory - Replace version-number with the installed version of the vsftpd package. This directory contains a README file with basic information about the software. The TUNING file contains basic performance-tuning tips and the SECURITY/ directory contains information about the security model employed by vsftpd . vsftpd -related manual pages - There are a number of manual pages for the daemon and the configuration files. The following lists some of the more important manual pages. Server Applications vsftpd (8) - Describes available command-line options for vsftpd . Configuration Files vsftpd.conf (5) - Contains a detailed list of options available within the configuration file for vsftpd . hosts_access (5) - Describes the format and options available within the TCP wrappers configuration files: hosts.allow and hosts.deny . Interaction with SELinux ftpd_selinux (8) - Contains a description of the SELinux policy governing ftpd processes as well as an explanation of the way SELinux labels need to be assigned and Booleans set. 16.2.3.2. Online Documentation About vsftpd and FTP in General http://vsftpd.beasts.org/ - The vsftpd project page is a great place to locate the latest documentation and to contact the author of the software. http://slacksite.com/other/ftp.html - This website provides a concise explanation of the differences between active and passive-mode FTP . 
Red Hat Enterprise Linux Documentation Red Hat Enterprise Linux 7 Networking Guide - The Networking Guide for Red Hat Enterprise Linux 7 documents relevant information regarding the configuration and administration of network interfaces, networks, and network services in this system. It provides an introduction to the hostnamectl utility and explains how to use it to view and set host names on the command line, both locally and remotely. Red Hat Enterprise Linux 7 SELinux User's and Administrator's Guide - The SELinux User's and Administrator's Guide for Red Hat Enterprise Linux 7 describes the basic principles of SELinux and documents in detail how to configure and use SELinux with various services such as the Apache HTTP Server , Postfix , PostgreSQL , or OpenShift . It explains how to configure SELinux access permissions for system services managed by systemd . Red Hat Enterprise Linux 7 Security Guide - The Security Guide for Red Hat Enterprise Linux 7 assists users and administrators in learning the processes and practices of securing their workstations and servers against local and remote intrusion, exploitation, and malicious activity. It also explains how to secure critical system services. Relevant RFC Documents RFC 0959 - The original Request for Comments ( RFC ) of the FTP protocol from the IETF . RFC 1123 - The small FTP -related section extends and clarifies RFC 0959. RFC 2228 - FTP security extensions. vsftpd implements the small subset needed to support TLS and SSL connections. RFC 2389 - Proposes FEAT and OPTS commands. RFC 2428 - IPv6 support. 16.3. Print Settings The Print Settings tool serves for printer configuring, maintenance of printer configuration files, print spool directories and print filters, and printer classes management. The tool is based on the Common Unix Printing System ( CUPS ). If you upgraded the system from a Red Hat Enterprise Linux version that used CUPS, the upgrade process preserved the configured printers. Important The cupsd.conf man page documents configuration of a CUPS server. It includes directives for enabling SSL support. However, CUPS does not allow control of the protocol versions used. Due to the vulnerability described in Resolution for POODLE SSLv3.0 vulnerability (CVE-2014-3566) for components that do not allow SSLv3 to be disabled via configuration settings , Red Hat recommends that you do not rely on this for security. It is recommend that you use stunnel to provide a secure tunnel and disable SSLv3 . For more information on using stunnel , see the Red Hat Enterprise Linux 7 Security Guide . For ad-hoc secure connections to a remote system's Print Settings tool, use X11 forwarding over SSH as described in Section 12.4.1, "X11 Forwarding" . Note You can perform the same and additional operations on printers directly from the CUPS web application or command line. To access the application, in a web browser, go to http://localhost:631/ . For CUPS manuals refer to the links on the Home tab of the web site. 16.3.1. Starting the Print Settings Configuration Tool With the Print Settings configuration tool you can perform various operations on existing printers and set up new printers. You can also use CUPS directly (go to http://localhost:631/ to access the CUPS web application). To start the Print Settings tool from the command line, type system-config-printer at a shell prompt. The Print Settings tool appears. 
Alternatively, if using the GNOME desktop, press the Super key to enter the Activities Overview, type Print Settings and then press Enter . The Print Settings tool appears. The Super key appears in a variety of guises, depending on the keyboard and other hardware, but often as either the Windows or Command key, and typically to the left of the Spacebar . The Print Settings window depicted in Figure 16.1, "Print Settings window" appears. Figure 16.1. Print Settings window 16.3.2. Starting Printer Setup Printer setup process varies depending on the printer queue type. If you are setting up a local printer connected with USB, the printer is discovered and added automatically. You will be prompted to confirm the packages to be installed and provide an administrator or the root user password. Local printers connected with other port types and network printers need to be set up manually. Follow this procedure to start a manual printer setup: Start the Print Settings tool (refer to Section 16.3.1, "Starting the Print Settings Configuration Tool" ). Go to Server New Printer . In the Authenticate dialog box, enter an administrator or root user password. If this is the first time you have configured a remote printer you will be prompted to authorize an adjustment to the firewall. Select the printer connection type and provide its details in the area on the right. 16.3.3. Adding a Local Printer Follow this procedure to add a local printer connected with other than a serial port: Open the Add printer dialog (refer to Section 16.3.2, "Starting Printer Setup" ). If the device does not appear automatically, select the port to which the printer is connected in the list on the left (such as Serial Port #1 or LPT #1 ). On the right, enter the connection properties: for Other URI (for example file:/dev/lp0) for Serial Port Baud Rate Parity Data Bits Flow Control Figure 16.2. Adding a local printer Click Forward . Select the printer model. See Section 16.3.8, "Selecting the Printer Model and Finishing" for details. 16.3.4. Adding an AppSocket/HP JetDirect printer Follow this procedure to add an AppSocket/HP JetDirect printer: Open the New Printer dialog (refer to Section 16.3.1, "Starting the Print Settings Configuration Tool" ). In the list on the left, select Network Printer AppSocket/HP JetDirect . On the right, enter the connection settings: Hostname Printer host name or IP address. Port Number Printer port listening for print jobs ( 9100 by default). Figure 16.3. Adding a JetDirect printer Click Forward . Select the printer model. See Section 16.3.8, "Selecting the Printer Model and Finishing" for details. 16.3.5. Adding an IPP Printer An IPP printer is a printer attached to a different system on the same TCP/IP network. The system this printer is attached to may either be running CUPS or simply configured to use IPP . If a firewall is enabled on the printer server, then the firewall must be configured to allow incoming TCP connections on port 631 . Note that the CUPS browsing protocol allows client machines to discover shared CUPS queues automatically. To enable this, the firewall on the client machine must be configured to allow incoming UDP packets on port 631 . Follow this procedure to add an IPP printer: Open the New Printer dialog (refer to Section 16.3.2, "Starting Printer Setup" ). In the list of devices on the left, select Network Printer and Internet Printing Protocol (ipp) or Internet Printing Protocol (https) . 
On the right, enter the connection settings: Host The host name of the IPP printer. Queue The queue name to be given to the new queue (if the box is left empty, a name based on the device node will be used). Figure 16.4. Adding an IPP printer Click Forward to continue. Select the printer model. See Section 16.3.8, "Selecting the Printer Model and Finishing" for details. 16.3.6. Adding an LPD/LPR Host or Printer Follow this procedure to add an LPD/LPR host or printer: Open the New Printer dialog (refer to Section 16.3.2, "Starting Printer Setup" ). In the list of devices on the left, select Network Printer LPD/LPR Host or Printer . On the right, enter the connection settings: Host The host name of the LPD/LPR printer or host. Optionally, click Probe to find queues on the LPD host. Queue The queue name to be given to the new queue (if the box is left empty, a name based on the device node will be used). Figure 16.5. Adding an LPD/LPR printer Click Forward to continue. Select the printer model. See Section 16.3.8, "Selecting the Printer Model and Finishing" for details. 16.3.7. Adding a Samba (SMB) printer Follow this procedure to add a Samba printer: Note Note that in order to add a Samba printer, you need to have the samba-client package installed. You can do so by running, as root : For more information on installing packages with Yum, refer to Section 9.2.4, "Installing Packages" . Open the New Printer dialog (refer to Section 16.3.2, "Starting Printer Setup" ). In the list on the left, select Network Printer Windows Printer via SAMBA . Enter the SMB address in the smb:// field. Use the format computer name/printer share . In Figure 16.6, "Adding a SMB printer" , the computer name is dellbox and the printer share is r2 . Figure 16.6. Adding a SMB printer Click Browse to see the available workgroups/domains. To display only queues of a particular host, type in the host name (NetBios name) and click Browse . Select either of the options: Prompt user if authentication is required : user name and password are collected from the user when printing a document. Set authentication details now : provide authentication information now so it is not required later. In the Username field, enter the user name to access the printer. This user must exist on the SMB system, and the user must have permission to access the printer. The default user name is typically guest for Windows servers, or nobody for Samba servers. Enter the Password (if required) for the user specified in the Username field. Warning Samba printer user names and passwords are stored in the printer server as unencrypted files readable by root and the Linux Printing Daemon, lpd . Thus, other users that have root access to the printer server can view the user name and password you use to access the Samba printer. Therefore, when you choose a user name and password to access a Samba printer, it is advisable that you choose a password that is different from what you use to access your local Red Hat Enterprise Linux system. If there are files shared on the Samba print server, it is recommended that they also use a password different from what is used by the print queue. Click Verify to test the connection. Upon successful verification, a dialog box appears confirming printer share accessibility. Click Forward . Select the printer model. See Section 16.3.8, "Selecting the Printer Model and Finishing" for details. 16.3.8. 
Selecting the Printer Model and Finishing Once you have properly selected a printer connection type, the system attempts to acquire a driver. If the process fails, you can locate or search for the driver resources manually. Follow this procedure to provide the printer driver and finish the installation: In the window displayed after the automatic driver detection has failed, select one of the following options: Select a Printer from database - the system chooses a driver based on the selected make of your printer from the list of Makes . If your printer model is not listed, choose Generic . Provide PPD file - the system uses the provided PostScript Printer Description ( PPD ) file for installation. A PPD file is normally provided by the manufacturer and may also be delivered with your printer. If the PPD file is available, you can choose this option and use the browser bar below the option description to select the PPD file. Search for a printer driver to download - enter the make and model of your printer into the Make and model field to search on OpenPrinting.org for the appropriate packages. Figure 16.7. Selecting a printer brand Depending on your choice, provide details in the area displayed below: Printer brand for the Select printer from database option. PPD file location for the Provide PPD file option. Printer make and model for the Search for a printer driver to download option. Click Forward to continue. If applicable for your option, the window shown in Figure 16.8, "Selecting a printer model" appears. Choose the corresponding model in the Models column on the left. Note On the right, the recommended printer driver is automatically selected; however, you can select another available driver. The print driver processes the data that you want to print into a format the printer can understand. Since a local printer is attached directly to your computer, you need a printer driver to process the data that is sent to the printer. Figure 16.8. Selecting a printer model Click Forward . Under Describe Printer , enter a unique name for the printer in the Printer Name field. The printer name can contain letters, numbers, dashes (-), and underscores (_); it must not contain any spaces. You can also use the Description and Location fields to add further printer information. Both fields are optional, and may contain spaces. Figure 16.9. Printer setup Click Apply to confirm your printer configuration and add the print queue if the settings are correct. Click Back to modify the printer configuration. After the changes are applied, a dialog box appears allowing you to print a test page. Click Yes to print a test page now. Alternatively, you can print a test page later as described in Section 16.3.9, "Printing a Test Page" . 16.3.9. Printing a Test Page After you have set up a printer or changed a printer configuration, print a test page to make sure the printer is functioning properly: Right-click the printer in the Printing window and click Properties . In the Properties window, click Settings on the left. On the displayed Settings tab, click the Print Test Page button. 16.3.10. Modifying Existing Printers To delete an existing printer, in the Print Settings window, select the printer and go to Printer Delete . Confirm the printer deletion. Alternatively, press the Delete key. To set the default printer, right-click the printer in the printer list and click the Set as Default button in the context menu. 16.3.10.1.
The Settings Page To change printer driver configuration, double-click the corresponding name in the Printer list and click the Settings label on the left to display the Settings page. You can modify printer settings such as make and model, print a test page, change the device location (URI), and more. Figure 16.10. Settings page 16.3.10.2. The Policies Page Click the Policies button on the left to change settings in printer state and print output. You can select the printer states, configure the Error Policy of the printer (you can decide to abort the print job, retry, or stop it if an error occurs). You can also create a banner page (a page that describes aspects of the print job such as the originating printer, the user name from the which the job originated, and the security status of the document being printed): click the Starting Banner or Ending Banner drop-down menu and choose the option that best describes the nature of the print jobs (for example, confidential ). 16.3.10.2.1. Sharing Printers On the Policies page, you can mark a printer as shared: if a printer is shared, users published on the network can use it. To allow the sharing function for printers, go to Server Settings and select Publish shared printers connected to this system . Figure 16.11. Policies page Make sure that the firewall allows incoming TCP connections to port 631 , the port for the Network Printing Server ( IPP ) protocol. To allow IPP traffic through the firewall on Red Hat Enterprise Linux 7, make use of firewalld 's IPP service. To do so, proceed as follows: Enabling IPP Service in firewalld To start the graphical firewall-config tool, press the Super key to enter the Activities Overview, type firewall and then press Enter . The Firewall Configuration window opens. You will be prompted for an administrator or root password. Alternatively, to start the graphical firewall configuration tool using the command line, enter the following command as root user: The Firewall Configuration window opens. Look for the word "Connected" in the lower left corner. This indicates that the firewall-config tool is connected to the user space daemon, firewalld . To immediately change the current firewall settings, ensure the drop-down selection menu labeled Configuration is set to Runtime . Alternatively, to edit the settings to be applied at the system start, or firewall reload, select Permanent from the drop-down list. Select the Zones tab and then select the firewall zone to correspond with the network interface to be used. The default is the public zone. The Interfaces tab shows what interfaces have been assigned to a zone. Select the Services tab and then select the ipp service to enable sharing. The ipp-client service is required for accessing network printers. Close the firewall-config tool. For more information on opening and closing ports in firewalld , see the Red Hat Enterprise Linux 7 Security Guide . 16.3.10.2.2. The Access Control Page You can change user-level access to the configured printer on the Access Control page. Click the Access Control label on the left to display the page. Select either Allow printing for everyone except these users or Deny printing for everyone except these users and define the user set below: enter the user name in the text box and click the Add button to add the user to the user set. Figure 16.12. Access Control page 16.3.10.2.3. 
The Printer Options Page The Printer Options page contains various configuration options for the printer media and output, and its content may vary from printer to printer. It contains general printing, paper, quality, and printing size settings. Figure 16.13. Printer Options page 16.3.10.2.4. Job Options Page On the Job Options page, you can detail the printer job options. Click the Job Options label on the left to display the page. Edit the default settings to apply custom job options, such as number of copies, orientation, pages per side, scaling (increase or decrease the size of the printable area, which can be used to fit an oversize print area onto a smaller physical sheet of print medium), detailed text options, and custom job options. Figure 16.14. Job Options page 16.3.10.2.5. Ink/Toner Levels Page The Ink/Toner Levels page contains details on toner status, if available, and printer status messages. Click the Ink/Toner Levels label on the left to display the page. Figure 16.15. Ink/Toner Levels page 16.3.10.3. Managing Print Jobs When you send a print job to the printer daemon, such as printing a text file from Emacs or printing an image from GIMP , the print job is added to the print spool queue. The print spool queue is a list of print jobs that have been sent to the printer and information about each print request, such as the status of the request, the job number, and more. During the printing process, the Printer Status icon appears in the Notification Area on the panel. To check the status of a print job, click the Printer Status icon, which displays a window similar to Figure 16.16, "GNOME Print Status" . Figure 16.16. GNOME Print Status To cancel, hold, release, reprint or authenticate a print job, select the job in the GNOME Print Status window and, on the Job menu, click the respective command. To view the list of print jobs in the print spool from a shell prompt, type the command lpstat -o . The last few lines look similar to the following: Example 16.15. Example of lpstat -o output If you want to cancel a print job, find the job number of the request with the command lpstat -o and then use the command cancel job_number . For example, cancel 60 would cancel the print job in Example 16.15, "Example of lpstat -o output" . You cannot cancel print jobs that were started by other users with the cancel command. However, you can enforce deletion of such a job by issuing the cancel -U root job_number command. To prevent such canceling, change the printer operation policy to Authenticated to force root authentication. You can also print a file directly from a shell prompt. For example, the command lp sample.txt prints the text file sample.txt . The print filter determines what type of file it is and converts it into a format the printer can understand. 16.3.11. Additional Resources To learn more about printing on Red Hat Enterprise Linux, see the following resources. Installed Documentation lp(1) - The manual page for the lp command that allows you to print files from the command line. lpr(1) - The manual page for the lpr command that allows you to print files from the command line. cancel(1) - The manual page for the command-line utility to remove print jobs from the print queue. mpage(1) - The manual page for the command-line utility to print multiple pages on one sheet of paper. cupsd(8) - The manual page for the CUPS printer daemon. cupsd.conf(5) - The manual page for the CUPS printer daemon configuration file. classes.conf(5) - The manual page for the class configuration file for CUPS.
lpstat(1) - The manual page for the lpstat command, which displays status information about classes, jobs, and printers. Online Documentation http://www.linuxprinting.org/ - The OpenPrinting group on the Linux Foundation website contains a large amount of information about printing in Linux. http://www.cups.org/ - The CUPS website provides documentation, FAQs, and newsgroups about CUPS. | [
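"# Editor's illustrative addition, not part of the original command listing: list the queued print jobs and cancel the sample job 60 shown in Example 16.15: lpstat -o ; cancel 60",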
"~]# testparm Load smb config files from /etc/samba/smb.conf rlimit_max: increasing rlimit_max (1024) to minimum Windows limit (16384) Unknown parameter encountered: \"log levell\" Processing section \"[example_share]\" Loaded services file OK. ERROR: The idmap range for the domain * (tdb) overlaps with the range of DOMAIN (ad)! Server role: ROLE_DOMAIN_MEMBER Press enter to see a dump of your service definitions Global parameters [global] [example_share]",
"~]# yum install samba",
"[global] workgroup = Example-WG netbios name = Server security = user log file = /var/log/samba/%m.log log level = 1",
"~]# testparm",
"~]# firewall-cmd --permanent --add-port={139/tcp,445/tcp} ~]# firewall-cmd --reload",
"~]# systemctl start smb",
"~]# systemctl enable smb",
"~]# useradd -M -s /sbin/nologin example",
"~]# passwd example Enter new UNIX password: password Retype new UNIX password: password passwd: password updated successfully",
"~]# smbpasswd -a example New SMB password: password Retype new SMB password: password Added user example .",
"~]# smbpasswd -e example Enabled user example .",
"~]# yum install realmd oddjob-mkhomedir oddjob samba-winbind-clients samba-winbind samba-common-tools",
"~]# yum install samba",
"~]# yum install samba-winbind-krb5-locator",
"~]# mv /etc/samba/smb.conf /etc/samba/smb.conf.old",
"~]# realm join --membership-software=samba --client-software=winbind ad.example.com",
"~]# systemctl status winbind",
"~]# systemctl start smb",
"~]# getent passwd AD\\\\administrator AD\\administrator:*:10000:10000::/home/administrator@AD:/bin/bash",
"~]# getent group \"AD\\\\Domain Users\" AD\\domain users:x:10000:user",
"~]# chown \"AD\\administrator\":\"AD\\Domain Users\" /srv/samba/example.txt",
"~]# kinit [email protected]",
"~]# klist Ticket cache: KEYRING:persistent:0:0 Default principal: [email protected] Valid starting Expires Service principal 11.09.2017 14:46:21 12.09.2017 00:46:21 krbtgt/[email protected] renew until 18.09.2017 14:46:19",
"~]# wbinfo --all-domains",
"~]# wbinfo --all-domains BUILTIN SAMBA-SERVER AD",
"[global] idmap config * : backend = tdb idmap config * : range = 10000-999999 idmap config AD-DOM :backend = rid idmap config AD-DOM :range = 2000000-2999999 idmap config TRUST-DOM :backend = rid idmap config TRUST-DOM :range = 4000000-4999999",
"idmap config * : backend = tdb idmap config * : range = 10000-999999",
"idmap config * : backend = autorid idmap config * : range = 10000-999999",
"idmap config * : backend = tdb idmap config * : range = 10000-999999",
"idmap config * : backend = tdb idmap config * : range = 10000-999999",
"idmap config DOMAIN : backend = ad",
"idmap config DOMAIN : range = 2000000-2999999",
"idmap config DOMAIN : schema_mode = rfc2307",
"idmap config DOMAIN : unix_nss_info = yes",
"template shell = /bin/bash template homedir = /home/%U",
"idmap config DOMAIN : unix_primary_group = yes",
"~]# testparm",
"~]# smbcontrol all reload-config",
"idmap config * : backend = tdb idmap config * : range = 10000-999999",
"idmap config DOMAIN : backend = rid",
"idmap config DOMAIN : range = 2000000-2999999",
"template shell = /bin/bash template homedir = /home/%U",
"~]# testparm",
"~]# smbcontrol all reload-config",
"idmap config * : backend = autorid",
"idmap config * : range = 10000-999999",
"idmap config * : rangesize = 200000",
"template shell = /bin/bash template homedir = /home/%U",
"~]# testparm",
"~]# smbcontrol all reload-config",
"~]# mkdir -p /srv/samba/example/",
"~]# semanage fcontext -a -t samba_share_t \"/srv/samba/example(/.*)?\" ~]# restorecon -Rv /srv/samba/example/",
"[example] path = /srv/samba/example/ read only = no",
"~]# testparm",
"~]# firewall-cmd --permanent --add-service=samba ~]# firewall-cmd --reload",
"~]# systemctl restart smb",
"~]# systemctl enable smb",
"~]# chown root:\"Domain Users\" /srv/samba/example/ ~]# chmod 2770 /srv/samba/example/",
"inherit acls = yes",
"~]# systemctl restart smb",
"~]# systemctl enable smb",
"~]# setfacl -m group::--- /srv/samba/example/ ~]# setfacl -m default:group::--- /srv/samba/example/",
"~]# setfacl -m group:\" DOMAIN \\Domain Admins\":rwx /srv/samba/example/",
"~]# setfacl -m group:\" DOMAIN \\Domain Users\":r-x /srv/samba/example/",
"~]# setfacl -R -m other::--- /srv/samba/example/",
"~]# setfacl -m default:group:\" DOMAIN \\Domain Admins\":rwx /srv/samba/example/ ~]# setfacl -m default:group:\" DOMAIN \\Domain Users\":r-x /srv/samba/example/ ~]# setfacl -m default:other::--- /srv/samba/example/",
"valid users = + DOMAIN \\\"Domain Users\" invalid users = DOMAIN \\user",
"hosts allow = 127.0.0.1 192.0.2.0/24 client1.example.com hosts deny = client2.example.com",
"~]# smbcontrol all reload-config",
"~]# net rpc rights grant \" DOMAIN \\Domain Admins\" SeDiskOperatorPrivilege -U \" DOMAIN \\administrator\" Enter DOMAIN \\administrator's password: Successfully granted rights.",
"~]# net rpc rights list privileges SeDiskOperatorPrivilege -U \" DOMAIN \\administrator\" Enter administrator's password: SeDiskOperatorPrivilege: BUILTIN\\Administrators DOMAIN \\Domain Admins",
"vfs objects = acl_xattr map acl inherit = yes store dos attributes = yes",
"~]# mkdir -p /srv/samba/example/",
"~]# semanage fcontext -a -t samba_share_t \"/srv/samba/example(/.*)?\" ~]# restorecon -Rv /srv/samba/example/",
"[example] path = /srv/samba/example/ read only = no",
"vfs objects = acl_xattr map acl inherit = yes store dos attributes = yes",
"~]# testparm",
"~]# firewall-cmd --permanent --add-service=samba ~]# firewall-cmd --reload",
"~]# systemctl restart smb",
"~]# systemctl enable smb",
"security_principal : access_right / inheritance_information / permissions",
"AD\\Domain Users:ALLOWED/OI|CI/CHANGE",
"~]# smbcacls //server/example / -U \" DOMAIN pass:quotes[ administrator ]\" Enter DOMAIN pass:quotes[ administrator ]'s password: REVISION:1 CONTROL:SR|PD|DI|DP OWNER:AD\\Administrators GROUP:AD\\Domain Users ACL:AD\\Administrator:ALLOWED/OI|CI/FULL ACL:AD\\Domain Users:ALLOWED/OI|CI/CHANGE ACL:AD\\Domain Guests:ALLOWED/OI|CI/0x00100021",
"~]# echo USD(printf '0x%X' USD hex_value_1 | hex_value_2 | ...)",
"~]# echo USD(printf '0x%X' USD(( 0x00100020 | 0x00100001 | 0x00100080 ))) 0x1000A1",
"~]# smbcacls //server/example / -U \" DOMAIN \\administrator --add ACL:\"AD\\Domain Users\":ALLOWED/OI|CI/CHANGE",
"ACL for SID principal_name not found",
"~]# smbcacls //server/example / -U \" DOMAIN \\administrator --modify ACL:\"AD\\Domain Users\":ALLOWED/OI|CI/READ",
"~]# smbcacls //server/example / -U \" DOMAIN \\administrator --delete ACL:\"AD\\Domain Users\":ALLOWED/OI|CI/READ",
"~]# groupadd example",
"~]# mkdir -p /var/lib/samba/usershares/",
"~]# chgrp example /var/lib/samba/usershares/ ~]# chmod 1770 /var/lib/samba/usershares/",
"usershare path = /var/lib/samba/usershares/",
"usershare max shares = 100",
"usershare prefix allow list = /data /srv",
"~]# testparm",
"~]# smbcontrol all reload-config",
"~]USD net usershare add example /srv/samba/ \"\" \"AD\\Domain Users\":F,Everyone:R guest_ok=yes",
"~]USD net usershare info -l [ share_1 ] path=/srv/samba/ comment= usershare_acl=Everyone:R, host_name \\user:F, guest_ok=y",
"~]USD net usershare info -l share *_",
"~]USD net usershare list -l share_1 share_2",
"~]USD net usershare list -l share_*",
"~]USD net usershare delete share_name",
"-rw-r--r--. 1 root root 1024 1. Sep 10:00 file1.txt -rw-r-----. 1 nobody root 1024 1. Sep 10:00 file2.txt -rw-r-----. 1 root root 1024 1. Sep 10:00 file3.txt",
"[global] map to guest = Bad User",
"[global] guest account = user_name",
"[example] guest ok = yes",
"~]# testparm",
"~]# smbcontrol all reload-config",
"rpc_server:spoolss = external rpc_daemon:spoolssd = fork",
"~]# testparm",
"~]# systemctl restart smb",
"~]# ps axf 30903 smbd 30912 \\_ smbd 30913 \\_ smbd 30914 \\_ smbd 30915 \\_ smbd",
"rpc_server:spoolss = external rpc_daemon:spoolssd = fork",
"[printers] comment = All Printers path = /var/tmp/ printable = yes create mask = 0600",
"~]# testparm",
"~]# firewall-cmd --permanent --add-service=samba ~]# firewall-cmd --reload",
"~]# systemctl restart smb",
"load printers = no",
"[ Example-Printer ] path = /var/tmp/ printable = yes printer name = example",
"~]# testparm",
"~]# smbcontrol all reload-config",
"~]# net rpc rights grant \"printadmin\" SePrintOperatorPrivilege -U \" DOMAIN \\administrator\" Enter DOMAIN \\administrator's password: Successfully granted rights.",
"~]# net rpc rights list privileges SePrintOperatorPrivilege -U \" DOMAIN \\administrator\" Enter administrator's password: SePrintOperatorPrivilege: BUILTIN\\Administrators DOMAIN \\printadmin",
"[printUSD] path = /var/lib/samba/drivers/ read only = no write list = @printadmin force group = @printadmin create mask = 0664 directory mask = 2775",
"spoolss: architecture = Windows x64",
"~]# testparm",
"~]# smbcontrol all reload-config",
"~]# groupadd printadmin",
"~]# net rpc rights grant \"printadmin\" SePrintOperatorPrivilege -U \" DOMAIN \\administrator\" Enter DOMAIN \\administrator's password: Successfully granted rights.",
"~]# semanage fcontext -a -t samba_share_t \"/var/lib/samba/drivers(/.*)?\" ~]# restorecon -Rv /var/lib/samba/drivers/",
"~]# chgrp -R \"printadmin\" /var/lib/samba/drivers/ ~]# chmod -R 2775 /var/lib/samba/drivers/",
"case sensitive = true default case = lower preserve case = no short preserve case = no",
"~]# smbcontrol all reload-config",
"[global] workgroup = domain_name security = ads passdb backend = tdbsam realm = AD_REALM",
"[global] workgroup = domain_name security = user passdb backend = tdbsam",
"~]# testparm",
"~]# net ads join -U \" DOMAIN pass:quotes[ administrator ]\"",
"~]# net rpc join -U \" DOMAIN pass:quotes[ administrator ]\"",
"passwd: files winbind group: files winbind",
"~]# systemctl enable winbind ~]# systemctl start winbind",
"net rpc rights list -U \" DOMAIN pass:attributes[{blank}] administrator \" Enter DOMAIN pass:attributes[{blank}] administrator 's password: SeMachineAccountPrivilege Add machines to domain SeTakeOwnershipPrivilege Take ownership of files or other objects SeBackupPrivilege Back up files and directories SeRestorePrivilege Restore files and directories SeRemoteShutdownPrivilege Force shutdown from a remote system SePrintOperatorPrivilege Manage printers SeAddUsersPrivilege Add users and groups to the domain SeDiskOperatorPrivilege Manage disk shares SeSecurityPrivilege System security",
"~]# net rpc rights grant \" DOMAIN \\printadmin\" SePrintOperatorPrivilege -U \" DOMAIN \\administrator\" Enter DOMAIN \\administrator's password: Successfully granted rights.",
"~]# net rpc rights remoke \" DOMAIN \\printadmin\" SePrintOperatorPrivilege -U \" DOMAIN \\administrator\" Enter DOMAIN \\administrator's password: Successfully revoked rights.",
"~]# net rpc share list -U \" DOMAIN \\administrator\" -S example Enter DOMAIN \\administrator's password: IPCUSD share_1 share_2",
"~]# net rpc share add example=\"C:\\example\" -U \" DOMAIN \\administrator\" -S server",
"~]# net rpc share delete example -U \" DOMAIN \\administrator\" -S server",
"~]# net ads user -U \" DOMAIN \\administrator\"",
"~]# net rpc user -U \" DOMAIN \\administrator\"",
"~]# net user add user password -U \" DOMAIN \\administrator\" User user added",
"~]# net rpc shell -U DOMAIN \\administrator -S DC_or_PDC_name Talking to domain DOMAIN (S-1-5-21-1424831554-512457234-5642315751) net rpc> user edit disabled user no Set user 's disabled flag from [yes] to [no] net rpc> exit",
"~]# net user delete user -U \" DOMAIN \\administrator\" User user deleted",
"~]# rpcclient server_name -U \" DOMAIN pass:quotes[ administrator ]\" -c 'setdriver \" printer_name \" \" driver_name \"' Enter DOMAIN pass:quotes[ administrator ]s password: Successfully set printer_name to driver driver_name .",
"~]# rpcclient server_name -U \" DOMAIN pass:quotes[ administrator ]\" -c 'netshareenum' Enter DOMAIN pass:quotes[ administrator ]s password: netname: Example_Share remark: path: C:\\srv\\samba\\example_share password: netname: Example_Printer remark: path: C:\\var\\spool\\samba password:",
"~]# rpcclient server_name -U \" DOMAIN pass:quotes[ administrator ]\" -c 'enumdomusers' Enter DOMAIN pass:quotes[ administrator ]s password: user:[user1] rid:[0x3e8] user:[user2] rid:[0x3e9]",
"~]# samba-regedit",
"~]# smbclient -U \" DOMAIN\\user \" // server / example Enter domain \\user's password: Domain=[SERVER] OS=[Windows 6.1] Server=[Samba 4.6.2] smb: \\>",
"smb: \\>",
"smb: \\> help",
"smb: \\> help command_name",
"~]# smbclient -U \" DOMAIN pass:quotes[ user_name ]\" // server_name / share_name",
"smb: \\> cd /example/",
"smb: \\example\\> ls . D 0 Mon Sep 1 10:00:00 2017 .. D 0 Mon Sep 1 10:00:00 2017 example.txt N 1048576 Mon Sep 1 10:00:00 2017 9950208 blocks of size 1024. 8247144 blocks available",
"smb: \\example\\> get example.txt getting file \\directory\\subdirectory\\example.txt of size 1048576 as example.txt (511975,0 KiloBytes/sec) (average 170666,7 KiloBytes/sec)",
"smb: \\example\\> exit",
"~]# smbclient -U DOMAIN pass:quotes[ user_name ] // server_name / share_name -c \"cd /example/ ; get example.txt ; exit\"",
"~]# smbcontrol all reload-config",
"[user@server ~]USD smbpasswd New SMB password: Retype new SMB password:",
"smbpasswd -a user_name New SMB password: Retype new SMB password: Added user user_name .",
"smbpasswd -e user_name Enabled user user_name .",
"smbpasswd -x user_name Disabled user user_name .",
"smbpasswd -x user_name Deleted user user_name .",
"~]# smbstatus Samba version 4.6.2 PID Username Group Machine Protocol Version Encryption Signing ----------------------------------------------------------------------------------------------------------------------------- 963 DOMAIN \\administrator DOMAIN \\domain users client-pc (ipv4:192.0.2.1:57786) SMB3_02 - AES-128-CMAC Service pid Machine Connected at Encryption Signing: ------------------------------------------------------------------------------- example 969 192.0.2.1 Mo Sep 1 10:00:00 2017 CEST - AES-128-CMAC Locked files: Pid Uid DenyMode Access R/W Oplock SharePath Name Time ------------------------------------------------------------------------------------------------------------ 969 10000 DENY_WRITE 0x120089 RDONLY LEASE(RWH) /srv/samba/example file.txt Mon Sep 1 10:00:00 2017",
"~]# smbtar -s server -x example -u user_name -p password -t /root/example.tar",
"~]# wbinfo -u AD\\administrator AD\\guest",
"~]# wbinfo -g AD\\domain computers AD\\domain admins AD\\domain users",
"~]# wbinfo --name-to-sid=\"AD\\administrator\" S-1-5-21-1762709870-351891212-3141221786-500 SID_USER (1)",
"~]# wbinfo --trusted-domains --verbose Domain Name DNS Domain Trust Type Transitive In Out BUILTIN None Yes Yes Yes server None Yes Yes Yes DOMAIN1 domain1.example.com None Yes Yes Yes DOMAIN2 domain2.example.com External No Yes Yes",
"~]# man 5 smb.conf",
"~]# systemctl start vsftpd.service",
"~]# systemctl stop vsftpd.service",
"~]# systemctl restart vsftpd.service",
"~]# systemctl try-restart vsftpd.service",
"~]# systemctl enable vsftpd.service Created symlink from /etc/systemd/system/multi-user.target.wants/vsftpd.service to /usr/lib/systemd/system/vsftpd.service.",
"listen_address=N.N.N.N",
"~]# systemctl start [email protected]",
"~]# systemctl enable vsftpd.target Created symlink from /etc/systemd/system/multi-user.target.wants/vsftpd.target to /usr/lib/systemd/system/vsftpd.target.",
"~]# systemctl start vsftpd.target",
"ssl_enable=YES ssl_tlsv1=YES ssl_sslv2=NO ssl_sslv3=NO",
"~]# systemctl restart vsftpd.service",
"~]# chcon -R -t public_content_t /path/to/directory",
"~]# setsebool -P allow_ftpd_anon_write=1",
"{blank}",
"{blank}",
"{blank}",
"{blank}",
"{blank}",
"{blank}",
"install samba-client",
"~]# firewall-config",
"lpstat -o Charlie-60 twaugh 1024 Tue 08 Feb 2011 16:42:11 GMT Aaron-61 twaugh 1024 Tue 08 Feb 2011 16:42:44 GMT Ben-62 root 1024 Tue 08 Feb 2011 16:45:42 GMT"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system_administrators_guide/ch-file_and_print_servers |
Chapter 4. Installing a cluster with RHEL KVM on IBM Z and IBM LinuxONE | Chapter 4. Installing a cluster with RHEL KVM on IBM Z and IBM LinuxONE In OpenShift Container Platform version 4.16, you can install a cluster on IBM Z(R) or IBM(R) LinuxONE infrastructure that you provision. Note While this document refers only to IBM Z(R), all information in it also applies to IBM(R) LinuxONE. Important Additional considerations exist for non-bare metal platforms. Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you install an OpenShift Container Platform cluster. 4.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . Before you begin the installation process, you must clean the installation directory. This ensures that the required installation files are created and updated during the installation process. You provisioned persistent storage using OpenShift Data Foundation or other supported storage protocols for your cluster. To deploy a private image registry, you must set up persistent storage with ReadWriteMany access. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. You provisioned a RHEL Kernel Virtual Machine (KVM) system that is hosted on the logical partition (LPAR) and based on RHEL 8.6 or later. See Red Hat Enterprise Linux 8 and 9 Life Cycle . 4.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.16, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 4.3. Machine requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. One or more KVM host machines based on RHEL 8.6 or later. Each RHEL KVM host machine must have libvirt installed and running. The virtual machines are provisioned under each RHEL KVM host machine. 4.3.1. Required machines The smallest OpenShift Container Platform clusters require the following hosts: Table 4.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. 
Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To improve high availability of your cluster, distribute the control plane machines over different RHEL instances on at least two physical machines. The bootstrap, control plane, and compute machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. See Red Hat Enterprise Linux technology capabilities and limits . 4.3.2. Network connectivity requirements The OpenShift Container Platform installer creates the Ignition files, which are necessary for all the Red Hat Enterprise Linux CoreOS (RHCOS) virtual machines. The automated installation of OpenShift Container Platform is performed by the bootstrap machine. It starts the installation of OpenShift Container Platform on each node, starts the Kubernetes cluster, and then finishes. During this bootstrap, the virtual machine must have an established network connection either through a Dynamic Host Configuration Protocol (DHCP) server or static IP address. 4.3.3. IBM Z network connectivity requirements To install on IBM Z(R) under RHEL KVM, you need: A RHEL KVM host configured with an OSA or RoCE network adapter. Either a RHEL KVM host that is configured to use bridged networking in libvirt or MacVTap to connect the network to the guests. See Types of virtual machine network connections . 4.3.4. Host machine resource requirements The RHEL KVM host in your environment must meet the following requirements to host the virtual machines that you plan for the OpenShift Container Platform environment. See Enabling virtualization on IBM Z(R) . You can install OpenShift Container Platform version 4.16 on the following IBM(R) hardware: IBM(R) z16 (all models), IBM(R) z15 (all models), IBM(R) z14 (all models) IBM(R) LinuxONE 4 (all models), IBM(R) LinuxONE III (all models), IBM(R) LinuxONE Emperor II, IBM(R) LinuxONE Rockhopper II 4.3.5. Minimum IBM Z system environment Hardware requirements The equivalent of six Integrated Facilities for Linux (IFL), which are SMT2 enabled, for each cluster. At least one network connection to both connect to the LoadBalancer service and to serve data for traffic outside the cluster. Note You can use dedicated or shared IFLs to assign sufficient compute resources. Resource sharing is one of the key strengths of IBM Z(R). However, you must adjust capacity correctly on each hypervisor layer and ensure sufficient resources for every OpenShift Container Platform cluster. Important Since the overall performance of the cluster can be impacted, the LPARs that are used to set up the OpenShift Container Platform clusters must provide sufficient compute capacity. In this context, LPAR weight management, entitlements, and CPU shares on the hypervisor level play an important role. Operating system requirements One LPAR running on RHEL 8.6 or later with KVM, which is managed by libvirt On your RHEL KVM host, set up: Three guest virtual machines for OpenShift Container Platform control plane machines Two guest virtual machines for OpenShift Container Platform compute machines One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine 4.3.6. 
Minimum resource requirements Each cluster virtual machine must meet the following minimum requirements: Virtual Machine Operating System vCPU [1] Virtual RAM Storage IOPS Bootstrap RHCOS 4 16 GB 100 GB N/A Control plane RHCOS 4 16 GB 100 GB N/A Compute RHCOS 2 8 GB 100 GB N/A One physical core (IFL) provides two logical cores (threads) when SMT-2 is enabled. The hypervisor can provide two or more vCPUs. 4.3.7. Preferred IBM Z system environment Hardware requirements Three LPARS that each have the equivalent of six IFLs, which are SMT2 enabled, for each cluster. Two network connections to both connect to the LoadBalancer service and to serve data for traffic outside the cluster. Operating system requirements For high availability, two or three LPARs running on RHEL 8.6 or later with KVM, which are managed by libvirt. On your RHEL KVM host, set up: Three guest virtual machines for OpenShift Container Platform control plane machines, distributed across the RHEL KVM host machines. At least six guest virtual machines for OpenShift Container Platform compute machines, distributed across the RHEL KVM host machines. One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine. To ensure the availability of integral components in an overcommitted environment, increase the priority of the control plane by using cpu_shares . Do the same for infrastructure nodes, if they exist. See schedinfo in IBM(R) Documentation. 4.3.8. Preferred resource requirements The preferred requirements for each cluster virtual machine are: Virtual Machine Operating System vCPU Virtual RAM Storage Bootstrap RHCOS 4 16 GB 120 GB Control plane RHCOS 8 16 GB 120 GB Compute RHCOS 6 8 GB 120 GB 4.3.9. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. Additional resources Recommended host practices for IBM Z(R) & IBM(R) LinuxONE environments 4.3.10. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. 
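For illustration only (the guide does not mandate a specific DHCP server, and the MAC address below is a placeholder), a host reservation for one control plane node in an ISC dhcpd configuration could look like the following, reusing the sample IP address and hostname from the DNS examples later in this section:
host control-plane0 { hardware ethernet 52:54:00:00:00:01; fixed-address 192.168.1.97; option host-name "control-plane0.ocp4.example.com"; }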
Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 4.3.10.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 4.3.10.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Note The RHEL KVM host must be configured to use bridged networking in libvirt or MacVTap to connect the network to the virtual machines. The virtual machines must have access to the network, which is attached to the RHEL KVM host. Virtual Networks, for example network address translation (NAT), within KVM are not a supported configuration. Table 4.2. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 4.3. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 4.4. 
Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines read the information and can sync the clock with the NTP servers. Additional resources Configuring chrony time service 4.3.11. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 4.5. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. 
These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 4.3.11.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 4.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 4.2. 
Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 4.3.12. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 4.6. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. 
Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 4.7. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 4.3.12.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 4.3. 
Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 4.4. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. 
Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Choose to perform either a fast track installation of Red Hat Enterprise Linux CoreOS (RHCOS) or a full installation of Red Hat Enterprise Linux CoreOS (RHCOS). For the full installation, you must set up an HTTP or HTTPS server to provide Ignition files and install images to the cluster nodes. For the fast track installation an HTTP or HTTPS server is not required, however, a DHCP server is required. See sections "Fast-track installation: Creating Red Hat Enterprise Linux CoreOS (RHCOS) machines" and "Full installation: Creating Red Hat Enterprise Linux CoreOS (RHCOS) machines". Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Setup the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. 
From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 4.5. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. 
Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 4.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
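For example, in a FIPS-constrained environment you might generate an ECDSA key instead; a minimal sketch with an illustrative file name:
$ ssh-keygen -t ecdsa -b 521 -N '' -f ~/.ssh/id_ecdsa_ocp
Apart from the file name, the remaining steps in this section (viewing the public key and adding the private key to the SSH agent) are the same regardless of the key type.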
View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 4.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on your provisioning machine. Prerequisites You have a machine that runs Linux, for example Red Hat Enterprise Linux (RHEL) 8, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 4.8. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. 
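On Linux, the procedure that follows reduces to a few commands; a condensed sketch, assuming an illustrative archive name and that you can write to /usr/local/bin:
$ tar xvf openshift-client-linux.tar.gz
$ sudo mv oc kubectl /usr/local/bin/
$ oc version --client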
Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.16. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.16 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 4.9. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . 
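A minimal sketch of this preparation, assuming an illustrative directory name of ocp4-ibmz:
$ mkdir ocp4-ibmz
$ vi ocp4-ibmz/install-config.yaml    # paste and customize the sample shown in the next section
$ cp ocp4-ibmz/install-config.yaml install-config.yaml.bak    # optional working copy; the installer consumes the original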
Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for IBM Z(R) 4.9.1. Sample install-config.yaml file for IBM Z You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{"auths": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not available on your OpenShift Container Platform nodes, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether on your OpenShift Container Platform nodes or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 
10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for IBM Z(R) infrastructure. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 15 The pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 4.9.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. 
Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 4.9.3. Configuring a three-node cluster Optionally, you can deploy zero compute machines in a minimal three node cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. 
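After such a cluster is installed, you can confirm that the control plane nodes were left schedulable; a quick check, assuming the oc CLI is logged in to the cluster:
$ oc get scheduler cluster -o jsonpath='{.spec.mastersSchedulable}'
On a three-node cluster this returns true, which corresponds to the mastersSchedulable setting described in the steps that follow.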
Prerequisites You have an existing install-config.yaml file. Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. Note The preferred resource for control plane nodes is six vCPUs and 21 GB. For three control plane nodes this is the memory + vCPU equivalent of a minimum five-node cluster. You should back the three nodes, each installed on a 120 GB disk, with three IFLs that are SMT2 enabled. The minimum tested setup is three vCPUs and 10 GB on a 120 GB disk for each control plane node. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 4.10. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin. OVNKubernetes is the only supported plugin during installation. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 4.10.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 4.8. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 spec.serviceNetwork array A block of IP addresses for services. 
The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 4.9. defaultNetwork object Field Type Description type string OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. OpenShift SDN is no longer available as an installation choice for new clusters. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 4.10. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify a configuration object for customizing the IPsec configuration. ipv4 object Specifies a configuration object for IPv4 settings. ipv6 object Specifies a configuration object for IPv6 settings. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 4.11. 
ovnKubernetesConfig.ipv4 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is 100.88.0.0/16 . internalJoinSubnet string If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . The default value is 100.64.0.0/16 . Table 4.12. ovnKubernetesConfig.ipv6 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the fd97::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is fd97::/64 . internalJoinSubnet string If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/64 . Table 4.13. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 4.14. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . 
This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. Specify Global to allow forwarding of all IP traffic. For new installations, the default is Restricted . For updates to OpenShift Container Platform 4.14 or later, the default is Global . ipv4 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. ipv6 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. Table 4.15. gatewayConfig.ipv4 object Field Type Description internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29 . Table 4.16. gatewayConfig.ipv6 object Field Type Description internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125 . Table 4.17. ipsecConfig object Field Type Description mode string Specifies the behavior of the IPsec implementation. Must be one of the following values: Disabled : IPsec is not enabled on cluster nodes. External : IPsec is enabled for network traffic with external hosts. Full : IPsec is enabled for pod traffic and network traffic with external hosts. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full Important Using OVNKubernetes can lead to a stack exhaustion problem on IBM Power(R). kubeProxyConfig object configuration (OpenShiftSDN container network interface only) The values for the kubeProxyConfig object are defined in the following table: Table 4.18. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 4.11. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. 
The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Note The installation program that generates the manifest and Ignition files is architecture specific and can be obtained from the client image mirror . The Linux version of the installation program runs on s390x only. This installer program is also available as a Mac OS version. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 4.12. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on IBM Z(R) infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) as Red Hat Enterprise Linux (RHEL) guest virtual machines. 
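Before provisioning the guests, it can help to confirm that the assets generated in the previous section are in place; a minimal sketch, again assuming the illustrative installation directory name ocp4-ibmz:
$ ls ocp4-ibmz/*.ign
ocp4-ibmz/bootstrap.ign  ocp4-ibmz/master.ign  ocp4-ibmz/worker.ign
$ ls ocp4-ibmz/auth/
kubeadmin-password  kubeconfig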
When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. You can perform a fast-track installation of RHCOS that uses a prepackaged QEMU copy-on-write (QCOW2) disk image. Alternatively, you can perform a full installation on a new QCOW2 disk image. To add further security to your system, you can optionally install RHCOS using IBM(R) Secure Execution before proceeding to the fast-track installation. 4.12.1. Installing RHCOS using IBM Secure Execution Before you install RHCOS using IBM(R) Secure Execution, you must prepare the underlying infrastructure. Prerequisites IBM(R) z15 or later, or IBM(R) LinuxONE III or later. Red Hat Enterprise Linux (RHEL) 8 or later. You have a bootstrap Ignition file. The file is not protected, enabling others to view and edit it. You have verified that the boot image has not been altered after installation. You must run all your nodes as IBM(R) Secure Execution guests. Procedure Prepare your RHEL KVM host to support IBM(R) Secure Execution. By default, KVM hosts do not support guests in IBM(R) Secure Execution mode. To support guests in IBM(R) Secure Execution mode, KVM hosts must boot in LPAR mode with the kernel parameter specification prot_virt=1 . To enable prot_virt=1 on RHEL 8, follow these steps: Navigate to /boot/loader/entries/ to modify your bootloader configuration file *.conf . Add the kernel command line parameter prot_virt=1 . Run the zipl command and reboot your system. KVM hosts that successfully start with support for IBM(R) Secure Execution for Linux issue the following kernel message: prot_virt: Reserving <amount>MB as ultravisor base storage. To verify that the KVM host now supports IBM(R) Secure Execution, run the following command: # cat /sys/firmware/uv/prot_virt_host Example output 1 The value of this attribute is 1 for Linux instances that detect their environment as consistent with that of a secure host. For other instances, the value is 0. Add your host keys to the KVM guest via Ignition. During the first boot, RHCOS looks for your host keys to re-encrypt itself with them. RHCOS searches for files starting with ibm-z-hostkey- in the /etc/se-hostkeys directory. All host keys, for each machine the cluster is running on, must be loaded into the directory by the administrator. After first boot, you cannot run the VM on any other machines. Note You need to prepare your Ignition file on a safe system. For example, another IBM(R) Secure Execution guest. For example: { "ignition": { "version": "3.0.0" }, "storage": { "files": [ { "path": "/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt", "contents": { "source": "data:;base64,<base64 encoded hostkey document>" }, "mode": 420 }, { "path": "/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt", "contents": { "source": "data:;base64,<base64 encoded hostkey document>" }, "mode": 420 } ] } } ``` Note You can add as many host keys as required if you want your node to be able to run on multiple IBM Z(R) machines. 
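A small helper sketch for building those entries, assuming the host key certificates sit in the current directory and follow the ibm-z-hostkey- naming convention:
#!/usr/bin/env bash
# Sketch: emit one Ignition storage.files entry per host key certificate.
# Paste the emitted entries (dropping the trailing comma on the last one) into the "files" array shown above.
for crt in ibm-z-hostkey-*.crt; do
  enc=$(base64 -w0 "${crt}")
  printf '{ "path": "/etc/se-hostkeys/%s", "contents": { "source": "data:;base64,%s" }, "mode": 420 },\n' "${crt}" "${enc}"
done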
To generate the Base64 encoded string, run the following command: base64 <your-hostkey>.crt Compared to guests not running IBM(R) Secure Execution, the first boot of the machine is longer because the entire image is encrypted with a randomly generated LUKS passphrase before the Ignition phase. Add Ignition protection To protect the secrets that are stored in the Ignition config file from being read or even modified, you must encrypt the Ignition config file. Note To achieve the desired security, Ignition logging and local login are disabled by default when running IBM(R) Secure Execution. Fetch the public GPG key for the secex-qemu.qcow2 image and encrypt the Ignition config with the key by running the following command: gpg --recipient-file /path/to/ignition.gpg.pub --yes --output /path/to/config.ign.gpg --verbose --armor --encrypt /path/to/config.ign Follow the fast-track installation of RHCOS to install nodes by using the IBM(R) Secure Execution QCOW image. Note Before you start the VM, replace serial=ignition with serial=ignition_crypted , and add the launchSecurity parameter. Verification When you have completed the fast-track installation of RHCOS and Ignition runs at the first boot, verify if decryption is successful. If the decryption is successful, you can expect an output similar to the following example: Example output [ 2.801433] systemd[1]: Starting coreos-ignition-setup-user.service - CoreOS Ignition User Config Setup... [ 2.803959] coreos-secex-ignition-decrypt[731]: gpg: key <key_name>: public key "Secure Execution (secex) 38.20230323.dev.0" imported [ 2.808874] coreos-secex-ignition-decrypt[740]: gpg: encrypted with rsa4096 key, ID <key_name>, created <yyyy-mm-dd> [ OK ] Finished coreos-secex-igni...S Secex Ignition Config Decryptor. If the decryption fails, you can expect an output similar to the following example: Example output Starting coreos-ignition-s...reOS Ignition User Config Setup... [ 2.863675] coreos-secex-ignition-decrypt[729]: gpg: key <key_name>: public key "Secure Execution (secex) 38.20230323.dev.0" imported [ 2.869178] coreos-secex-ignition-decrypt[738]: gpg: encrypted with RSA key, ID <key_name> [ 2.870347] coreos-secex-ignition-decrypt[738]: gpg: public key decryption failed: No secret key [ 2.870371] coreos-secex-ignition-decrypt[738]: gpg: decryption failed: No secret key Additional resources Introducing IBM(R) Secure Execution for Linux Linux as an IBM(R) Secure Execution host or guest Setting up IBM(R) Secure Execution on IBM Z 4.12.2. Configuring NBDE with static IP in an IBM Z or IBM LinuxONE environment Enabling NBDE disk encryption in an IBM Z(R) or IBM(R) LinuxONE environment requires additional steps, which are described in detail in this section. Prerequisites You have set up the External Tang Server. See Network-bound disk encryption for instructions. You have installed the butane utility. You have reviewed the instructions for how to create machine configs with Butane. Procedure Create Butane configuration files for the control plane and compute nodes. 
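After you write the Butane files shown in the example that follows, they are transcoded with the butane utility into machine config manifests; a minimal sketch, assuming the master-storage.bu file from the example below and an analogous, hypothetical worker-storage.bu for compute nodes:
$ butane master-storage.bu -o master-storage.yaml
$ butane worker-storage.bu -o worker-storage.yaml
Exactly where the resulting manifests are applied, for example by copying them into the installation directory before you generate the Ignition configs, depends on your workflow.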
The following example of a Butane configuration for a control plane node creates a file named master-storage.bu for disk encryption: variant: openshift version: 4.16.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 2 1 The cipher option is only required if FIPS mode is enabled. Omit the entry if FIPS is disabled. 2 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Create a customized initramfs file to boot the machine, by running the following command: USD coreos-installer pxe customize \ /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img \ --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append \ ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none \ --dest-karg-append nameserver=<nameserver_ip> \ --dest-karg-append rd.neednet=1 -o \ /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img Note Before first boot, you must customize the initramfs for each node in the cluster, and add PXE kernel parameters. Create a parameter file that includes ignition.platform.id=metal and ignition.firstboot . Example kernel parameter file for the control plane machine cio_ignore=all,!condev rd.neednet=1 \ console=ttysclp0 \ ignition.firstboot ignition.platform.id=metal \ coreos.inst.ignition_url=http://<http_server>/master.ign \ 1 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ 2 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 \ rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 \ zfcp.allow_lun_scan=0 1 Specify the location of the Ignition config file. Use master.ign or worker.ign . Only HTTP and HTTPS protocols are supported. 2 Specify the location of the rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. Note Write all options in the parameter file as a single line and make sure you have no newline characters. Additional resources Creating machine configs with Butane 4.12.3. Fast-track installation by using a prepackaged QCOW2 disk image Complete the following steps to create the machines in a fast-track installation of Red Hat Enterprise Linux CoreOS (RHCOS), importing a prepackaged Red Hat Enterprise Linux CoreOS (RHCOS) QEMU copy-on-write (QCOW2) disk image. Prerequisites At least one LPAR running on RHEL 8.6 or later with KVM, referred to as RHEL KVM host in this procedure. The KVM/QEMU hypervisor is installed on the RHEL KVM host. A domain name server (DNS) that can perform hostname and reverse lookup for the nodes. A DHCP server that provides IP addresses. Procedure Obtain the RHEL QEMU copy-on-write (QCOW2) disk image file from the Product Downloads page on the Red Hat Customer Portal or from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. 
You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate RHCOS QCOW2 image described in the following procedure. Download the QCOW2 disk image and Ignition files to a common directory on the RHEL KVM host. For example: /var/lib/libvirt/images Note The Ignition files are generated by the OpenShift Container Platform installer. Create a new disk image with the QCOW2 disk image backing file for each KVM guest node. USD qemu-img create -f qcow2 -F qcow2 -b /var/lib/libvirt/images/{source_rhcos_qemu} /var/lib/libvirt/images/{vmname}.qcow2 {size} Create the new KVM guest nodes using the Ignition file and the new disk image. USD virt-install --noautoconsole \ --connect qemu:///system \ --name <vm_name> \ --memory <memory_mb> \ --vcpus <vcpus> \ --disk <disk> \ --launchSecurity type="s390-pv" \ 1 --import \ --network network=<virt_network_parm>,mac=<mac_address> \ --disk path=<ign_file>,format=raw,readonly=on,serial=ignition,startup_policy=optional 2 1 If IBM(R) Secure Execution is enabled, add the launchSecurity type="s390-pv" parameter. 2 If IBM(R) Secure Execution is enabled, replace serial=ignition with serial=ignition_crypted . 4.12.4. Full installation on a new QCOW2 disk image Complete the following steps to create the machines in a full installation on a new QEMU copy-on-write (QCOW2) disk image. Prerequisites At least one LPAR running on RHEL 8.6 or later with KVM, referred to as RHEL KVM host in this procedure. The KVM/QEMU hypervisor is installed on the RHEL KVM host. A domain name server (DNS) that can perform hostname and reverse lookup for the nodes. An HTTP or HTTPS server is set up. Procedure Obtain the RHEL kernel, initramfs, and rootfs files from the Product Downloads page on the Red Hat Customer Portal or from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate RHCOS QCOW2 image described in the following procedure. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel: rhcos-<version>-live-kernel-<architecture> initramfs: rhcos-<version>-live-initramfs.<architecture>.img rootfs: rhcos-<version>-live-rootfs.<architecture>.img Move the downloaded RHEL live kernel, initramfs, and rootfs as well as the Ignition files to an HTTP or HTTPS server before you launch virt-install . Note The Ignition files are generated by the OpenShift Container Platform installer. Create the new KVM guest nodes using the RHEL kernel, initramfs, and Ignition files, the new disk image, and adjusted parm line arguments. 
USD virt-install \ --connect qemu:///system \ --name <vm_name> \ --memory <memory_mb> \ --vcpus <vcpus> \ --location <media_location>,kernel=<rhcos_kernel>,initrd=<rhcos_initrd> \ / 1 --disk <vm_name>.qcow2,size=<image_size>,cache=none,io=native \ --network network=<virt_network_parm> \ --boot hd \ --extra-args "rd.neednet=1" \ --extra-args "coreos.inst.install_dev=/dev/<block_device>" \ --extra-args "coreos.inst.ignition_url=http://<http_server>/bootstrap.ign" \ 2 --extra-args "coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img" \ 3 --extra-args "ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns>" \ --noautoconsole \ --wait 1 For the --location parameter, specify the location of the kernel/initrd on the HTTP or HTTPS server. 2 Specify the location of the Ignition config file. Use bootstrap.ign , master.ign , or worker.ign . Only HTTP and HTTPS protocols are supported. 3 Specify the location of the rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. 4.12.5. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 4.12.5.1. Networking options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking on your RHCOS nodes for ISO installations. The examples describe how to use the ip= and nameserver= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= and nameserver= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page. The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. 
If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 4.13. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Your machines have direct internet access or have an HTTP or HTTPS proxy available. 
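As an optional sanity check before waiting on the bootstrap, you can confirm that the Ignition configs and the rootfs artifact are reachable at the URLs passed to the nodes; a minimal sketch, reusing the placeholder HTTP server from the earlier examples:
$ curl -sI http://<http_server>/bootstrap.ign | head -n1
$ curl -sI http://<http_server>/master.ign | head -n1
$ curl -sI http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img | head -n1
Each request should return an HTTP 200 status.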
Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.29.4 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 4.14. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 4.15. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. 
Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 4.16. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. 
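In addition to the interactive watch command used in the following procedure, a non-interactive polling loop can be convenient in automation. The following is a minimal sketch rather than part of the official procedure; the awk field positions assume the default oc get clusteroperators column layout, in which AVAILABLE is the third column and DEGRADED is the fifth.
# minimal sketch: add handling for transient API errors before relying on this in automation
until oc get clusteroperators --no-headers | \
    awk '{ if ($3 != "True" || $5 == "True") pending++ } END { exit pending }'; do
  echo "Waiting for all cluster Operators to become available..."
  sleep 30
done
echo "All cluster Operators report Available=True and Degraded=False"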
Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False 36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m operator-lifecycle-manager-packageserver 4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m Configure the Operators that are not available. 4.16.1. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 4.16.1.1. Configuring registry storage for IBM Z As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster on IBM Z(R). You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. 
Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.16 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. Run: Then, change the line to 4.16.1.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 4.17. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False 36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m operator-lifecycle-manager-packageserver 4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 
Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from the Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Postinstallation machine configuration tasks documentation for more information. 4.18. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.16, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service How to generate SOSREPORT within OpenShift4 nodes without SSH . 4.19. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . | [
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"prot_virt: Reserving <amount>MB as ultravisor base storage.",
"cat /sys/firmware/uv/prot_virt_host",
"1",
"{ \"ignition\": { \"version\": \"3.0.0\" }, \"storage\": { \"files\": [ { \"path\": \"/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt\", \"contents\": { \"source\": \"data:;base64,<base64 encoded hostkey document>\" }, \"mode\": 420 }, { \"path\": \"/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt\", \"contents\": { \"source\": \"data:;base64,<base64 encoded hostkey document>\" }, \"mode\": 420 } ] } } ```",
"base64 <your-hostkey>.crt",
"gpg --recipient-file /path/to/ignition.gpg.pub --yes --output /path/to/config.ign.gpg --verbose --armor --encrypt /path/to/config.ign",
"[ 2.801433] systemd[1]: Starting coreos-ignition-setup-user.service - CoreOS Ignition User Config Setup [ 2.803959] coreos-secex-ignition-decrypt[731]: gpg: key <key_name>: public key \"Secure Execution (secex) 38.20230323.dev.0\" imported [ 2.808874] coreos-secex-ignition-decrypt[740]: gpg: encrypted with rsa4096 key, ID <key_name>, created <yyyy-mm-dd> [ OK ] Finished coreos-secex-igni...S Secex Ignition Config Decryptor.",
"Starting coreos-ignition-s...reOS Ignition User Config Setup [ 2.863675] coreos-secex-ignition-decrypt[729]: gpg: key <key_name>: public key \"Secure Execution (secex) 38.20230323.dev.0\" imported [ 2.869178] coreos-secex-ignition-decrypt[738]: gpg: encrypted with RSA key, ID <key_name> [ 2.870347] coreos-secex-ignition-decrypt[738]: gpg: public key decryption failed: No secret key [ 2.870371] coreos-secex-ignition-decrypt[738]: gpg: decryption failed: No secret key",
"variant: openshift version: 4.16.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 2",
"coreos-installer pxe customize /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none --dest-karg-append nameserver=<nameserver_ip> --dest-karg-append rd.neednet=1 -o /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 ignition.firstboot ignition.platform.id=metal coreos.inst.ignition_url=http://<http_server>/master.ign \\ 1 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 2 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 zfcp.allow_lun_scan=0",
"qemu-img create -f qcow2 -F qcow2 -b /var/lib/libvirt/images/{source_rhcos_qemu} /var/lib/libvirt/images/{vmname}.qcow2 {size}",
"virt-install --noautoconsole --connect qemu:///system --name <vm_name> --memory <memory_mb> --vcpus <vcpus> --disk <disk> --launchSecurity type=\"s390-pv\" \\ 1 --import --network network=<virt_network_parm>,mac=<mac_address> --disk path=<ign_file>,format=raw,readonly=on,serial=ignition,startup_policy=optional 2",
"virt-install --connect qemu:///system --name <vm_name> --memory <memory_mb> --vcpus <vcpus> --location <media_location>,kernel=<rhcos_kernel>,initrd=<rhcos_initrd> \\ / 1 --disk <vm_name>.qcow2,size=<image_size>,cache=none,io=native --network network=<virt_network_parm> --boot hd --extra-args \"rd.neednet=1\" --extra-args \"coreos.inst.install_dev=/dev/<block_device>\" --extra-args \"coreos.inst.ignition_url=http://<http_server>/bootstrap.ign\" \\ 2 --extra-args \"coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img\" \\ 3 --extra-args \"ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns>\" --noautoconsole --wait",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.29.4 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False 36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m operator-lifecycle-manager-packageserver 4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.16 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False 36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m operator-lifecycle-manager-packageserver 4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_ibm_z_and_ibm_linuxone/installing-ibm-z-kvm |
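As a follow-up to the pod verification step above, a short sketch along the following lines surfaces any pods that are not running and captures their recent logs in one pass. This is not part of the official procedure; the field selector and the 20-line tail are illustrative choices.
# list pods that are not Running or Succeeded and print their most recent log lines
oc get pods --all-namespaces --no-headers \
    --field-selector=status.phase!=Running,status.phase!=Succeeded \
    -o custom-columns=NS:.metadata.namespace,NAME:.metadata.name |
while read -r namespace pod; do
  echo "== ${namespace}/${pod} =="
  oc logs --tail=20 -n "${namespace}" "${pod}" || true
done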
Chapter 1. Deploying a Block Storage service custom back end | Chapter 1. Deploying a Block Storage service custom back end The Red Hat OpenStack Platform director installs and manages a complete, enterprise-grade OpenStack deployment with minimal manual configuration. For more information about the director, see the Director Installation and Usage guide. The OpenStack environment that the director creates is called the overcloud. The overcloud contains all the components that provide services to end users, including Block Storage. This document provides guidance on how to deploy a custom back end to the Block Storage service (cinder) on the overcloud. By default, the Block Storage service is installed on Controller nodes. Prerequisites You have already deployed the overcloud with the director. The overcloud has a functioning Block Storage service. You are familiar with Block Storage concepts and configuration. For more information about Block Storage, see Block Storage and Volumes in the Storage Guide . Warning This procedure has been tested successfully in limited use cases. Ensure that you test your planned deployment on a non-production environment first. If you have any questions, contact Red Hat support. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/custom_block_storage_back_end_deployment_guide/assembly-custom-block-storage-back-ends |
2.5. Using Referrals | 2.5. Using Referrals Referrals tell client applications which server to contact for a specific piece of information. This redirection occurs when a client application requests a directory entry that does not exist on the local server or when a database has been taken off-line for maintenance. This section contains the following information about referrals: Section 2.5.1, "Starting the Server in Referral Mode" Section 2.5.2, "Setting Default Referrals" Section 2.5.3, "Creating Smart Referrals" Section 2.5.4, "Creating Suffix Referrals" For conceptual information on how to use referrals in the directory, see the Red Hat Directory Server Deployment Guide . 2.5.1. Starting the Server in Referral Mode Referrals are used to redirect client applications to another server while the current server is unavailable or when the client requests information that is not held on the current server. For example, starting Directory Server in referral mode while there are configuration changes being made to Directory Server will refer all clients to another supplier while that server is unavailable. Starting Directory Server in referral mode is done with the refer command. Run nsslapd with the refer option. /etc/dirsrv/slapd- instance_name / is the directory where the Directory Server configuration files are. This is the default location on Red Hat Enterprise Linux. port is the optional port number of Directory Server to start in referral mode. referral_url is the referral returned to clients. The format of an LDAP URL is covered in Appendix C, LDAP URLs . 2.5.2. Setting Default Referrals Directory Server returns default referrals to client applications that submit operations on a DN not contained within any of the suffixes maintained by the directory. The following procedures describe setting a default referral for the directory using the command line. 2.5.2.1. Setting a Default Referral Using the Command Line Use the dsconf config replace command, to set the default referral in the nsslapd-referral parameter. For example, to set ldap://directory.example.com/ as the default referral: 2.5.3. Creating Smart Referrals Smart referrals map a directory entry or directory tree to a specific LDAP URL. Using smart referrals, client applications can be referred to a specific server or a specific entry on a specific server. For example, a client application requests the directory entry uid=jdoe,ou=people,dc=example,dc=com . A smart referral is returned to the client that points to the entry cn=john doe,o=people,ou=europe,dc=example,dc=com on the server directory.europe.example.com . The way the directory uses smart referrals conforms to the standard specified in RFC 2251 section 4.1.11. The RFC can be downloaded at http://www.ietf.org/rfc/rfc2251.txt . 2.5.3.1. Creating Smart Referrals Using the Command Line To create a smart referral, create the relevant directory entry with the referral object class and set the ref attribute to the referral LDAP URL. For example, to create a smart referral named uid=user,ou=people,dc=example,dc=com that refers to ldap://directory.europe.example.com/cn=user,ou=people,ou=europe,dc=example,dc=com : Note Directory Server ignores any information after a space in an LDAP URL. For this reason, use %20 instead of spaces in LDAP URLs used as a referral. Use the -M option with ldapadd if there is already a referral in the DN path. For more information on smart referrals, see the Directory Server Deployment Guide . 2.5.4. 
Creating Suffix Referrals The following procedure describes creating a referral in a suffix . This means that the suffix processes operations using a referral rather than a database or database link. Warning When you configure a suffix to return referrals, the ACIs contained in the database associated with the suffix are ignored. In addition, creating suffix referrals applies only to non-replicated suffixes. 2.5.4.1. Creating Suffix Referrals Using the Command Line To create a suffix referral: Optionally, create a root or sub-suffix, if it does not already exist. For details, see Section 2.1.1, "Creating Suffixes" . Add the referral to the suffix. For example: 2.5.4.2. Creating Suffix Referrals Using the Web Console To create a suffix referral: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Open the Database menu. Optionally, create a root or sub-suffix, if it does not already exist. For details, see Section 2.1.1, "Creating Suffixes" . Select the suffix in the list, and open the Referrals tab. Click Create Referral . Fill the fields to create the referral URL. Click Create Referral . | [
"ns-slapd refer -D /etc/dirsrv/slapd- instance_name [-p port ] -r referral_url",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-referral=\"ldap://directory.example.com/\"",
"ldapadd -D \"cn=Directory Manager\" -W -p 389 -h server2.example.com -x dn: uid=user,ou=people,dc=example,dc=com objectclass: top objectclass: person objectclass: organizationalPerson objectclass: inetOrgPerson objectclass: referral sn: user uid: user cn: user ref: ldap://directory.europe.example.com/cn=user,ou=people,ou=europe,dc=example,dc=com",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend suffix set --add-referral=\"ldap://directory.example.com/\" database_name"
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/configuring_directory_databases-using_referrals |
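As a complement to the commands above, the entry that holds a smart referral can be read back directly, rather than the client being redirected, by sending the ManageDsaIT control. The following sketch reuses the connection details from the ldapadd example; the -M option is the OpenLDAP client flag for this control, the same flag the ldapadd note above mentions.
# view the smart referral entry itself instead of following the referral
ldapsearch -D "cn=Directory Manager" -W -p 389 -h server2.example.com -x -M \
  -b "uid=user,ou=people,dc=example,dc=com" -s base "(objectclass=referral)" ref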
Chapter 2. Getting started | Chapter 2. Getting started 2.1. Running the Maven Plugin The Maven plugin is run by including a reference to the plugin inside your application's pom.xml file. When the application is built, the Maven plugin is run and generates the reports for analysis. Prerequisites Java Development Kit (JDK) installed. MTR supports the following JDKs: OpenJDK 11 Oracle JDK 11 8 GB RAM macOS installation: the value of maxproc must be 2048 or greater. The Maven settings.xml file configured to direct Maven to use the JBoss EAP Maven repository. To run the Maven plugin on OpenJDK 17 or Oracle JDK17, you first need to set MAVEN_OPTS on the command line by running the following command: export MAVEN_OPTS="--add-modules=java.se --add-opens java.base/java.lang=ALL-UNNAMED --add-opens java.base/java.util=ALL-UNNAMED --add-opens java.base/java.util.stream=ALL-UNNAMED" Procedure Add the following <plugin> to your application's pom.xml file: [...] <plugin> <groupId>org.jboss.windup.plugin</groupId> <artifactId>mtr-maven-plugin</artifactId> <version>1.2.7.GA-redhat-00001</version> <executions> <execution> <id>run-windup</id> <phase>package</phase> <goals> <goal>windup</goal> </goals> </execution> </executions> <configuration> <target>eap:7</target> 1 </configuration> </plugin> [...] 1 Specify a migration target. At least one migration target must be supplied within the configuration. Add --add-modules=java.se to the MAVEN_OPTS environment variable. export MAVEN_OPTS=--add-modules=java.se Build the project: USD mvn clean install You can access the generated reports. 2.2. Running the Maven Plugin with multiple modules To use the Maven plugin in a project with multiple modules, place the configuration inside the parent's pom.xml . The Maven plugin will generate a single report that contains the analysis for the parent and any child modules. Note It is strongly recommended to set inherited to false in multi-module projects; otherwise, the Maven plugin will run when each child is compiled, resulting in multiple executions of the Maven plugin against the child modules. Setting inherited to false results in each project being analyzed a single time and drastically decreased run times. To run the Maven plugin in a project with multiple modules, perform the following steps. Include the following plugin inside the parent project's pom.xml . The following is a sample pom.xml for a parent module. <plugin> <groupId>org.jboss.windup.plugin</groupId> <artifactId>mtr-maven-plugin</artifactId> <version>1.2.7.GA-redhat-00001</version> <inherited>false</inherited> <executions> <execution> <id>run-windup</id> <phase>package</phase> <goals> <goal>windup</goal> </goals> </execution> </executions> <configuration> <input>USD{project.basedir}</input> <target>eap:7</target> 1 <windupHome>>/PATH/TO/CLI/<</windupHome> </configuration> </plugin> 1 Specify a migration target. At least one migration target must be supplied within the configuration. This pom.xml file differs from the default in the following attributes: inherited : Defined at the plugin level, this attribute indicates whether or not this configuration should be used in child modules. Set to false for performance improvements. input : Specifies the path to the directory containing the projects to be analyzed. This attribute defaults to {project.basedir}/src/main , and should be defined if the parent project does not have source code to analyze. windupHome : A path to an extracted copy of the MTR CLI. 
This attribute is optional, but is recommended as a performance improvement. The above example demonstrates a set of recommended arguments. Build the parent project. During the build process, the Maven plugin runs against all children in the project without further configuration. USD mvn clean install Once completed, you can access the generated reports. This report contains the analysis for the parent and all children. 2.3. Accessing the report When you run Migration Toolkit for Runtimes, the report is generated in the OUTPUT_REPORT_DIRECTORY that you specify using the outputDirectory argument in the pom.xml . Upon completion of the build, you will see the following message in the build log. The output directory contains the following files and subdirectories: See the Reviewing the reports section of the MTR CLI Guide for information on the MTR reports and how to use them to assess your migration or modernization effort. | [
"export MAVEN_OPTS=\"--add-modules=java.se --add-opens java.base/java.lang=ALL-UNNAMED --add-opens java.base/java.util=ALL-UNNAMED --add-opens java.base/java.util.stream=ALL-UNNAMED\"",
"[...] <plugin> <groupId>org.jboss.windup.plugin</groupId> <artifactId>mtr-maven-plugin</artifactId> <version>1.2.7.GA-redhat-00001</version> <executions> <execution> <id>run-windup</id> <phase>package</phase> <goals> <goal>windup</goal> </goals> </execution> </executions> <configuration> <target>eap:7</target> 1 </configuration> </plugin> [...]",
"export MAVEN_OPTS=--add-modules=java.se",
"mvn clean install",
"<plugin> <groupId>org.jboss.windup.plugin</groupId> <artifactId>mtr-maven-plugin</artifactId> <version>1.2.7.GA-redhat-00001</version> <inherited>false</inherited> <executions> <execution> <id>run-windup</id> <phase>package</phase> <goals> <goal>windup</goal> </goals> </execution> </executions> <configuration> <input>USD{project.basedir}</input> <target>eap:7</target> 1 <windupHome>>/PATH/TO/CLI/<</windupHome> </configuration> </plugin>",
"mvn clean install",
"Windup report created: <OUTPUT_REPORT_DIRECTORY>/index.html",
"<OUTPUT_REPORT_DIRECTORY>/ ├── index.html // Landing page for the report ├── <EXPORT_FILE>.csv // Optional export of data in CSV format ├── graph/ // Generated graphs used for indexing ├── reports/ // Generated HTML reports ├── stats/ // Performance statistics"
] | https://docs.redhat.com/en/documentation/migration_toolkit_for_runtimes/1.2/html/maven_plugin_guide/getting_started |
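After the build completes, a quick check such as the following confirms where the report was generated by scanning the build log for the "Windup report created" line shown above; the log file name used here is arbitrary.
# capture the build log and surface the generated report location
mvn clean install | tee build.log
grep "Windup report created:" build.log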
4.2. Power Management by Proxy in Red Hat Virtualization | 4.2. Power Management by Proxy in Red Hat Virtualization The Red Hat Virtualization Manager does not communicate directly with fence agents. Instead, the Manager uses a proxy to send power management commands to a host power management device. The Manager uses VDSM to execute power management device actions, so another host in the environment is used as a fencing proxy. You can select between: Any host in the same cluster as the host requiring fencing. Any host in the same data center as the host requiring fencing. A viable fencing proxy host has a status of either UP or Maintenance . | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/technical_reference/power_management_by_proxy_in_red_hat_enterprise_virtualization |
Chapter 120. KafkaMirrorMakerConsumerSpec schema reference | Chapter 120. KafkaMirrorMakerConsumerSpec schema reference Used in: KafkaMirrorMakerSpec Full list of KafkaMirrorMakerConsumerSpec schema properties Configures a MirrorMaker consumer. 120.1. numStreams Use the consumer.numStreams property to configure the number of streams for the consumer. You can increase the throughput in mirroring topics by increasing the number of consumer threads. Consumer threads belong to the consumer group specified for Kafka MirrorMaker. Topic partitions are assigned across the consumer threads, which consume messages in parallel. 120.2. offsetCommitInterval Use the consumer.offsetCommitInterval property to configure an offset auto-commit interval for the consumer. You can specify the regular time interval at which an offset is committed after Kafka MirrorMaker has consumed data from the source Kafka cluster. The time interval is set in milliseconds, with a default value of 60,000. 120.3. config Use the consumer.config properties to configure Kafka options for the consumer as keys. The values can be one of the following JSON types: String Number Boolean Exceptions You can specify and configure the options listed in the Apache Kafka configuration documentation for consumers . However, Streams for Apache Kafka takes care of configuring and managing options related to the following, which cannot be changed: Kafka cluster bootstrap address Security (encryption, authentication, and authorization) Consumer group identifier Interceptors Properties with the following prefixes cannot be set: bootstrap.servers group.id interceptor.classes sasl. security. ssl. If the config property contains an option that cannot be changed, it is disregarded, and a warning message is logged to the Cluster Operator log file. All other supported options are forwarded to MirrorMaker, including the following exceptions to the options configured by Streams for Apache Kafka: Any ssl configuration for supported TLS versions and cipher suites Important The Cluster Operator does not validate keys or values in the config object provided. If an invalid configuration is provided, the MirrorMaker cluster might not start or might become unstable. In this case, fix the configuration so that the Cluster Operator can roll out the new configuration to all MirrorMaker nodes. 120.4. groupId Use the consumer.groupId property to configure a consumer group identifier for the consumer. Kafka MirrorMaker uses a Kafka consumer to consume messages, behaving like any other Kafka consumer client. Messages consumed from the source Kafka cluster are mirrored to a target Kafka cluster. A group identifier is required, as the consumer needs to be part of a consumer group for the assignment of partitions. 120.5. KafkaMirrorMakerConsumerSpec schema properties Property Property type Description numStreams integer Specifies the number of consumer stream threads to create. offsetCommitInterval integer Specifies the offset auto-commit interval in ms. Default value is 60000. bootstrapServers string A list of host:port pairs for establishing the initial connection to the Kafka cluster. groupId string A unique string that identifies the consumer group this consumer belongs to. authentication KafkaClientAuthenticationTls , KafkaClientAuthenticationScramSha256 , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationPlain , KafkaClientAuthenticationOAuth Authentication configuration for connecting to the cluster. config map The MirrorMaker consumer config. 
Properties with the following prefixes cannot be set: ssl., bootstrap.servers, group.id, sasl., security., interceptor.classes (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols). tls ClientTls TLS configuration for connecting MirrorMaker to the cluster. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-kafkamirrormakerconsumerspec-reference |
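For illustration, the consumer tuning options described above can be applied to an existing resource with a merge patch. This is a sketch only: the resource name, namespace, and the chosen values are placeholders, the nested field names follow the schema table above, and the config key is an ordinary Kafka consumer option chosen as an example.
# adjust the consumer stream count, offset commit interval, and one consumer config option
oc patch kafkamirrormaker my-mirror-maker -n kafka --type merge \
  -p '{"spec":{"consumer":{"numStreams":2,"offsetCommitInterval":120000,"config":{"max.poll.records":100}}}}'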
function::ansi_set_color3 | function::ansi_set_color3 Name function::ansi_set_color3 - Set the ANSI Select Graphic Rendition mode. Synopsis Arguments fg Foreground color to set. bg Background color to set. attr Color attribute to set. Description Sends the ANSI code for Select Graphic Rendition mode for the given foreground color, Black (30), Blue (34), Green (32), Cyan (36), Red (31), Purple (35), Brown (33), Light Gray (37), the given background color, Black (40), Red (41), Green (42), Yellow (43), Blue (44), Magenta (45), Cyan (46), White (47), and the color attribute All attributes off (0), Intensity Bold (1), Underline Single (4), Blink Slow (5), Blink Rapid (6), Image Negative (7). | [
"ansi_set_color3(fg:long,bg:long,attr:long)"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-ansi-set-color3 |
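For example, the function can be exercised from the command line with a one-line probe. This is an illustrative sketch; the trailing ansi_reset_color call assumes the companion reset function from the same ansi tapset.
# print a bold, red-on-black marker at probe begin, then restore the default colors
stap -e 'probe begin { ansi_set_color3(31, 40, 1); println("ALERT"); ansi_reset_color(); exit() }'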
Chapter 19. Analyzing a core dump | Chapter 19. Analyzing a core dump To identify the cause of the system crash, you can use the crash utility, which provides an interactive prompt similar to the GNU Debugger (GDB). By using crash , you can analyze a core dump created by kdump , netdump , diskdump , or xendump , as well as a running Linux system. Alternatively, you can use the Kernel Oops Analyzer or the Kdump Helper tool. 19.1. Installing the crash utility The following information describes the required packages and the procedure to install the crash utility. The crash utility might not be installed by default on your RHEL 9 systems. crash is a tool to interactively analyze the state of a system while it is running, or after a kernel crash occurs and a core dump file is created. The core dump file is also known as the vmcore file. Procedure Enable the relevant repositories: Install the crash package: Install the kernel-debuginfo package: The kernel-debuginfo package corresponds to the running kernel and provides the data necessary for the dump analysis. 19.2. Running and exiting the crash utility The crash utility is a powerful tool for analyzing kdump . By running crash on a crash dump file, you can gain insights into the system's state at the time of the crash, identify the root cause of the issue, and troubleshoot kernel-related problems. Prerequisites Identify the currently running kernel (for example 5.14.0-1.el9.x86_64 ). Procedure To start the crash utility, two parameters must be passed to the command: The debug-info (a decompressed vmlinuz image), for example /usr/lib/debug/lib/modules/5.14.0-1.el9.x86_64/vmlinux provided through a specific kernel-debuginfo package. The actual vmcore file, for example /var/crash/127.0.0.1-2021-09-13-14:05:33/vmcore The resulting crash command then looks as follows: Use the same <kernel> version that was captured by kdump . Running the crash utility. The following example shows analyzing a core dump created on September 13, 2021 at 14:05, using the 5.14.0-1.el9.x86_64 kernel. To exit the interactive prompt and stop crash , type exit or q . Note The crash command can also be used as a powerful tool for debugging a live system. However, you must use it with caution to avoid system-level issues. Additional resources A Guide to Unexpected System Restarts 19.3. Displaying various indicators in the crash utility Use the crash utility to display various indicators, such as a kernel message buffer, a backtrace, a process status, virtual memory information, and open files. Displaying the message buffer To display the kernel message buffer, type the log command at the interactive prompt: Type help log for more information about the command usage. Note The kernel message buffer includes the most essential information about the system crash. It is always dumped first into the vmcore-dmesg.txt file. If you fail to obtain the full vmcore file, for example, due to insufficient space on the target location, you can obtain the required information from the kernel message buffer. By default, vmcore-dmesg.txt is placed in the /var/crash/ directory. Displaying a backtrace To display the kernel stack trace, use the bt command. Type bt <pid> to display the backtrace of a specific process, or type help bt for more information about bt usage. Displaying a process status To display the status of processes in the system, use the ps command. Use ps <pid> to display the status of a single specific process. Use help ps for more information about ps usage.
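The display commands in this section can also be run non-interactively, which is useful for collecting a standard set of information from every vmcore. The following is a minimal sketch that reuses the paths from the earlier example; the -i option instructs crash to read its commands from a file.
# run a fixed set of crash commands and save the output for later review
cat > crash-cmds.txt <<'EOF'
log
bt
ps
exit
EOF
crash -i crash-cmds.txt \
  /usr/lib/debug/lib/modules/5.14.0-1.el9.x86_64/vmlinux \
  /var/crash/127.0.0.1-2021-09-13-14:05:33/vmcore > crash-report.txt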
Displaying virtual memory information To display basic virtual memory information, type the vm command at the interactive prompt. Use vm <pid> to display information about a single specific process, or use help vm for more information about vm usage. Displaying open files To display information about open files, use the files command. Use files <pid> to display files opened by only one selected process, or use help files for more information about files usage. 19.4. Using Kernel Oops Analyzer The Kernel Oops Analyzer tool analyzes the crash dump by comparing the oops messages with known issues in the knowledge base. Prerequisites You have an oops message available to provide as input to the Kernel Oops Analyzer. Procedure Access the Kernel Oops Analyzer tool. To diagnose a kernel crash issue, upload a kernel oops log generated in vmcore . Alternatively, you can diagnose a kernel crash issue by providing a text message or a vmcore-dmesg.txt file as input. Click DETECT to compare the oops message against known solutions, based on information from makedumpfile . Additional resources The Kernel Oops Analyzer article 19.5. The Kdump Helper tool The Kdump Helper tool helps you set up kdump using the information that you provide. Kdump Helper generates a configuration script based on your preferences. Running the generated script on your server sets up the kdump service. Additional resources Kdump Helper | [
"subscription-manager repos --enable baseos repository",
"subscription-manager repos --enable appstream repository",
"subscription-manager repos --enable rhel-9-for-x86_64-baseos-debug-rpms",
"dnf install crash",
"dnf install kernel-debuginfo",
"crash /usr/lib/debug/lib/modules/5.14.0-1.el9.x86_64/vmlinux /var/crash/127.0.0.1-2021-09-13-14:05:33/vmcore",
"WARNING: kernel relocated [202MB]: patching 90160 gdb minimal_symbol values KERNEL: /usr/lib/debug/lib/modules/5.14.0-1.el9.x86_64/vmlinux DUMPFILE: /var/crash/127.0.0.1-2021-09-13-14:05:33/vmcore [PARTIAL DUMP] CPUS: 2 DATE: Mon Sep 13 14:05:16 2021 UPTIME: 01:03:57 LOAD AVERAGE: 0.00, 0.00, 0.00 TASKS: 586 NODENAME: localhost.localdomain RELEASE: 5.14.0-1.el9.x86_64 VERSION: #1 SMP Wed Aug 29 11:51:55 UTC 2018 MACHINE: x86_64 (2904 Mhz) MEMORY: 2.9 GB PANIC: \"sysrq: SysRq : Trigger a crash\" PID: 10635 COMMAND: \"bash\" TASK: ffff8d6c84271800 [THREAD_INFO: ffff8d6c84271800] CPU: 1 STATE: TASK_RUNNING (SYSRQ) crash>",
"crash> exit ~]#",
"crash> log ... several lines omitted EIP: 0060:[<c068124f>] EFLAGS: 00010096 CPU: 2 EIP is at sysrq_handle_crash+0xf/0x20 EAX: 00000063 EBX: 00000063 ECX: c09e1c8c EDX: 00000000 ESI: c0a09ca0 EDI: 00000286 EBP: 00000000 ESP: ef4dbf24 DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068 Process bash (pid: 5591, ti=ef4da000 task=f196d560 task.ti=ef4da000) Stack: c068146b c0960891 c0968653 00000003 00000000 00000002 efade5c0 c06814d0 <0> fffffffb c068150f b7776000 f2600c40 c0569ec4 ef4dbf9c 00000002 b7776000 <0> efade5c0 00000002 b7776000 c0569e60 c051de50 ef4dbf9c f196d560 ef4dbfb4 Call Trace: [<c068146b>] ? __handle_sysrq+0xfb/0x160 [<c06814d0>] ? write_sysrq_trigger+0x0/0x50 [<c068150f>] ? write_sysrq_trigger+0x3f/0x50 [<c0569ec4>] ? proc_reg_write+0x64/0xa0 [<c0569e60>] ? proc_reg_write+0x0/0xa0 [<c051de50>] ? vfs_write+0xa0/0x190 [<c051e8d1>] ? sys_write+0x41/0x70 [<c0409adc>] ? syscall_call+0x7/0xb Code: a0 c0 01 0f b6 41 03 19 d2 f7 d2 83 e2 03 83 e0 cf c1 e2 04 09 d0 88 41 03 f3 c3 90 c7 05 c8 1b 9e c0 01 00 00 00 0f ae f8 89 f6 <c6> 05 00 00 00 00 01 c3 89 f6 8d bc 27 00 00 00 00 8d 50 d0 83 EIP: [<c068124f>] sysrq_handle_crash+0xf/0x20 SS:ESP 0068:ef4dbf24 CR2: 0000000000000000",
"crash> bt PID: 5591 TASK: f196d560 CPU: 2 COMMAND: \"bash\" #0 [ef4dbdcc] crash_kexec at c0494922 #1 [ef4dbe20] oops_end at c080e402 #2 [ef4dbe34] no_context at c043089d #3 [ef4dbe58] bad_area at c0430b26 #4 [ef4dbe6c] do_page_fault at c080fb9b #5 [ef4dbee4] error_code (via page_fault) at c080d809 EAX: 00000063 EBX: 00000063 ECX: c09e1c8c EDX: 00000000 EBP: 00000000 DS: 007b ESI: c0a09ca0 ES: 007b EDI: 00000286 GS: 00e0 CS: 0060 EIP: c068124f ERR: ffffffff EFLAGS: 00010096 #6 [ef4dbf18] sysrq_handle_crash at c068124f #7 [ef4dbf24] __handle_sysrq at c0681469 #8 [ef4dbf48] write_sysrq_trigger at c068150a #9 [ef4dbf54] proc_reg_write at c0569ec2 #10 [ef4dbf74] vfs_write at c051de4e #11 [ef4dbf94] sys_write at c051e8cc #12 [ef4dbfb0] system_call at c0409ad5 EAX: ffffffda EBX: 00000001 ECX: b7776000 EDX: 00000002 DS: 007b ESI: 00000002 ES: 007b EDI: b7776000 SS: 007b ESP: bfcb2088 EBP: bfcb20b4 GS: 0033 CS: 0073 EIP: 00edc416 ERR: 00000004 EFLAGS: 00000246",
"crash> ps PID PPID CPU TASK ST %MEM VSZ RSS COMM > 0 0 0 c09dc560 RU 0.0 0 0 [swapper] > 0 0 1 f7072030 RU 0.0 0 0 [swapper] 0 0 2 f70a3a90 RU 0.0 0 0 [swapper] > 0 0 3 f70ac560 RU 0.0 0 0 [swapper] 1 0 1 f705ba90 IN 0.0 2828 1424 init ... several lines omitted 5566 1 1 f2592560 IN 0.0 12876 784 auditd 5567 1 2 ef427560 IN 0.0 12876 784 auditd 5587 5132 0 f196d030 IN 0.0 11064 3184 sshd > 5591 5587 2 f196d560 RU 0.0 5084 1648 bash",
"crash> vm PID: 5591 TASK: f196d560 CPU: 2 COMMAND: \"bash\" MM PGD RSS TOTAL_VM f19b5900 ef9c6000 1648k 5084k VMA START END FLAGS FILE f1bb0310 242000 260000 8000875 /lib/ld-2.12.so f26af0b8 260000 261000 8100871 /lib/ld-2.12.so efbc275c 261000 262000 8100873 /lib/ld-2.12.so efbc2a18 268000 3ed000 8000075 /lib/libc-2.12.so efbc23d8 3ed000 3ee000 8000070 /lib/libc-2.12.so efbc2888 3ee000 3f0000 8100071 /lib/libc-2.12.so efbc2cd4 3f0000 3f1000 8100073 /lib/libc-2.12.so efbc243c 3f1000 3f4000 100073 efbc28ec 3f6000 3f9000 8000075 /lib/libdl-2.12.so efbc2568 3f9000 3fa000 8100071 /lib/libdl-2.12.so efbc2f2c 3fa000 3fb000 8100073 /lib/libdl-2.12.so f26af888 7e6000 7fc000 8000075 /lib/libtinfo.so.5.7 f26aff2c 7fc000 7ff000 8100073 /lib/libtinfo.so.5.7 efbc211c d83000 d8f000 8000075 /lib/libnss_files-2.12.so efbc2504 d8f000 d90000 8100071 /lib/libnss_files-2.12.so efbc2950 d90000 d91000 8100073 /lib/libnss_files-2.12.so f26afe00 edc000 edd000 4040075 f1bb0a18 8047000 8118000 8001875 /bin/bash f1bb01e4 8118000 811d000 8101873 /bin/bash f1bb0c70 811d000 8122000 100073 f26afae0 9fd9000 9ffa000 100073 ... several lines omitted",
"crash> files PID: 5591 TASK: f196d560 CPU: 2 COMMAND: \"bash\" ROOT: / CWD: /root FD FILE DENTRY INODE TYPE PATH 0 f734f640 eedc2c6c eecd6048 CHR /pts/0 1 efade5c0 eee14090 f00431d4 REG /proc/sysrq-trigger 2 f734f640 eedc2c6c eecd6048 CHR /pts/0 10 f734f640 eedc2c6c eecd6048 CHR /pts/0 255 f734f640 eedc2c6c eecd6048 CHR /pts/0"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_monitoring_and_updating_the_kernel/analyzing-a-core-dump_managing-monitoring-and-updating-the-kernel |
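The note in section 19.2 above mentions that crash can also be used, with caution, to debug a live system. The following sketch is illustrative only and is not part of the Red Hat procedure: it assumes that the kernel-debuginfo package for the running kernel is installed under the /usr/lib/debug layout shown earlier, and it shows one way to confirm the match before starting crash against the live kernel.

uname -r                                             # running kernel version, for example 5.14.0-1.el9.x86_64
rpm -q kernel-debuginfo                              # the installed debuginfo package should list the same version
crash /usr/lib/debug/lib/modules/$(uname -r)/vmlinux # omitting the vmcore argument analyzes the live system; use with caution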
Backup and restore | Backup and restore Red Hat Advanced Cluster Security for Kubernetes 4.7 Backing up and restoring Red Hat Advanced Cluster Security for Kubernetes Red Hat OpenShift Documentation Team | [
"export ROX_API_TOKEN=<api_token>",
"export ROX_ENDPOINT=<address>:<port_number>",
"roxctl central backup 1",
"export ROX_ENDPOINT=<address>:<port_number>",
"roxctl -p <admin_password> central backup 1",
"oc get central -n _<central-namespace>_ _<central-name>_ -o yaml > central-cr.yaml",
"oc get secret -n _<central-namespace>_ central-tls -o json | jq 'del(.metadata.ownerReferences)' > central-tls.json",
"oc get secret -n _<central-namespace>_ central-htpasswd -o json | jq 'del(.metadata.ownerReferences)' > central-htpasswd.json",
"helm get values --all -n _<central-namespace>_ _<central-helm-release>_ -o yaml > central-values-backup.yaml",
"export ROX_API_TOKEN=<api_token>",
"export ROX_ENDPOINT=<address>:<port_number>",
"roxctl central db restore <backup_file> 1",
"export ROX_ENDPOINT=<address>:<port_number>",
"roxctl -p <admin_password> \\ 1 central db restore <backup_file> 2",
"roxctl central generate interactive",
"Enter path to the backup bundle from which to restore keys and certificates (optional): _<backup-file-path>_",
"./central-bundle/central/scripts/setup.sh",
"cat central-bundle/password",
"oc apply -f central-tls.json",
"oc apply -f central-htpasswd.json",
"oc apply -f central-cr.yaml",
"roxctl central generate k8s pvc --backup-bundle _<path-to-backup-file>_ --output-format \"helm-values\"",
"helm install -n stackrox --create-namespace stackrox-central-services rhacs/central-services -f central-values-backup.yaml -f central-bundle/values-private.yaml"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html-single/backup_and_restore/index |
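The roxctl central backup commands above perform a single on-demand backup. As an illustrative sketch only, the same command could be scheduled from cron on an administration host that has roxctl installed; the schedule, the /var/backups/rhacs directory, the /etc/rhacs/backup.env file that exports ROX_API_TOKEN and ROX_ENDPOINT, and the endpoint itself are all assumptions, not values from this guide. The cd is there because roxctl writes the generated backup file into the directory it is run from.

# Illustrative cron entry (assumed paths): nightly RHACS backup at 02:00
0 2 * * * . /etc/rhacs/backup.env && cd /var/backups/rhacs && roxctl central backup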
15.3. About Synchronized Attributes | 15.3. About Synchronized Attributes Identity Management synchronizes a subset of user attributes between IdM and Active Directory user entries. Any other attributes present in the entry, either in Identity Management or in Active Directory, are ignored by synchronization. Note Most POSIX attributes are not synchronized. Although there are significant schema differences between the Active Directory LDAP schema and the 389 Directory Server LDAP schema used by Identity Management, there are many attributes that are the same. These attributes are simply synchronized between the Active Directory and IdM user entries, with no changes to the attribute name or value format. User Schema That Are the Same in Identity Management and Windows Servers cn [5] physicalDeliveryOfficeName description postOfficeBox destinationIndicator postalAddress facsimileTelephoneNumber postalCode givenname registeredAddress homePhone sn homePostalAddress st initials street l telephoneNumber mail teletexTerminalIdentifier mobile telexNumber o title ou usercertificate pager x121Address Some attributes have different names but still have direct parity between IdM (which uses 389 Directory Server) and Active Directory. These attributes are mapped by the synchronization process. Table 15.1. User Schema Mapped between Identity Management and Active Directory Identity Management Active Directory cn [a] name nsAccountLock userAccountControl ntUserDomainId sAMAccountName ntUserHomeDir homeDirectory ntUserScriptPath scriptPath ntUserLastLogon lastLogon ntUserLastLogoff lastLogoff ntUserAcctExpires accountExpires ntUserCodePage codePage ntUserLogonHours logonHours ntUserMaxStorage maxStorage ntUserProfile profilePath ntUserParms userParameters ntUserWorkstations userWorkstations [a] The cn is mapped directly ( cn to cn ) when syncing from Identity Management to Active Directory. When syncing from Active Directory, cn is mapped from the name attribute in Active Directory to the cn attribute in Identity Management. 15.3.1. User Schema Differences between Identity Management and Active Directory Even though attributes may be successfully synced between Active Directory and IdM, there may still be differences in how Active Directory and Identity Management define the underlying X.500 object classes. This could lead to differences in how the data are handled in the different LDAP services. This section describes the differences in how Active Directory and Identity Management handle some of the attributes which can be synchronized between the two domains. 15.3.1.1. Values for cn Attributes In 389 Directory Server, the cn attribute can be multi-valued, while in Active Directory this attribute must have only a single value. When the Identity Management cn attribute is synchronized, only one value is sent to the Active Directory peer. What this means for synchronization is that, potentially, if a cn value is added to an Active Directory entry and that value is not one of the values for cn in Identity Management, then all of the Identity Management cn values are overwritten with the single Active Directory value. One other important difference is that Active Directory uses the cn attribute as its naming attribute, whereas Identity Management uses uid . This means that there is the potential to rename the entry entirely (and accidentally) if the cn attribute is edited in Identity Management. 
If that cn change is written over to the Active Directory entry, then the entry is renamed, and the new named entry is written back over to Identity Management. 15.3.1.2. Values for street and streetAddress Active Directory uses the attribute streetAddress for a user's postal address; this is the way that 389 Directory Server uses the street attribute. There are two important differences in the way that Active Directory and Identity Management use the streetAddress and street attributes, respectively: In 389 Directory Server, streetAddress is an alias for street . Active Directory also has the street attribute, but it is a separate attribute that can hold an independent value, not an alias for streetAddress . Active Directory defines both streetAddress and street as single-valued attributes, while 389 Directory Server defines street as a multi-valued attribute, as specified in RFC 4519. Because of the different ways that 389 Directory Server and Active Directory handle streetAddress and street attributes, there are two rules to follow when setting address attributes in Active Directory and Identity Management: The synchronization process maps streetAddress in the Active Directory entry to street in Identity Management. To avoid conflicts, the street attribute should not be used in Active Directory. Only one Identity Management street attribute value is synced to Active Directory. If the streetAddress attribute is changed in Active Directory and the new value does not already exist in Identity Management, then all street attribute values in Identity Management are replaced with the new, single Active Directory value. 15.3.1.3. Constraints on the initials Attribute For the initials attribute, Active Directory imposes a maximum length constraint of six characters, but 389 Directory Server does not have a length limit. If an initials attribute longer than six characters is added to Identity Management, the value is trimmed when it is synchronized with the Active Directory entry. 15.3.1.4. Requiring the surname (sn) Attribute Active Directory allows person entries to be created without a surname attribute. However, RFC 4519 defines the person object class as requiring a surname attribute, and this is the definition used in Directory Server. If an Active Directory person entry is created without a surname attribute, that entry will not be synced over to IdM since it fails with an object class violation. 15.3.2. Active Directory Entries and RFC 2307 Attributes Windows uses unique, random security IDs (SIDs) to identify users. These SIDs are assigned in blocks or ranges, identifying different system user types within the Windows domain. When users are synchronized between Identity Management and Active Directory, Windows SIDs for users are mapped to the Unix UIDs used by the Identity Management entry. Another way of saying this is that the Windows SID is the only ID within the Windows entry which is used as an identifier in the corresponding Unix entry, and then it is used in a mapping. When Active Directory domains interact with Unix-style applications or domains, then the Active Directory domain may use Services for Unix or IdM for Unix to enable Unix-style uidNumber and gidNumber attributes. This allows Windows user entries to follow the specifications for those attributes in RFC 2307 . However, the uidNumber and gidNumber attributes are not actually used as the uidNumber and gidNumber attributes for the Identity Management entry. 
The Identity Management uidNumber and gidNumber attributes are generated when the Windows user is synced over. Note The uidNumber and gidNumber attributes defined and used in Identity Management are not the same uidNumber and gidNumber attributes defined and used in the Active Directory entry, and the numbers are not related. [5] The cn is treated differently than other synced attributes. It is mapped directly ( cn to cn ) when syncing from Identity Management to Active Directory. When syncing from Active Directory to Identity Management, however, cn is mapped from the name attribute on Windows to the cn attribute in Identity Management. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/about-sync-schema |
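To see how the attribute mappings in Table 15.1 look in practice, you can read the same synchronized user from both directories and compare the attributes. The following ldapsearch sketch is illustrative only: the host names, bind identities, base DNs, and the jsmith user are placeholder assumptions, not values from this guide.

# Query the Identity Management (389 Directory Server) copy of the entry
ldapsearch -x -H ldap://ipa.example.com -D "cn=Directory Manager" -W -b "cn=users,cn=accounts,dc=example,dc=com" "(uid=jsmith)" cn sn street nsAccountLock uidNumber gidNumber
# Query the Active Directory copy; note the mapped names: cn <-> name, street <-> streetAddress, nsAccountLock <-> userAccountControl
ldapsearch -x -H ldap://ad.example.com -D "administrator@ad.example.com" -W -b "cn=Users,dc=ad,dc=example,dc=com" "(sAMAccountName=jsmith)" name streetAddress sAMAccountName userAccountControl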
Chapter 14. Managing vulnerabilities | Chapter 14. Managing vulnerabilities 14.1. Vulnerability management overview Security vulnerabilities in your environment might be exploited by an attacker to perform unauthorized actions such as carrying out a denial of service attack, executing remote code, or gaining unauthorized access to sensitive data. Therefore, the management of vulnerabilities is a foundational step towards a successful Kubernetes security program. 14.1.1. Vulnerability management process Vulnerability management is a continuous process to identify and remediate vulnerabilities. Red Hat Advanced Cluster Security for Kubernetes helps you to facilitate a vulnerability management process. A successful vulnerability management program often includes the following critical tasks: Performing asset assessment Prioritizing the vulnerabilities Assessing the exposure Taking action Continuously reassessing assets Red Hat Advanced Cluster Security for Kubernetes helps organizations to perform continuous assessments on their OpenShift Container Platform and Kubernetes clusters. It provides organizations with the contextual information they need to prioritize and act on vulnerabilities in their environment more effectively. 14.1.1.1. Performing asset assessment Performing an assessment of an organization's assets involves the following actions: Identifying the assets in your environment Scanning these assets to identify known vulnerabilities Reporting on the vulnerabilities in your environment to impacted stakeholders When you install Red Hat Advanced Cluster Security for Kubernetes on your Kubernetes or OpenShift Container Platform cluster, it first aggregates the assets running inside your cluster to help you identify those assets. Important assets that should be monitored by the organization's vulnerability management process using RHACS include: Components : Components are software packages that may be used as part of an image or run on a node. Components are the lowest level where vulnerabilities are present. Therefore, organizations must upgrade, modify, or remove software components in some way to remediate vulnerabilities. Images : A collection of software components and code that create an environment to run an executable portion of code. Images are where you upgrade components to fix vulnerabilities. Nodes : A server used to manage and run applications using OpenShift or Kubernetes and the components that make up the OpenShift Container Platform or Kubernetes service. RHACS groups these assets into the following structures: Deployment : A definition of an application in Kubernetes that may run pods with containers based on one or many images. Namespace : A grouping of resources such as Deployments that support and isolate an application. Cluster : A group of nodes used to run applications using OpenShift or Kubernetes. RHACS scans the assets for known vulnerabilities and uses the Common Vulnerabilities and Exposures (CVE) data to assess the impact of a known vulnerability. 14.1.1.2. Prioritizing the vulnerabilities Answer the following questions to prioritize the vulnerabilities in your environment for action and investigation: How important is an affected asset for your organization? 
How severe does a vulnerability need to be for investigation? Can the vulnerability be fixed by a patch for the affected software component? Does the existence of the vulnerability violate any of your organization's security policies? The answers to these questions help security and development teams decide if they want to gauge the exposure of a vulnerability. Red Hat Advanced Cluster Security for Kubernetes provides you the means to facilitate the prioritization of the vulnerabilities in your applications and components. 14.1.1.3. Assessing the exposure To assess your exposure to a vulnerability, answer the following questions: Is your application impacted by a vulnerability? Is the vulnerability mitigated by some other factor? Are there any known threats that could lead to the exploitation of this vulnerability? Are you using the software package which has the vulnerability? Is spending time on a specific vulnerability and the software package worth it? Take some of the following actions based on your assessment: Consider marking the vulnerability as a false positive if you determine that there is no exposure or that the vulnerability does not apply in your environment. Consider if you would prefer to remediate, mitigate or accept the risk if you are exposed. Consider if you want to remove or change the software package to reduce your attack surface. 14.1.1.4. Taking action Once you have decided to take action on a vulnerability, you can take one of the following actions: Remediate the vulnerability Mitigate and accept the risk Accept the risk Mark the vulnerability as a false positive You can remediate vulnerabilities by performing one of the following actions: Remove a software package Update a software package to a non-vulnerable version 14.2. Viewing and addressing vulnerabilities Common vulnerability management tasks involve identifying and prioritizing vulnerabilities, remedying them, and monitoring for new threats. Historically, RHACS provided a view of vulnerabilities discovered in your system in the vulnerability management dashboard. The dashboard is deprecated in RHACS 4.5 and will be removed in a future release. For more information about the dashboard, see Using the vulnerability management dashboard . 14.2.1. Prioritizing and managing scanned CVEs across images and deployments By viewing the Workload CVEs page, you can get information about the vulnerabilities in applications running on clusters in your system. You can view vulnerability information across images and deployments. The Workload CVEs page provides more advanced filtering capabilities than the dashboard, including the ability to view images and deployments with vulnerabilities and filter based on image, deployment, namespace, cluster, CVE, component, and component source. Procedure In the RHACS portal, click Vulnerability Management Workload CVEs . Choose the appropriate method to navigate through the images and deployments from the drop-down list, which is in the upper left of the page: To view the images and deployments with observed CVEs, select Image vulnerabilities . To view the images and deployments without observed CVEs, select Images without vulnerabilities . Optional: Choose the appropriate method to re-organize the information in the Workload CVEs page: To sort the table in ascending or descending order, select a column heading. To filter the table, use the filter bar. To select the categories that you want to display in the table, perform the following steps: Click Manage columns . 
Choose the appropriate method to manage the columns: To view all the categories, click Select all . To reset to the default categories, click Reset to default . To view only the selected categories, select the one or more categories that you want to view. To filter CVEs based on an entity, select the appropriate filters and attributes. To select multiple entities and attributes, click the right arrow icon to add another criteria. Depending on your choices, enter the appropriate information such as text, or select a date or object. The filter entities and attributes are listed in the following table. Table 14.1. CVE filtering Entity Attributes Image Name : The name of the image. Operating system : The operating system of the image. Tag : The tag for the image. Label : The label for the image. Registry : The registry where the image is located. CVE Name : The name of the CVE. Discovered time : The date when RHACS discovered the CVE. CVSS : The severity level for the CVE. The following values are associated with the severity level for the CVE: is greater than is greater than or equal to is equal to is less than or equal to is less than Image Component Name : The name of the image component, for example, activerecord-sql-server-adapter . Source : OS Python Java Ruby Node.js Go Dotnet Core Runtime Infrastructure Version : Version of the image component; for example, 3.4.21 . You can use this to search for a specific version of a component, for example, in conjunction with a component name. Deployment Name : Name of the deployment. Label : Label for the deployment. Annotation : The annotation for the deployment. Namespace Name : The name of the namespace. Label : The label for the namespace. Annotation : The annotation for the namespace. Cluster Name : The name of the cluster. Label : The label for the cluster. Type : The cluster type, for example, OCP. Platform type : The platform type, for example, OpenShift 4 cluster. To display a list of namespaces sorted according to the risk priority, click Prioritize by namespace view . You can use this view to quickly identify and address the most critical areas. In this view, click <number> deployments in a table row to return to the workload CVE list view, with filters applied to show only deployments, images and CVEs for the selected namespace. To apply the default filters, click Default filters . You can select filters for CVE severity and CVE status that are automatically applied when you visit the Workload CVEs page. These filters only apply to this page, and are applied when you visit the page from another section of the RHACS web portal or from a bookmarked URL. They are saved in the local storage of your browser. To filter the table based on the severity of a CVE, from the CVE severity drop-down list, select one or more severity levels. The following values are associated with the severity of a CVE: Critical Important Moderate Low To filter the table based on the status of a CVE, from the CVE status drop-down list, select one or more statuses. The following values are associated with the status of a CVE: Fixable Not fixable Note The Filtered view icon indicates that the displayed results were filtered based on the criteria that you selected. You can click Clear filters to remove all filters, or remove individual filters by clicking on them. In the list of results, click a CVE, image name, or deployment name to view more information about the item. 
For example, depending on the item type, you can view the following information: Whether a CVE is fixable Whether an image is active The Dockerfile line in the image that contains the CVE External links to information about the CVE in Red Hat and other CVE databases 14.2.1.1. Analyze images and deployments with observed CVEs When you select Image vulnerabilities , the Workload CVEs page shows the images and deployments in which Red Hat Advanced Cluster Security for Kubernetes (RHACS) has discovered CVEs. 14.2.1.1.1. CVEs tab The CVEs view organizes information into the following groups: CVE : Displays a unique identifier for Common Vulnerabilities and Exposures (CVE), each representing a specific vulnerability, to track and analyze it in detail. Images by severity : Groups images based on the severity level of the associated vulnerabilities. Top CVSS : Displays the highest CVSS score for each CVE across images to highlight the vulnerabilities with the most severe impact. Top NVD CVSS : Shows the highest severity scores from the National Vulnerability Database (NVD) to enable standardized impact assessments. Note You can see the Top NVD CVSS column only if you have enabled Scanner V4. Affected images : Displays the number of container images affected by specific CVEs to assess the scope of vulnerabilities. First discovered : Shows the date each vulnerability was first discovered in the environment to measure the duration of its exposure. Published : Indicates when the CVE was publicly disclosed. To review and triage the details associated with a CVE, click on the CVE. A window opens with information about the vulnerabilities associated with the CVE. 14.2.1.1.2. Images tab The images view organizes the information into the following groups: Image : Displays the name or identifier of each container image. CVEs by severity : Groups the vulnerabilities associated with each image based on their severity. Operating system : Highlights the operating system that the image uses and helps identify potential vulnerabilities specific to that operating system. Deployments : Shows all deployments where the image is actively running so you can assess the impact and prioritize remediation based on usage. Age : Shows how long the image has been in use and provides information about potential risks associated with outdated images. Scan time : Shows the timestamp of the last scan. To review and triage the details associated with an image, click on the image. A window opens with information about the vulnerabilities associated with the image. 14.2.1.1.3. Deployments tab The deployments view organizes information into the following groups: Deployment : Indicates the name or identifier of each deployment. CVEs by severity : Groups the vulnerabilities associated with each deployment based on their severity. Cluster : Displays the cluster in which each deployment is located. Namespace : Displays the namespace of each deployment. Images : Displays the container images that the deployment uses. First discovered : Shows the date on which the vulnerabilities associated with a deployment were first discovered. To review and triage the details associated with a deployment, click on the deployment. A window opens with information about the vulnerabilities associated with the deployment. 14.2.1.2. 
Analyze images and deployments without observed CVEs When you select Images without vulnerabilities , the Workload CVEs page shows the images that meet at least one of the following conditions: Images that do not have CVEs Images that report a scanner error that may result in a false negative of no CVEs Note An image that actually contains vulnerabilities can appear in this list inadvertently. For example, if Scanner was able to scan the image and it is known to Red Hat Advanced Cluster Security for Kubernetes (RHACS), but the scan was not successfully completed, RHACS cannot detect vulnerabilities. This scenario occurs if an image has an operating system that RHACS Scanner does not support. RHACS displays scan errors when you hover over an image in the image list or click the image name for more information. 14.2.1.2.1. Images tab The images view organizes the information into the following groups: Image : Displays the name or identifier of each container image. Operating system : Highlights the operating system that the image uses and helps identify potential vulnerabilities specific to that operating system. Deployments : Shows all deployments where the image is actively running so you can assess the impact and prioritize remediation based on usage. Age : Shows how long the image has been in use and provides information about potential risks associated with outdated images. Scan time : Shows the timestamp of the last scan. To review and triage the details associated with an image, click on the image. A window opens with information about the vulnerabilities associated with the image. 14.2.2. Deployments tab The deployments view organizes information into the following groups: Deployment : Indicates the name or identifier of each deployment. Cluster : Displays the cluster in which each deployment is located. Namespace : Displays the namespace of each deployment. Images : Displays the container images that the deployment uses. First discovered : Shows the date on which the vulnerabilities associated with a deployment were first discovered. To review and triage the details associated with a deployment, click on the deployment. A window opens with information about the vulnerabilities associated with the deployment. 14.2.3. Viewing Node CVEs You can identify vulnerabilities in your nodes by using RHACS. The vulnerabilities that are identified include the following: Vulnerabilities in core Kubernetes components Vulnerabilities in container runtimes such as Docker, CRI-O, runC, and containerd For more information about operating systems that RHACS can scan, see "Supported operating systems". Procedure In the RHACS portal, click Vulnerability Management Node CVEs . To view the data, do any of the following tasks: To view a list of all the CVEs affecting all of your nodes, select <number> CVEs . To view a list of nodes that contain CVEs, select <number> Nodes . Optional: To filter CVEs according to entity, select the appropriate filters and attributes. To add more filtering criteria, follow these steps: Select the entity or attribute from the list. Depending on your choices, enter the appropriate information such as text, or select a date or object. Click the right arrow icon. Optional: Select additional entities and attributes, and then click the right arrow icon to add them. The filter entities and attributes are listed in the following table. Table 14.2. CVE filtering Entity Attributes Node Name : The name of the node. 
Operating system : The operating system of the node, for example, Red Hat Enterprise Linux (RHEL). Label : The label of the node. Annotation : The annotation for the node. Scan time : The scan date of the node. CVE Name : The name of the CVE. Discovered time : The date when RHACS discovered the CVE. CVSS : The severity level for the CVE. The following values are associated with the severity level for the CVE: is greater than is greater than or equal to is equal to is less than or equal to is less than Node Component Name : The name of the component. Version : The version of the component, for example, 4.15.0-2024 . You can use this to search for a specific version of a component, for example, in conjunction with a component name. Cluster Name : The name of the cluster. Label : The label for the cluster. Type : The type of cluster, for example, OCP. Platform type : The type of platform, for example, OpenShift 4 cluster. Optional: To refine the list of results, do any of the following tasks: Click CVE severity , and then select one or more levels. Click CVE status , and then select Fixable or Not fixable . Optional: To view the details of the node and information about the CVEs according to the CVSS score and fixable CVEs for that node, click a node name in the list of nodes. 14.2.3.1. Disabling identifying vulnerabilities in nodes Identifying vulnerabilities in nodes is enabled by default. You can disable it from the RHACS portal. Procedure In the RHACS portal, go to Platform Configuration Integrations . Under Image Integrations , select StackRox Scanner . From the list of scanners, select StackRox Scanner to view its details. Click Edit . To use only the image scanner and not the node scanner, click Image Scanner . Click Save . Additional resources Supported operating systems 14.2.4. Viewing platform CVEs The platform CVEs page provides information about vulnerabilities in clusters in your system. Procedure Click Vulnerability Management Platform CVEs . You can filter CVEs by entity by selecting the appropriate filters and attributes. You can select multiple entities and attributes by clicking the right arrow icon to add another criteria. Depending on your choices, enter the appropriate information such as text, or select a date or object. The filter entities and attributes are listed in the following table. Table 14.3. CVE filtering Entity Attributes Cluster Name : The name of the cluster. Label : The label for the cluster. Type : The cluster type, for example, OCP. Platform type : The platform type, for example, OpenShift 4 cluster. CVE Name : The name of the CVE. Discovered time : The date when RHACS discovered the CVE. CVSS : The severity level for the CVE. You can select from the following options for the severity level: is greater than is greater than or equal to is equal to is less than or equal to is less than Type : The type of CVE: Kubernetes CVE Istio CVE OpenShift CVE To filter by CVE status, click CVE status and select Fixable or Not fixable . Note The Filtered view icon indicates that the displayed results were filtered based on the criteria that you selected. You can click Clear filters to remove all filters, or remove individual filters by clicking on them. In the list of results, click a CVE to view more information about the item. 
For example, you can view the following information if it is populated: Documentation for the CVE External links to information about the CVE in Red Hat and other CVE databases Whether the CVE is fixable or unfixable A list of affected clusters 14.2.5. Excluding CVEs You can exclude or ignore CVEs in RHACS by snoozing node and platform CVEs, and by deferring or marking node, platform, and image CVEs as false positives. You might want to exclude CVEs if you know that the CVE is a false positive or you have already taken steps to mitigate the CVE. Snoozed CVEs do not appear in vulnerability reports or trigger policy violations. You can snooze a CVE to ignore it globally for a specified period of time. Snoozing a CVE does not require approval. Note Snoozing node and platform CVEs requires that the ROX_VULN_MGMT_LEGACY_SNOOZE environment variable is set to true . Deferring or marking a CVE as a false positive is done through the exception management workflow. This workflow provides the ability to view pending, approved, and denied deferral and false positive requests. You can scope the CVE exception to a single image, all tags for a single image, or globally for all images. When approving or denying a request, you must add a comment. A CVE remains in the observed status until the exception request is approved. A pending request for deferral that is denied by another user is still visible in reports, policy violations, and other places in the system, but is indicated by a Pending exception label on the CVE when visiting Vulnerability Management Workload CVEs . An approved exception for a deferral or false positive has the following effects: Moves the CVE from the Observed tab to either the Deferred or False positives tab in Vulnerability Management Workload CVEs Prevents the CVE from triggering policy violations that are related to the CVE Prevents the CVE from showing up in automatically generated vulnerability reports 14.2.5.1. Snoozing platform and node CVEs You can snooze platform and node CVEs that do not relate to your infrastructure. You can snooze CVEs for 1 day, 1 week, 2 weeks, 1 month, or indefinitely, until you unsnooze them. Snoozing a CVE takes effect immediately and does not require an additional approval step. Note The ability to snooze a CVE is not enabled by default in the web portal or in the API. To enable the ability to snooze CVEs, set the runtime environment variable ROX_VULN_MGMT_LEGACY_SNOOZE to true . Procedure In the RHACS portal, do any of the following tasks: To view platform CVEs, click Vulnerability Management Platform CVEs . To view node CVEs, click Vulnerability Management Node CVEs . Select one or more CVEs. Select the appropriate method to snooze the CVE: If you selected a single CVE, click the overflow menu, , and then select Snooze CVE . If you selected multiple CVEs, click Bulk actions Snooze CVEs . Select the duration of time to snooze. Click Snooze CVEs . You receive a confirmation that you have requested to snooze the CVEs. 14.2.5.2. Unsnoozing platform and node CVEs You can unsnooze platform and node CVEs that you have previously snoozed. Note The ability to snooze a CVE is not enabled by default in the web portal or in the API. To enable the ability to snooze CVEs, set the runtime environment variable ROX_VULN_MGMT_LEGACY_SNOOZE to true . Procedure In the RHACS portal, do any of the following tasks: To view the list of platform CVEs, click Vulnerability Management Platform CVEs . To view the list of node CVEs, click Vulnerability Management Node CVEs . 
To view the list of snoozed CVEs, click Show snoozed CVEs in the header view. Select one or more CVEs from the list of snoozed CVEs. Select the appropriate method to unsnooze the CVE: If you selected a single CVE, click the overflow menu, , and then select Unsnooze CVE . If you selected multiple CVEs, click Bulk actions Unsnooze CVEs . Click Unsnooze CVEs again. You receive a confirmation that you have requested to unsnooze the CVEs. 14.2.5.3. Viewing snoozed CVEs You can view a list of platform and node CVEs that have been snoozed. Note The ability to snooze a CVE is not enabled by default in the web portal or in the API. To enable the ability to snooze CVEs, set the runtime environment variable ROX_VULN_MGMT_LEGACY_SNOOZE to true . Procedure In the RHACS portal, do any of the following tasks: To view the list of platform CVEs, click Vulnerability Management Platform CVEs . To view the list of node CVEs, click Vulnerability Management Node CVEs . Click Show snoozed CVEs to view the list. 14.2.5.4. Marking a vulnerability as a false positive globally You can create an exception for a vulnerability by marking it as a false positive globally, or across all images. You must get requests to mark a vulnerability as a false positive approved in the exception management workflow. Prerequisites You have the write permission for the VulnerabilityManagementRequests resource. Procedure In the RHACS portal, click Vulnerability Management Workload CVEs . Choose the appropriate method to mark the CVEs: If you want to mark a single CVE, perform the following steps: Find the row which contains the CVE that you want to take action on. Click the overflow menu, , for the CVE that you identified, and then select Mark as false positive . If you want to mark multiple CVEs, perform the following steps: Select each CVE. From the Bulk actions drop-down list, select Mark as false positives . Enter a rationale for requesting the exception. Optional: To review the CVEs that are included in the exception request, click CVE selections . Click Submit request . You receive a confirmation that you have requested an exception. Optional: To copy the approval link and share it with your organization's exception approver, click the copy icon. Click Close . 14.2.5.5. Marking a vulnerability as a false positive for an image or image tag To create an exception for a vulnerability, you can mark it as a false positive for a single image, or across all tags associated with an image. You must get requests to mark a vulnerability as a false positive approved in the exception management workflow. Prerequisites You have the write permission for the VulnerabilityManagementRequests resource. Procedure In the RHACS portal, click Vulnerability Management Workload CVEs . To view the list of images, click <number> Images . Find the row that lists the image that you want to mark as a false positive, and click the image name. Choose the appropriate method to mark the CVEs: If you want to mark a single CVE, perform the following steps: Find the row which contains the CVE that you want to take action on. Click the overflow menu, , for the CVE that you identified, and then select Mark as false positive . If you want to mark multiple CVEs, perform the following steps: Select each CVE. From the Bulk actions drop-down list, select Mark as false positives . Select the scope. You can select either all tags associated with the image or only the image. Enter a rationale for requesting the exception. 
Optional: To review the CVEs that are included in the exception request, click CVE selections . Click Submit request . You receive a confirmation that you have requested an exception. Optional: To copy the approval link and share it with your organization's exception approver, click the copy icon. Click Close . 14.2.5.6. Viewing deferred and false positive CVEs You can view the CVEs that have been deferred or marked as false positives by using the Workload CVEs page. Procedure To see CVEs that have been deferred or marked as false positives, with the exceptions approved by an approver, click Vulnerability Management Workload CVEs . Complete any of the following actions: To see CVEs that have been deferred, click the Deferred tab. To see CVEs that have been marked as false positives, click the False positives tab. Note To approve, deny, or change deferred or false positive CVEs, click Vulnerability Management Exception Management . Optional: To view additional information about the deferral or false positive, click View in the Request details column. The Exception Management page is displayed. 14.2.5.7. Deferring CVEs You can accept risk with or without mitigation and defer CVEs. You must get deferral requests approved in the exception management workflow. Prerequisites You have write permission for the VulnerabilityManagementRequests resource. Procedure In the RHACS portal, click Vulnerability Management Workload CVEs . Choose the appropriate method to defer a CVE: If you want to defer a single CVE, perform the following steps: Find the row which contains the CVE that you want to defer. Click the overflow menu, , for the CVE that you identified, and then click Defer CVE . If you want to defer multiple CVEs, perform the following steps: Select each CVE. Click Bulk actions Defer CVEs . Select the time period for the deferral. Enter a rationale for requesting the exception. Optional: To review the CVEs that are included in the exception request, click CVE selections . Click Submit request . You receive a confirmation that you have requested a deferral. Optional: To copy the approval link to share it with your organization's exception approver, click the copy icon. Click Close . 14.2.5.7.1. Configuring vulnerability exception expiration periods You can configure the time periods available for vulnerability management exceptions. These options are available when users request to defer a CVE. Prerequisites You have write permission for the VulnerabilityManagementRequests resource. Procedure In the RHACS portal, go to Platform Configuration Exception Configuration . You can configure expiration times that users can select when they request to defer a CVE. Enabling a time period makes it available to users and disabling it removes it from the user interface. 14.2.5.8. Reviewing and managing an exception request to defer or mark a CVE as false positive You can review, update, approve, or deny exception requests for deferring and marking CVEs as false positives. Prerequisites You have the write permission for the VulnerabilityManagementRequests resource. Procedure To view the list of pending requests, do any of the following tasks: Paste the approval link into your browser. Click Vulnerability Management Exception Management , and then click the request name in the Pending requests tab. Review the scope of the vulnerability and decide whether or not to approve it. 
Choose the appropriate option to manage a pending request: If you want to deny the request and return the CVE to observed status, click Deny request . Enter a rationale for the denial, and click Deny . If you want to approve the request, click Approve request . Enter a rationale for the approval, and click Approve . To cancel a request that you have created and return the CVE to observed status, click Cancel request . You can only cancel requests that you have created. To update the deferral time period or rationale for a request that you have created, click Update request . You can only update requests that you have created. After you make changes, click Submit request . You receive a confirmation that you have submitted a request. 14.2.6. Identifying Dockerfile lines in images that introduced components with CVEs You can identify specific Dockerfile lines in an image that introduced components with CVEs. Procedure To view a problematic line: In the RHACS portal, click Vulnerability Management Workload CVEs . Click the tab to view the type of CVEs. The following tabs are available: Observed Deferred False positives In the list of CVEs, click the CVE name to open the page containing the CVE details. The Affected components column lists the components that include the CVE. Expand the CVE to display additional information, including the Dockerfile line that introduced the component. 14.2.7. Finding a new component version The following procedure finds a new component version to upgrade to. Procedure In the RHACS portal, click Vulnerability Management Workload CVEs . Click <number> Images and select an image. To view additional information, locate the CVE and click the expand icon. The additional information includes the component that the CVE is in and the version in which the CVE is fixed, if it is fixable. Update your image to a later version. 14.2.8. Exporting workload vulnerabilities by using the API You can export workload vulnerabilities in Red Hat Advanced Cluster Security for Kubernetes by using the API. For these examples, workloads are composed of deployments and their associated images. The export uses the /v1/export/vuln-mgmt/workloads streaming API. It allows the combined export of deployments and images. The images payload contains the full vulnerability information. The output is streamed and has the following schema: {"result": {"deployment": {...}, "images": [...]}} ... {"result": {"deployment": {...}, "images": [...]}} The following examples assume that these environment variables have been set: ROX_API_TOKEN : API token with view permissions for the Deployment and Image resources ROX_ENDPOINT : Endpoint under which Central's API is available To export all workloads, enter the following command: $ curl -H "Authorization: Bearer $ROX_API_TOKEN" $ROX_ENDPOINT/v1/export/vuln-mgmt/workloads To export all workloads with a query timeout of 60 seconds, enter the following command: $ curl -H "Authorization: Bearer $ROX_API_TOKEN" $ROX_ENDPOINT/v1/export/vuln-mgmt/workloads?timeout=60 To export all workloads matching the query Deployment:app Namespace:default , enter the following command: $ curl -H "Authorization: Bearer $ROX_API_TOKEN" $ROX_ENDPOINT/v1/export/vuln-mgmt/workloads?query=Deployment%3Aapp%2BNamespace%3Adefault Additional resources Searching and filtering 14.2.8.1. 
Scanning inactive images Red Hat Advanced Cluster Security for Kubernetes (RHACS) scans all active (deployed) images every 4 hours and updates the image scan results to reflect the latest vulnerability definitions. You can also configure RHACS to scan inactive (not deployed) images automatically. Procedure In the RHACS portal, click Vulnerability Management Workload CVEs . Click Manage watched images . In the Image name field, enter the fully-qualified image name that begins with the registry and ends with the image tag, for example, docker.io/library/nginx:latest . Click Add image to watch list . Optional: To remove a watched image, locate the image in the Manage watched images window, and click Remove watch . Important In the RHACS portal, click Platform Configuration System Configuration to view the data retention configuration. All the data related to the image removed from the watched image list continues to appear in the RHACS portal for the number of days mentioned on the System Configuration page and is only removed after that period is over. Click Close to return to the Workload CVEs page. 14.3. Vulnerability reporting You can create and download an on-demand image vulnerability report from the Vulnerability Management Vulnerability Reporting menu in the RHACS web portal. This report contains a comprehensive list of common vulnerabilities and exposures in images and deployments, referred to as workload CVEs in RHACS. To share this report with auditors or internal stakeholders, you can schedule emails in RHACS or download the report and share it by using other methods. 14.3.1. Reporting vulnerabilities to teams As organizations must constantly reassess and report on their vulnerabilities, some organizations find it helpful to have scheduled communications to key stakeholders to help in the vulnerability management process. You can use Red Hat Advanced Cluster Security for Kubernetes to schedule these reoccurring communications through e-mail. These communications should be scoped to the most relevant information that the key stakeholders need. For sending these communications, you must consider the following questions: What schedule would have the most impact when communicating with the stakeholders? Who is the audience? Should you only send specific severity vulnerabilities in your report? Should you only send fixable vulnerabilities in your report? 14.3.2. Creating vulnerability management report configurations RHACS guides you through the process of creating a vulnerability management report configuration. This configuration determines the information that will be included in a report job that runs at a scheduled time or that you run on demand. Procedure In the RHACS portal, click Vulnerability Management Vulnerability Reporting . Click Create report . In the Configure report parameters page, provide the following information: Report name : Enter a name for your report configuration. Report description : Enter a text describing the report configuration. This is optional. CVE severity : Select the severity of common vulnerabilities and exposures (CVEs) that you want to include in the report configuration. CVE status : Select one or more CVE statuses. The following values are associated with the CVE status: Fixable Unfixable Image type : Select one or more image types. The following values are associated with image types: Deployed images Watched images CVEs discovered since : Select the time period for which you want to include the CVEs in the report configuration. 
Optional: Select the Include NVD CVSS checkbox, if you want to include the NVD CVSS column in the report configuration. Configure collection included : To configure at least one collection, do any of the following tasks: Select an existing collection that you want to include. To view the collection information, edit the collection, and get a preview of collection results, click View . When viewing the collection, entering text in the field searches for collections matching that text string. To create a new collection, click Create collection . Note For more information about collections, see "Creating and using deployment collections". To configure the delivery destinations and optionally set up a schedule for delivery, click . 14.3.2.1. Configuring delivery destinations and scheduling Configuring destinations and delivery schedules for vulnerability reports is optional, unless on the page, you selected the option to include CVEs that were discovered since the last scheduled report. If you selected that option, configuring destinations and delivery schedules for vulnerability reports is required. Procedure To configure destinations for delivery, in the Configure delivery destinations section, you can add a delivery destination and set up a schedule for reporting. To email reports, you must configure at least one email notifier. Select an existing notifier or create a new email notifier to send your report by email. For more information about creating an email notifier, see "Configuring the email plugin" in the "Additional resources" section. When you select a notifier, the email addresses configured in the notifier as Default recipients appear in the Distribution list field. You can add additional email addresses that are separated by a comma. A default email template is automatically applied. To edit this default template, perform the following steps: Click the edit icon and enter a customized subject and email body in the Edit tab. Click the Preview tab to see your proposed template. Click Apply to save your changes to the template. Note When reviewing the report jobs for a specific report, you can see whether the default template or a customized template was used when creating the report. In the Configure schedule section, select the frequency and day of the week for the report. Click to review your vulnerability report configuration and finish creating it. 14.3.2.2. Reviewing and creating the report configuration You can review the details of your vulnerability report configuration before creating it. Procedure In the Review and create section, you can review the report configuration parameters, delivery destination, email template that is used if you selected email delivery, delivery schedule, and report format. To make any changes, click Back to go to the section and edit the fields that you want to change. Click Create to create the report configuration and save it. 14.3.3. Vulnerability report permissions The ability to create, view, and download reports depends on the access control settings, or roles and permission sets, for your user account. For example, you can only view, create, and download reports for data that your user account has permission to access. In addition, the following restrictions apply: You can only download reports that you have generated; you cannot download reports generated by other users. Report permissions are restricted depending on the access settings for user accounts. 
If the access settings for your account change, old reports do not reflect the change. For example, if you are given new permissions and want to view vulnerability data that is now allowed by those permissions, you must create a new vulnerability report. 14.3.4. Editing vulnerability report configurations You can edit existing vulnerability report configurations from the list of report configurations, or by selecting an individual report configuration first. Procedure In the RHACS web portal, click Vulnerability Management Vulnerability Reporting . To edit an existing vulnerability report configuration, complete any of the following actions: Locate the report configuration that you want to edit in the list of report configurations. Click the overflow menu, , and then select Edit report . Click the report configuration name in the list of report configurations. Then, click Actions and select Edit report . Make changes to the report configuration and save. 14.3.5. Downloading vulnerability reports You can generate an on-demand vulnerability report and then download it. Note You can only download reports that you have generated; you cannot download reports generated by other users. Procedure In the RHACS web portal, click Vulnerability Management Vulnerability Reporting . In the list of report configurations, locate the report configuration that you want to use to create the downloadable report. Generate the vulnerability report by using one of the following methods: To generate the report from the list: Click the overflow menu, , and then select Generate download . The My active job status column displays the status of your report creation. After the Processing status goes away, you can download the report. To generate the report from the report window: Click the report configuration name to open the configuration detail window. Click Actions and select Generate download . To download the report, if you are viewing the list of report configurations, click the report configuration name to open it. Click All report jobs from the menu on the header. If the report is completed, click the Ready for download link in the Status column. The report is in .csv format and is compressed into a .zip file for download. 14.3.6. Sending vulnerability reports on-demand You can send vulnerability reports immediately, rather than waiting for the scheduled send time. Procedure In the RHACS web portal, click Vulnerability Management Vulnerability Reporting . In the list of report configurations, locate the report configuration for the report that you want to send. Click the overflow menu, , and then select Send report now . 14.3.7. Cloning vulnerability report configurations You can make copies of vulnerability report configurations by cloning them. This is useful when you want to reuse report configurations with minor changes, such as reporting vulnerabilities in different deployments or namespaces. Procedure In the RHACS web portal, click Vulnerability Management Vulnerability Reporting . Locate the report configuration that you want to clone in the list of report configurations. Click Clone report . Make any changes that you want to the report parameters and delivery destinations. Click Create . 14.3.8. Deleting vulnerability report configurations Deleting a report configuration deletes the configuration and any reports that were previously run using this configuration. Procedure In the RHACS web portal, click Vulnerability Management Vulnerability Reporting . 
Locate the report configuration that you want to delete in the list of reports. Click the overflow menu, , and then select Delete report . 14.3.9. Configuring vulnerability management report job retention settings You can configure settings that determine when vulnerability report job requests expire and other retention settings for report jobs. Note These settings do not affect the following vulnerability report jobs: Jobs in the WAITING or PREPARING state (unfinished jobs) The last successful scheduled report job The last successful on-demand emailed report job The last successful downloadable report job Downloadable report jobs for which the report file has not been deleted by either manual deletion or by configuring the downloadable report pruning settings Procedure In the RHACS web portal, go to Platform Configuration System Configuration . You can configure the following settings for vulnerability report jobs: Vulnerability report run history retention : The number of days that a record is kept of vulnerability report jobs that have been run. This setting controls how many days that report jobs are listed in the All report jobs tab under Vulnerability Management Vulnerability Reporting when a report configuration is selected. The entire report history after the exclusion date is deleted, with the exception of the following jobs: Unfinished jobs. Jobs for which prepared downloadable reports still exist in the system. The last successful report job for each job type (scheduled email, on-demand email, or download). This ensures users have information about the last run job for each type. Prepared downloadable vulnerability reports retention days : The number of days that prepared, on-demand downloadable vulnerability report jobs are available for download on the All report jobs tab under Vulnerability Management Vulnerability Reporting when a report configuration is selected. Prepared downloadable vulnerability reports limit : The limit, in MB, of space allocated to prepared downloadable vulnerability report jobs. After the limit is reached, the oldest report job in the download queue is removed. To change these values, click Edit , make your changes, and then click Save . 14.3.10. Additional resources Creating and using deployment collections Migration of access scopes to collections Configuring the email plugin 14.4. Using the vulnerability management dashboard (deprecated) Historically, RHACS has provided a view of vulnerabilities discovered in your system in the vulnerability management dashboard. With the dashboard, you can view vulnerabilities by image, node, or platform. You can also view vulnerabilities by clusters, namespaces, deployments, node components, and image components. The dashboard is deprecated in RHACS 4.5 and will be removed in a future release. Important To perform actions on vulnerabilities, such as view additional information about a vulnerability, defer a vulnerability, or mark a vulnerability as a false positive, click Vulnerability Management Workload CVEs . To review requests for deferring and marking CVEs as false positives, click Vulnerability Management Exception Management . 14.4.1. Viewing application vulnerabilities by using the dashboard You can view application vulnerabilities in Red Hat Advanced Cluster Security for Kubernetes by using the dashboard. Procedure In the RHACS portal, go to Vulnerability Management Dashboard . On the Dashboard view header, select Application & Infrastructure Namespaces or Deployments . 
From the list, search for and select the Namespace or Deployment you want to review. To get more information about the application, select an entity from Related entities on the right. 14.4.2. Viewing image vulnerabilities by using the dashboard You can view image vulnerabilities in Red Hat Advanced Cluster Security for Kubernetes by using the dashboard. Procedure In the RHACS portal, go to Vulnerability Management Dashboard . On the Dashboard view header, select <number> Images . From the list of images, select the image you want to investigate. You can also filter the list by performing one of the following steps: Enter Image in the search bar and then select the Image attribute. Enter the image name in the search bar. In the image details view, review the listed CVEs and prioritize taking action to address the impacted components. Select Components from Related entities on the right to get more information about all the components that are impacted by the selected image. Or select Components from the Affected components column under the Image findings section for a list of components affected by specific CVEs. 14.4.3. Viewing cluster vulnerabilities by using the dashboard You can view vulnerabilities in clusters by using Red Hat Advanced Cluster Security for Kubernetes. Procedure In the RHACS portal, go to Vulnerability Management Dashboard . On the Dashboard view header, select Application & Infrastructure Clusters . From the list of clusters, select the cluster you want to investigate. Review the cluster's vulnerabilities and prioritize taking action on the impacted nodes on the cluster. 14.4.4. Viewing node vulnerabilities by using the dashboard You can view vulnerabilities in specific nodes by using Red Hat Advanced Cluster Security for Kubernetes. Procedure In the RHACS portal, go to Vulnerability Management Dashboard . On the Dashboard view header, select Nodes . From the list of nodes, select the node you want to investigate. Review vulnerabilities for the selected node and prioritize taking action. To get more information about the affected components in a node, select Components from Related entities on the right. 14.4.5. Finding the most vulnerable image components by using the dashboard Use the Vulnerability Management view for identifying highly vulnerable image components. Procedure Go to the RHACS portal and click Vulnerability Management Dashboard from the navigation menu. From the Vulnerability Management view header, select Application & Infrastructure Image Components . In the Image Components view, select the Image CVEs column header to arrange the components in descending order (highest first) based on the CVEs count. 14.4.6. Viewing details only for fixable CVEs by using the dashboard Use the Vulnerability Management view to filter and show only the fixable CVEs. Procedure In the RHACS portal, go to Vulnerability Management Dashboard . From the Vulnerability Management view header, under Filter CVEs , click Fixable . 14.4.7. Identifying the operating system of the base image by using the dashboard Use the Vulnerability Management view to identify the operating system of the base image. Procedure Go to the RHACS portal and click Vulnerability Management Dashboard from the navigation menu. From the Vulnerability Management view header, select Images . View the base operating system (OS) and OS version for all images under the Image OS column. Select an image to view its details. 
The base operating system is also available under the Image Summary Details and Metadata section. Note Red Hat Advanced Cluster Security for Kubernetes lists the Image OS as unknown when either: The operating system information is not available, or If the image scanner in use does not provide this information. Docker Trusted Registry, Google Container Registry, and Anchore do not provide this information. 14.4.8. Identifying top risky objects by using the dashboard Use the Vulnerability Management view for identifying the top risky objects in your environment. The Top Risky widget displays information about the top risky images, deployments, clusters, and namespaces in your environment. The risk is determined based on the number of vulnerabilities and their CVSS scores. Procedure Go to the RHACS portal and click Vulnerability Management Dashboard from the navigation menu. Select the Top Risky widget header to choose between riskiest images, deployments, clusters, and namespaces. The small circles on the chart represent the chosen object (image, deployment, cluster, namespace). Hover over the circles to see an overview of the object they represent. And select a circle to view detailed information about the selected object, its related entities, and the connections between them. For example, if you are viewing Top Risky Deployments by CVE Count and CVSS score , each circle on the chart represents a deployment. When you hover over a deployment, you see an overview of the deployment, which includes deployment name, name of the cluster and namespace, severity, risk priority, CVSS, and CVE count (including fixable). When you select a deployment, the Deployment view opens for the selected deployment. The Deployment view shows in-depth details of the deployment and includes information about policy violations, common vulnerabilities, CVEs, and riskiest images for that deployment. Select View All on the widget header to view all objects of the chosen type. For example, if you chose Top Risky Deployments by CVE Count and CVSS score , you can select View All to view detailed information about all deployments in your infrastructure. 14.4.9. Identifying top riskiest images and components by using the dashboard Similar to the Top Risky , the Top Riskiest widget lists the names of the top riskiest images and components. This widget also includes the total number of CVEs and the number of fixable CVEs in the listed images. Procedure Go to the RHACS portal and click Vulnerability Management from the navigation menu. Select the Top Riskiest Images widget header to choose between the riskiest images and components. If you are viewing Top Riskiest Images : When you hover over an image in the list, you see an overview of the image, which includes image name, scan time, and the number of CVEs along with severity (critical, high, medium, and low). When you select an image, the Image view opens for the selected image. The Image view shows in-depth details of the image and includes information about CVEs by CVSS score, top riskiest components, fixable CVEs, and Dockerfile for the image. Select View All on the widget header to view all objects of the chosen type. For example, if you chose Top Riskiest Components , you can select View All to view detailed information about all components in your infrastructure. 14.4.10. Viewing the Dockerfile for an image by using the dashboard Use the Vulnerability Management view to find the root cause of vulnerabilities in an image. 
You can view the Dockerfile and find exactly which command in the Dockerfile introduced the vulnerabilities and all components that are associated with that single command. The Dockerfile section shows information about: All the layers in the Dockerfile The instructions and their value for each layer The components included in each layer The number of CVEs in components for each layer When there are components introduced by a specific layer, you can select the expand icon to see a summary of its components. If there are any CVEs in those components, you can select the expand icon for an individual component to get more details about the CVEs affecting that component. Procedure In the RHACS portal, go to Vulnerability Management Dashboard . Select an image from either the Top Riskiest Images widget or click the Images button at the top of the dashboard and select an image. In the Image details view, to Dockerfile , select the expand icon to see a summary of instructions, values, creation date, and components. Select the expand icon for an individual component to view more information. 14.4.11. Identifying the container image layer that introduces vulnerabilities by using the dashboard You can use the Vulnerability Management dashboard to identify vulnerable components and the image layer they appear in. Procedure Go to the RHACS portal and click Vulnerability Management Dashboard from the navigation menu. Select an image from either the Top Riskiest Images widget or click the Images button at the top of the dashboard and select an image. In the Image details view, to Dockerfile , select the expand icon to see a summary of image components. Select the expand icon for specific components to get more details about the CVEs affecting the selected component. 14.4.12. Viewing recently detected vulnerabilities by using the dashboard The Recently Detected Vulnerabilities widget on the Vulnerability Management Dashboard view shows a list of recently discovered vulnerabilities in your scanned images, based on the scan time and CVSS score. It also includes information about the number of images affected by the CVE and its impact (percentage) on your environment. When you hover over a CVE in the list, you see an overview of the CVE, which includes scan time, CVSS score, description, impact, and whether it's scored by using CVSS v2 or v3. When you select a CVE, the CVE details view opens for the selected CVE. The CVE details view shows in-depth details of the CVE and the components, images, and deployments and deployments in which it appears. Select View All on the Recently Detected Vulnerabilities widget header to view a list of all the CVEs in your infrastructure. You can also filter the list of CVEs. 14.4.13. Viewing the most common vulnerabilities by using the dashboard The Most Common Vulnerabilities widget on the Vulnerability Management Dashboard view shows a list of vulnerabilities that affect the largest number of deployments and images arranged by their CVSS score. When you hover over a CVE in the list, you see an overview of the CVE which includes, scan time, CVSS score, description, impact, and whether it is scored by using CVSS v2 or v3. When you select a CVE, the CVE details view opens for the selected CVE. The CVE details view shows in-depth details of the CVE and the components, images, and deployments and deployments in which it appears. Select View All on the Most Common Vulnerabilities widget header to view a list of all the CVEs in your infrastructure. 
You can also filter the list of CVEs. To export the CVEs as a CSV file, select Export Download CVES as CSV . 14.4.14. Finding clusters with most Kubernetes and Istio vulnerabilities by using the dashboard You can identify the clusters with most Kubernetes, Red Hat OpenShift, and Istio vulnerabilities (deprecated) in your environment by using the vulnerability management dashboard. Procedure In the RHACS portal, click Vulnerability Management -> Dashboard . The Clusters with most orchestrator and Istio vulnerabilities widget shows a list of clusters, ranked by the number of Kubernetes, Red Hat OpenShift, and Istio vulnerabilities (deprecated) in each cluster. The cluster on top of the list is the cluster with the highest number of vulnerabilities. Click on one of the clusters from the list to view details about the cluster. The Cluster view includes: Cluster Summary section, which shows cluster details and metadata, top risky objects (deployments, namespaces, and images), recently detected vulnerabilities, riskiest images, and deployments with the most severe policy violations. Cluster Findings section, which includes a list of failing policies and list of fixable CVEs. Related Entities section, which shows the number of namespaces, deployments, policies, images, components, and CVEs the cluster contains. You can select these entities to view more details. Click View All on the widget header to view the list of all clusters. 14.4.15. Identifying vulnerabilities in nodes by using the dashboard You can use the Vulnerability Management view to identify vulnerabilities in your nodes. The identified vulnerabilities include vulnerabilities in core Kubernetes components and container runtimes such as Docker, CRI-O, runC, and containerd. For more information on operating systems that RHACS can scan, see "Supported operating systems". Procedure In the RHACS portal, go to Vulnerability Management Dashboard . Select Nodes on the header to view a list of all the CVEs affecting your nodes. Select a node from the list to view details of all CVEs affecting that node. When you select a node, the Node details panel opens for the selected node. The Node view shows in-depth details of the node and includes information about CVEs by CVSS score and fixable CVEs for that node. Select View All on the CVEs by CVSS score widget header to view a list of all the CVEs in the selected node. You can also filter the list of CVEs. To export the fixable CVEs as a CSV file, select Export as CSV under the Node Findings section. Additional resources Supported operating systems 14.4.16. Creating policies to block specific CVEs by using the dashboard You can create new policies or add specific CVEs to an existing policy from the Vulnerability Management view. Procedure Click CVEs from the Vulnerability Management view header. You can select the checkboxes for one or more CVEs, and then click Add selected CVEs to Policy ( add icon) or move the mouse over a CVE in the list, and select the Add icon. For Policy Name : To add the CVE to an existing policy, select an existing policy from the drop-down list box. To create a new policy, enter the name for the new policy, and select Create <policy_name> . Select a value for Severity , either Critical , High , Medium , or Low . Choose the Lifecycle Stage to which your policy is applicable, from Build , or Deploy . You can also select both life-cycle stages. Enter details about the policy in the Description box. 
Turn off the Enable Policy toggle if you want to create the policy but enable it later. The Enable Policy toggle is on by default. Verify the listed CVEs which are included in this policy. Click Save Policy . 14.5. Scanning RHCOS node hosts For OpenShift Container Platform, Red Hat Enterprise Linux CoreOS (RHCOS) is the only supported operating system for control plane. For node hosts, OpenShift Container Platform supports both RHCOS and Red Hat Enterprise Linux (RHEL). With Red Hat Advanced Cluster Security for Kubernetes (RHACS), you can scan RHCOS nodes for vulnerabilities and detect potential security threats. RHACS scans RHCOS RPMs installed on the node host, as part of the RHCOS installation, for any known vulnerabilities. First, RHACS analyzes and detects RHCOS components. Then it matches vulnerabilities for identified components by using RHEL and the following data streams: OpenShift 4.X Open Vulnerability and Assessment Language (OVAL) v2 security data streams is used if StackRox Scanner is used for node scanning. Red Hat Common Security Advisory Framework (CSAF) Vulnerability Exploitability eXchange (VEX) is used if Scanner V4 is used for node scanning. Note If you installed RHACS by using the roxctl CLI, you must manually enable the RHCOS node scanning features. When you use Helm or Operator installation methods on OpenShift Container Platform, this feature is enabled by default. Additional resources RHEL Versions Utilized by RHEL CoreOS and OCP 14.5.1. Enabling RHCOS node scanning with the StackRox Scanner If you use OpenShift Container Platform, you can enable scanning of Red Hat Enterprise Linux CoreOS (RHCOS) nodes for vulnerabilities by using Red Hat Advanced Cluster Security for Kubernetes (RHACS). Prerequisites For scanning RHCOS node hosts of the secured cluster, you must have installed Secured Cluster services on OpenShift Container Platform 4.12 or later. For information about supported platforms and architecture, see the Red Hat Advanced Cluster Security for Kubernetes Support Matrix . For life cycle support information for RHACS, see the Red Hat Advanced Cluster Security for Kubernetes Support Policy . This procedure describes how to enable node scanning for the first time. If you are reconfiguring Red Hat Advanced Cluster Security for Kubernetes to use the StackRox Scanner instead of Scanner V4, follow the procedure in "Restoring RHCOS node scanning with the StackRox Scanner". Procedure Run one of the following commands to update the compliance container. 
For a default compliance container with metrics disabled, run the following command: USD oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"name":"compliance","env":[{"name":"ROX_METRICS_PORT","value":"disabled"},{"name":"ROX_NODE_SCANNING_ENDPOINT","value":"127.0.0.1:8444"},{"name":"ROX_NODE_SCANNING_INTERVAL","value":"4h"},{"name":"ROX_NODE_SCANNING_INTERVAL_DEVIATION","value":"24m"},{"name":"ROX_NODE_SCANNING_MAX_INITIAL_WAIT","value":"5m"},{"name":"ROX_RHCOS_NODE_SCANNING","value":"true"},{"name":"ROX_CALL_NODE_INVENTORY_ENABLED","value":"true"}]}]}}}}' For a compliance container with Prometheus metrics enabled, run the following command: USD oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"name":"compliance","env":[{"name":"ROX_METRICS_PORT","value":":9091"},{"name":"ROX_NODE_SCANNING_ENDPOINT","value":"127.0.0.1:8444"},{"name":"ROX_NODE_SCANNING_INTERVAL","value":"4h"},{"name":"ROX_NODE_SCANNING_INTERVAL_DEVIATION","value":"24m"},{"name":"ROX_NODE_SCANNING_MAX_INITIAL_WAIT","value":"5m"},{"name":"ROX_RHCOS_NODE_SCANNING","value":"true"},{"name":"ROX_CALL_NODE_INVENTORY_ENABLED","value":"true"}]}]}}}}' Update the Collector DaemonSet (DS) by taking the following steps: Add new volume mounts to Collector DS by running the following command: USD oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"volumes":[{"name":"tmp-volume","emptyDir":{}},{"name":"cache-volume","emptyDir":{"sizeLimit":"200Mi"}}]}}}}' Add the new NodeScanner container by running the following command: USD oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"command":["/scanner","--nodeinventory","--config=",""],"env":[{"name":"ROX_NODE_NAME","valueFrom":{"fieldRef":{"apiVersion":"v1","fieldPath":"spec.nodeName"}}},{"name":"ROX_CLAIR_V4_SCANNING","value":"true"},{"name":"ROX_COMPLIANCE_OPERATOR_INTEGRATION","value":"true"},{"name":"ROX_CSV_EXPORT","value":"false"},{"name":"ROX_DECLARATIVE_CONFIGURATION","value":"false"},{"name":"ROX_INTEGRATIONS_AS_CONFIG","value":"false"},{"name":"ROX_NETPOL_FIELDS","value":"true"},{"name":"ROX_NETWORK_DETECTION_BASELINE_SIMULATION","value":"true"},{"name":"ROX_NETWORK_GRAPH_PATTERNFLY","value":"true"},{"name":"ROX_NODE_SCANNING_CACHE_TIME","value":"3h36m"},{"name":"ROX_NODE_SCANNING_INITIAL_BACKOFF","value":"30s"},{"name":"ROX_NODE_SCANNING_MAX_BACKOFF","value":"5m"},{"name":"ROX_PROCESSES_LISTENING_ON_PORT","value":"false"},{"name":"ROX_QUAY_ROBOT_ACCOUNTS","value":"true"},{"name":"ROX_ROXCTL_NETPOL_GENERATE","value":"true"},{"name":"ROX_SOURCED_AUTOGENERATED_INTEGRATIONS","value":"false"},{"name":"ROX_SYSLOG_EXTRA_FIELDS","value":"true"},{"name":"ROX_SYSTEM_HEALTH_PF","value":"false"},{"name":"ROX_VULN_MGMT_WORKLOAD_CVES","value":"false"}],"image":"registry.redhat.io/advanced-cluster-security/rhacs-scanner-slim-rhel8:4.6.3","imagePullPolicy":"IfNotPresent","name":"node-inventory","ports":[{"containerPort":8444,"name":"grpc","protocol":"TCP"}],"volumeMounts":[{"mountPath":"/host","name":"host-root-ro","readOnly":true},{"mountPath":"/tmp/","name":"tmp-volume"},{"mountPath":"/cache","name":"cache-volume"}]}]}}}}' 14.5.2. Enabling RHCOS node scanning with Scanner V4 If you use OpenShift Container Platform, you can enable scanning of Red Hat Enterprise Linux CoreOS (RHCOS) nodes for vulnerabilities by using Red Hat Advanced Cluster Security for Kubernetes (RHACS). Important RHCOS node scanning with Scanner V4 is a Technology Preview feature only. 
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites For scanning RHCOS node hosts of the secured cluster, you must have installed the following software: Secured Cluster services on OpenShift Container Platform 4.12 or later RHACS version 4.6 or later For information about supported platforms and architecture, see the Red Hat Advanced Cluster Security for Kubernetes Support Matrix . For life cycle support information for RHACS, see the Red Hat Advanced Cluster Security for Kubernetes Support Policy . Procedure To enable node indexing, also known as node scanning, by using Scanner V4: Ensure that Scanner V4 is deployed in the Central cluster: USD kubectl -n stackrox get deployment scanner-v4-indexer scanner-v4-matcher scanner-v4-db 1 1 For OpenShift Container Platform, use oc instead of kubectl . In the Central pod, on the central container, set the ROX_NODE_INDEX_ENABLED and the ROX_SCANNER_V4 variables to true by running the following command on the Central cluster: USD kubectl -n stackrox set env deployment/central ROX_NODE_INDEX_ENABLED=true ROX_SCANNER_V4=true 1 1 For OpenShift Container Platform, use oc instead of kubectl . In the Sensor pod, on the sensor container, set the ROX_NODE_INDEX_ENABLED and the ROX_SCANNER_V4 variables to true by running the following command on all secured clusters where you want to enable node scanning: USD kubectl -n stackrox set env deployment/sensor ROX_NODE_INDEX_ENABLED=true ROX_SCANNER_V4=true 1 1 For OpenShift Container Platform, use oc instead of kubectl . In the Collector Daemonset, in the compliance container, set the ROX_NODE_INDEX_ENABLED and the ROX_SCANNER_V4 variables to true by running the following command on all secured clusters where you want to enable node scanning: USD kubectl -n stackrox set env daemonset/collector ROX_NODE_INDEX_ENABLED=true ROX_SCANNER_V4=true 1 1 For OpenShift Container Platform, use oc instead of kubectl . To verify that node scanning is working, examine the Central logs for the following message: Scanned index report and found <number> components for node <node_name>. where: <number> Specifies the number of discovered components. <node_name> Specifies the name of the node. 14.5.3. Restoring RHCOS node scanning with the StackRox Scanner If you use OpenShift Container Platform, you can enable scanning of Red Hat Enterprise Linux CoreOS (RHCOS) nodes for vulnerabilities by using Red Hat Advanced Cluster Security for Kubernetes (RHACS). This feature is available with both the StackRox Scanner and Scanner V4. Follow this procedure if you want to use the StackRox Scanner to scan Red Hat Enterprise Linux CoreOS (RHCOS) nodes, but you want to keep using Scanner V4 to scan other nodes. Prerequisites For scanning RHCOS node hosts of the secured cluster, you must have installed Secured Cluster services on OpenShift Container Platform 4.12 or later. For information about supported platforms and architecture, see the Red Hat Advanced Cluster Security for Kubernetes Support Matrix . 
For life cycle support information for RHACS, see the Red Hat Advanced Cluster Security for Kubernetes Support Policy . Procedure To enable node indexing, also known as node scanning, by using the StackRox Scanner: Ensure that the StackRox Scanner is deployed in the Central cluster: USD kubectl -n stackrox get deployment scanner scanner-db 1 1 For OpenShift Container Platform, use oc instead of kubectl . In the Central pod, on the central container, set ROX_NODE_INDEX_ENABLED to false by running the following command on the Central cluster: USD kubectl -n stackrox set env deployment/central ROX_NODE_INDEX_ENABLED=false 1 1 For OpenShift Container Platform, use oc instead of kubectl . In the Collector Daemonset, in the compliance container, set ROX_CALL_NODE_INVENTORY_ENABLED to true by running the following command on all secured clusters where you want to enable node scanning: USD kubectl -n stackrox set env daemonset/collector ROX_CALL_NODE_INVENTORY_ENABLED=true 1 1 For OpenShift Container Platform, use oc instead of kubectl . To verify that node scanning is working, examine the Central logs for the following message: Scanned node inventory <node_name> (id: <node_id>) with <number> components. where: <number> Specifies the number of discovered components. <node_name> Specifies the name of the node. <node_id> Specifies the internal ID of the node. 14.5.4. Analysis and detection When you use RHACS with OpenShift Container Platform, RHACS creates two coordinating containers for analysis and detection, the Compliance container and the Node-inventory container. The Compliance container was already a part of earlier RHACS versions. However, the Node-inventory container is new with RHACS 4.0 and works only with OpenShift Container Platform cluster nodes. Upon start-up, the Compliance and Node-inventory containers begin the first inventory scan of Red Hat Enterprise Linux CoreOS (RHCOS) software components within five minutes. , the Node-inventory container scans the node's file system to identify installed RPM packages and report on RHCOS software components. Afterward, inventory scanning occurs at periodic intervals, typically every four hours. You can customize the default interval by configuring the ROX_NODE_SCANNING_INTERVAL environment variable for the Compliance container. 14.5.5. Vulnerability matching on RHCOS nodes Central services, which include Central and Scanner, perform vulnerability matching. Node scanning is performed using the following scanners: StackRox Scanner: This is the default scanner. StackRox Scanner uses Red Hat's Open Vulnerability and Assessment Language (OVAL) v2 security data streams to match vulnerabilities on Red Hat Enterprise Linux CoreOS (RHCOS) software components. Scanner V4: Scanner V4 is available for node scanning as a Technology Preview feature. Scanner V4 must be explicitly enabled. See the documentation in "Additional resources" for more information. When scanning RHCOS nodes, RHACS releases after 4.0 no longer use the Kubernetes node metadata to find the kernel and container runtime versions. Instead, RHACS uses the installed RHCOS RPMs to assess that information. Additional resources Scanner V4 settings for installing RHACS for OpenShift Container Platform by using the Operator Scanner V4 settings for installing RHACS for OpenShift Container Platform by using Helm Scanner V4 settings for installing RHACS for Kubernetes by using Helm 14.5.6. 
Related environment variables You can use the following environment variables to configure RHCOS node scanning on RHACS. Table 14.4. Node-inventory configuration Environment Variable Description ROX_NODE_SCANNING_CACHE_TIME The time after which a cached inventory is considered outdated. Defaults to 90% of ROX_NODE_SCANNING_INTERVAL that is 3h36m . ROX_NODE_SCANNING_INITIAL_BACKOFF The initial time in seconds a node scan will be delayed if a backoff file is found. The default value is 30s . ROX_NODE_SCANNING_MAX_BACKOFF The upper limit of backoff. The default value is 5m, being 50% of Kubernetes restart policy stability timer. Table 14.5. Compliance configuration Environment Variable Description ROX_NODE_INDEX_ENABLED Controls whether node indexing is enabled for this cluster. The default value is false . Set this variable to use Scanner V4-based RHCOS node scanning. ROX_NODE_SCANNING_INTERVAL The base value of the interval duration between node scans. The default value is 4h . ROX_NODE_SCANNING_INTERVAL_DEVIATION The duration of node scans can differ from the base interval time. However, the maximum value is limited by the ROX_NODE_SCANNING_INTERVAL . ROX_NODE_SCANNING_MAX_INITIAL_WAIT The maximum wait time before the first node scan, which is randomly generated. You can set this value to 0 to disable the initial node scanning wait time. The default value is 5m . 14.5.7. Identifying vulnerabilities in nodes by using the dashboard You can use the Vulnerability Management view to identify vulnerabilities in your nodes. The identified vulnerabilities include vulnerabilities in core Kubernetes components and container runtimes such as Docker, CRI-O, runC, and containerd. For more information on operating systems that RHACS can scan, see "Supported operating systems". Procedure In the RHACS portal, go to Vulnerability Management Dashboard . Select Nodes on the header to view a list of all the CVEs affecting your nodes. Select a node from the list to view details of all CVEs affecting that node. When you select a node, the Node details panel opens for the selected node. The Node view shows in-depth details of the node and includes information about CVEs by CVSS score and fixable CVEs for that node. Select View All on the CVEs by CVSS score widget header to view a list of all the CVEs in the selected node. You can also filter the list of CVEs. To export the fixable CVEs as a CSV file, select Export as CSV under the Node Findings section. 14.5.8. Viewing Node CVEs You can identify vulnerabilities in your nodes by using RHACS. The vulnerabilities that are identified include the following: Vulnerabilities in core Kubernetes components Vulnerabilities in container runtimes such as Docker, CRI-O, runC, and containerd For more information about operating systems that RHACS can scan, see "Supported operating systems". Procedure In the RHACS portal, click Vulnerability Management Node CVEs . To view the data, do any of the following tasks: To view a list of all the CVEs affecting all of your nodes, select <number> CVEs . To view a list of nodes that contain CVEs, select <number> Nodes . Optional: To filter CVEs according to entity, select the appropriate filters and attributes. To add more filtering criteria, follow these steps: Select the entity or attribute from the list. Depending on your choices, enter the appropriate information such as text, or select a date or object. Click the right arrow icon. Optional: Select additional entities and attributes, and then click the right arrow icon to add them. 
The filter entities and attributes are listed in the following table. Table 14.6. CVE filtering Entity Attributes Node Name : The name of the node. Operating system : The operating system of the node, for example, Red Hat Enterprise Linux (RHEL). Label : The label of the node. Annotation : The annotation for the node. Scan time : The scan date of the node. CVE Name : The name of the CVE. Discovered time : The date when RHACS discovered the CVE. CVSS : The severity level for the CVE. The following values are associated with the severity level for the CVE: is greater than is greater than or equal to is equal to is less than or equal to is less than Node Component Name : The name of the component. Version : The version of the component, for example, 4.15.0-2024 . You can use this to search for a specific version of a component, for example, in conjunction with a component name. Cluster Name : The name of the cluster. Label : The label for the cluster. Type : The type of cluster, for example, OCP. Platform type : The type of platform, for example, OpenShift 4 cluster. Optional: To refine the list of results, do any of the following tasks: Click CVE severity , and then select one or more levels. Click CVE status , and then select Fixable or Not fixable . Optional: To view the details of the node and information about the CVEs according to the CVSS score and fixable CVEs for that node, click a node name in the list of nodes. 14.5.9. Understanding differences in scanning results between the Stackrox Scanner and Scanner V4 Scanning RHCOS node hosts with Scanner V4 reports significantly more CVEs for the same operating system version. For example, Scanner V4 reports about 390 CVEs, compared to about 50 CVEs that are reported by StackRox Scanner. A manual review of selected vulnerabilities revealed the following causes: The Vulnerability Exploitability eXchange (VEX) data used in Scanner V4 is more accurate. The VEX data includes granular statuses, such as "no fix planned" and "fix deferred". Some vulnerabilities reported by StackRox Scanner were false positives. As a result, Scanner V4 provides a more accurate and realistic vulnerability assessment. Users might find discrepancies in reported vulnerabilities surprising, especially if some secured clusters still use older RHACS versions with StackRox Scanner while others use Scanner V4. To help you understand this difference, the following example provides an explanation and guidance on how to manually verify reported vulnerabilities. 14.5.9.1. Example of discrepancies in reported vulnerabilities In this example, we analyzed the differences in reported CVEs for three arbitrarily selected RHCOS versions. This example presents findings for RHCOS version 417.94.202501071621-0 . For this version, RHACS provided the following scan results: StackRox Scanner reported 49 CVEs. Scanner V4 reported 389 CVEs. The breakdown is as follows: 1 CVE is reported only by the StackRox Scanner. 48 CVEs are reported by both scanners. 341 CVEs are reported only by Scanner V4. 14.5.9.1.1. CVEs reported only by the StackRox Scanner The single CVE reported exclusively by StackRox Scanner was a false positive. CVE-2022-4122 was flagged for the podman package in version 5:5.2.2-1.rhaos4.17.el9.x86_64 . However, a manual review of VEX data from RHSA-2024:9102 indicated that this vulnerability was fixed in version 5:5.2.2-1.el9 . Therefore, the package version scanned was the first to contain the fix and was no longer affected. 14.5.9.1.2. 
CVEs reported only by Scanner V4 We randomly selected 10 CVEs from the 341 unique to Scanner V4 and conducted a detailed analysis using VEX data. The vulnerabilities fell into two categories: Affected packages with a fine-grained status indicating that no fix is planned Affected packages with no additional status details regarding a fix For example, the following results were analyzed: The git-core package (version 2.43.5-1.el9_4 ) was flagged for CVE-2024-50349 ( VEX data ) and marked as "Affected" with a fine-grained status of "Fix deferred." This means a fix is not guaranteed due to higher-priority development work. The package is affected by three CVEs in total. The vim-minimal package (version 2:8.2.2637-20.el9_1 ) was flagged for 109 CVEs, 108 of which have low CVSS scores. Most are marked as "Affected" with a fine-grained status of "Will not fix." The krb5-libs package (version 1.21.1-2.el9_4.1 ) was flagged for CVE-2025-24528 ( VEX data ), but no fine-grained status was available. Given that this CVE was recently discovered at the time of this analysis, its status might be updated soon. 14.5.9.1.3. CVEs reported by both scanners We manually verified three randomly selected packages, finding that the OVAL v2 data used in the StackRox Scanner and the VEX data used in Scanner V4 provided consistent explanations for the detected CVEs. In some cases, CVSS scores differed slightly, which is expected due to variations in VEX publisher data. 14.5.9.2. Verifying the status of vulnerabilities As a best practice, verify the fine-grained statuses of vulnerabilities in node host components that are critical to your environment using publicly available VEX data. VEX data is accessible in both human-readable and machine-readable formats. For more information about interpreting VEX data, visit Recent improvements in Red Hat Enterprise Linux CoreOS security data . | [
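As an illustration of this kind of manual check, the following sketch compares the RPM version that is installed on an RHCOS node with the status recorded in a downloaded VEX document for a CVE. The node name and the saved file name are placeholders, and the jq filter assumes the standard CSAF VEX layout, so treat this as a hedged example rather than a definitive procedure.

# Check the package version installed on an RHCOS node (worker-0 is a placeholder name).
$ oc debug node/worker-0 -- chroot /host rpm -q git-core

# After downloading the machine-readable VEX document for the CVE from Red Hat's
# security data pages and saving it locally (the file name is an example), list the
# CSAF product status categories, such as "fixed" or "known_affected".
$ jq '.vulnerabilities[].product_status | keys' cve-2024-50349.json

Comparing the installed version against the packages listed under each status category shows whether the node component is still affected or already carries the fix.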
"{\"result\": {\"deployment\": {...}, \"images\": [...]}} {\"result\": {\"deployment\": {...}, \"images\": [...]}}",
"curl -H \"Authorization: Bearer USDROX_API_TOKEN\" USDROX_ENDPOINT/v1/export/vuln-mgmt/workloads",
"curl -H \"Authorization: Bearer USDROX_API_TOKEN\" USDROX_ENDPOINT/v1/export/vuln-mgmt/workloads?timeout=60",
"curl -H \"Authorization: Bearer USDROX_API_TOKEN\" USDROX_ENDPOINT/v1/export/vuln-mgmt/workloads?query=Deployment%3Aapp%2BNamespace%3Adefault",
"oc -n stackrox patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"compliance\",\"env\":[{\"name\":\"ROX_METRICS_PORT\",\"value\":\"disabled\"},{\"name\":\"ROX_NODE_SCANNING_ENDPOINT\",\"value\":\"127.0.0.1:8444\"},{\"name\":\"ROX_NODE_SCANNING_INTERVAL\",\"value\":\"4h\"},{\"name\":\"ROX_NODE_SCANNING_INTERVAL_DEVIATION\",\"value\":\"24m\"},{\"name\":\"ROX_NODE_SCANNING_MAX_INITIAL_WAIT\",\"value\":\"5m\"},{\"name\":\"ROX_RHCOS_NODE_SCANNING\",\"value\":\"true\"},{\"name\":\"ROX_CALL_NODE_INVENTORY_ENABLED\",\"value\":\"true\"}]}]}}}}'",
"oc -n stackrox patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"compliance\",\"env\":[{\"name\":\"ROX_METRICS_PORT\",\"value\":\":9091\"},{\"name\":\"ROX_NODE_SCANNING_ENDPOINT\",\"value\":\"127.0.0.1:8444\"},{\"name\":\"ROX_NODE_SCANNING_INTERVAL\",\"value\":\"4h\"},{\"name\":\"ROX_NODE_SCANNING_INTERVAL_DEVIATION\",\"value\":\"24m\"},{\"name\":\"ROX_NODE_SCANNING_MAX_INITIAL_WAIT\",\"value\":\"5m\"},{\"name\":\"ROX_RHCOS_NODE_SCANNING\",\"value\":\"true\"},{\"name\":\"ROX_CALL_NODE_INVENTORY_ENABLED\",\"value\":\"true\"}]}]}}}}'",
"oc -n stackrox patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"volumes\":[{\"name\":\"tmp-volume\",\"emptyDir\":{}},{\"name\":\"cache-volume\",\"emptyDir\":{\"sizeLimit\":\"200Mi\"}}]}}}}'",
"oc -n stackrox patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"command\":[\"/scanner\",\"--nodeinventory\",\"--config=\",\"\"],\"env\":[{\"name\":\"ROX_NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"apiVersion\":\"v1\",\"fieldPath\":\"spec.nodeName\"}}},{\"name\":\"ROX_CLAIR_V4_SCANNING\",\"value\":\"true\"},{\"name\":\"ROX_COMPLIANCE_OPERATOR_INTEGRATION\",\"value\":\"true\"},{\"name\":\"ROX_CSV_EXPORT\",\"value\":\"false\"},{\"name\":\"ROX_DECLARATIVE_CONFIGURATION\",\"value\":\"false\"},{\"name\":\"ROX_INTEGRATIONS_AS_CONFIG\",\"value\":\"false\"},{\"name\":\"ROX_NETPOL_FIELDS\",\"value\":\"true\"},{\"name\":\"ROX_NETWORK_DETECTION_BASELINE_SIMULATION\",\"value\":\"true\"},{\"name\":\"ROX_NETWORK_GRAPH_PATTERNFLY\",\"value\":\"true\"},{\"name\":\"ROX_NODE_SCANNING_CACHE_TIME\",\"value\":\"3h36m\"},{\"name\":\"ROX_NODE_SCANNING_INITIAL_BACKOFF\",\"value\":\"30s\"},{\"name\":\"ROX_NODE_SCANNING_MAX_BACKOFF\",\"value\":\"5m\"},{\"name\":\"ROX_PROCESSES_LISTENING_ON_PORT\",\"value\":\"false\"},{\"name\":\"ROX_QUAY_ROBOT_ACCOUNTS\",\"value\":\"true\"},{\"name\":\"ROX_ROXCTL_NETPOL_GENERATE\",\"value\":\"true\"},{\"name\":\"ROX_SOURCED_AUTOGENERATED_INTEGRATIONS\",\"value\":\"false\"},{\"name\":\"ROX_SYSLOG_EXTRA_FIELDS\",\"value\":\"true\"},{\"name\":\"ROX_SYSTEM_HEALTH_PF\",\"value\":\"false\"},{\"name\":\"ROX_VULN_MGMT_WORKLOAD_CVES\",\"value\":\"false\"}],\"image\":\"registry.redhat.io/advanced-cluster-security/rhacs-scanner-slim-rhel8:4.6.3\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"node-inventory\",\"ports\":[{\"containerPort\":8444,\"name\":\"grpc\",\"protocol\":\"TCP\"}],\"volumeMounts\":[{\"mountPath\":\"/host\",\"name\":\"host-root-ro\",\"readOnly\":true},{\"mountPath\":\"/tmp/\",\"name\":\"tmp-volume\"},{\"mountPath\":\"/cache\",\"name\":\"cache-volume\"}]}]}}}}'",
"kubectl -n stackrox get deployment scanner-v4-indexer scanner-v4-matcher scanner-v4-db 1",
"kubectl -n stackrox set env deployment/central ROX_NODE_INDEX_ENABLED=true ROX_SCANNER_V4=true 1",
"kubectl -n stackrox set env deployment/sensor ROX_NODE_INDEX_ENABLED=true ROX_SCANNER_V4=true 1",
"kubectl -n stackrox set env daemonset/collector ROX_NODE_INDEX_ENABLED=true ROX_SCANNER_V4=true 1",
"Scanned index report and found <number> components for node <node_name>.",
"kubectl -n stackrox get deployment scanner scanner-db 1",
"kubectl -n stackrox set env deployment/central ROX_NODE_INDEX_ENABLED=false 1",
"kubectl -n stackrox set env daemonset/collector ROX_CALL_NODE_INVENTORY_ENABLED=true 1",
"Scanned node inventory <node_name> (id: <node_id>) with <number> components."
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/operating/managing-vulnerabilities |
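Building on the export examples above, the following hedged sketch pipes the newline-delimited JSON returned by the /v1/export/vuln-mgmt/workloads endpoint through jq to summarize each workload. It assumes jq is installed, that ROX_API_TOKEN and ROX_ENDPOINT are set as in the earlier commands, and that each result object carries a deployment name field as suggested by the sample output; adjust the filter to the fields your RHACS version actually returns.

# Print each exported deployment name together with the number of images it references.
$ curl -s -H "Authorization: Bearer $ROX_API_TOKEN" \
    "$ROX_ENDPOINT/v1/export/vuln-mgmt/workloads?timeout=60" \
  | jq -r '.result | "\(.deployment.name): \(.images | length) image(s)"'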
4.2. Preparing for a Hard Drive Installation | 4.2. Preparing for a Hard Drive Installation Note Hard drive installations only work from ext2, ext3, ext4, or FAT file systems. You cannot use hard drives formatted for any other file system as an installation source for Red Hat Enterprise Linux. To check the file system of a hard drive partition on a Windows operating system, use the Disk Management tool. To check the file system of a hard drive partition on a Linux operating system, use the fdisk tool. Important You cannot use ISO files on partitions controlled by LVM (Logical Volume Management). Use this option to install Red Hat Enterprise Linux on systems without a DVD drive or network connection. Hard drive installations use the following files: an ISO image of the installation DVD. An ISO image is a file that contains an exact copy of the content of a DVD. an install.img file extracted from the ISO image. optionally, a product.img file extracted from the ISO image. With these files present on a hard drive, you can choose Hard drive as the installation source when you boot the installation program (refer to Section 8.3, "Installation Method" ). Ensure that you have boot media available on CD, DVD, or a USB storage device such as a flash drive. To prepare a hard drive as an installation source, follow these steps: Obtain an ISO image of the Red Hat Enterprise Linux installation DVD (refer to Chapter 1, Obtaining Red Hat Enterprise Linux ). Alternatively, if you have the DVD on physical media, you can create an image of it with the following command on a Linux system: where dvd is your DVD drive device, name_of_image is the name you give to the resulting ISO image file, and path_to_image is the path to the location on your system where the resulting ISO image will be stored. Transfer the ISO image to the hard drive. The ISO image must be located on a hard drive that is either internal to the computer on which you will install Red Hat Enterprise Linux, or on a hard drive that is attached to that computer by USB. Use a SHA256 checksum program to verify that the ISO image that you copied is intact. Many SHA256 checksum programs are available for various operating systems. On a Linux system, run: where name_of_image is the name of the ISO image file. The SHA256 checksum program displays a string of 64 characters called a hash . Compare this hash to the hash displayed for this particular image on the Downloads page in the Red Hat Customer Portal (refer to Chapter 1, Obtaining Red Hat Enterprise Linux ). The two hashes should be identical. Copy the images/ directory from inside the ISO image to the same directory in which you stored the ISO image file itself. Enter the following commands: where path_to_image is the path to the ISO image file, name_of_image is the name of the ISO image file, and mount_point is a mount point on which to mount the image while you copy files from the image. For example: The ISO image file and an images/ directory are now present, side-by-side, in the same directory. Verify that the images/ directory contains at least the install.img file, without which installation cannot proceed. Optionally, the images/ directory should contain the product.img file, without which only the packages for a Minimal installation will be available during the package group selection stage (refer to Section 9.17, "Package Group Selection" ). Important install.img and product.img must be the only files in the images/ directory. 
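Two quick shell checks along the following lines can confirm that the earlier steps of this procedure produced what the installer expects. The /var/isos path and the RHEL6.iso file name mirror the example used later in this procedure, and the published hash is a placeholder that you copy from the Downloads page; substitute your own values.

# Verify the ISO against the published hash (keep two spaces before the file name);
# sha256sum prints "RHEL6.iso: OK" when the computed and published hashes match.
$ echo "<published_hash>  RHEL6.iso" > /var/isos/RHEL6.iso.sha256
$ cd /var/isos && sha256sum -c RHEL6.iso.sha256

# Confirm that the copied images/ directory sits next to the ISO and contains install.img.
$ ls /var/isos/images/
$ test -f /var/isos/images/install.img && echo "install.img present"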
Note anaconda has the ability to test the integrity of the installation media. It works with the DVD, hard drive ISO, and NFS ISO installation methods. We recommend that you test all installation media before starting the installation process, and before reporting any installation-related bugs (many of the bugs reported are actually due to improperly-burned DVDs). To use this test, type the following command at the boot: prompt: | [
"dd if=/dev/ dvd of=/ path_to_image / name_of_image .iso",
"sha256sum name_of_image .iso",
"mount -t iso9660 / path_to_image / name_of_image .iso / mount_point -o loop,ro cp -pr / mount_point /images / publicly_available_directory / umount / mount_point",
"mount -t iso9660 /var/isos/RHEL6.iso /mnt/tmp -o loop,ro cp -pr /mnt/tmp/images /var/isos/ umount /mnt/tmp",
"linux mediacheck"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-steps-hd-installs-x86 |
Assessing and Reporting Malware Signatures on RHEL Systems | Assessing and Reporting Malware Signatures on RHEL Systems Red Hat Insights 1-latest Know when systems in your RHEL infrastructure are exposed to malware risks Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/assessing_and_reporting_malware_signatures_on_rhel_systems/index |
5.2. Creating a Striped Logical Volume | 5.2. Creating a Striped Logical Volume This example creates an LVM striped logical volume called striped_logical_volume that stripes data across the disks at /dev/sda1 , /dev/sdb1 , and /dev/sdc1 . 5.2.1. Creating the Physical Volumes Label the disks you will use in the volume groups as LVM physical volumes. Warning This command destroys any data on /dev/sda1 , /dev/sdb1 , and /dev/sdc1 . | [
"pvcreate /dev/sda1 /dev/sdb1 /dev/sdc1 Physical volume \"/dev/sda1\" successfully created Physical volume \"/dev/sdb1\" successfully created Physical volume \"/dev/sdc1\" successfully created"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/stripe_create_ex |
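The excerpt above ends after the physical volumes are labeled. As a hedged sketch of how the striped volume named in this section is typically built from those three devices, the remaining steps usually look like the following; the volume group name, volume size, and stripe size are assumptions chosen for illustration.

# Create a volume group from the three labeled physical volumes (volgroup01 is an example name).
$ vgcreate volgroup01 /dev/sda1 /dev/sdb1 /dev/sdc1

# Create a 2 GB logical volume striped across all three devices with a 4 kB stripe size.
$ lvcreate -i 3 -I 4 -L 2G -n striped_logical_volume volgroup01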
Chapter 19. compute | Chapter 19. compute This chapter describes the commands under the compute command. 19.1. compute agent create Create compute agent Usage: Table 19.1. Positional Arguments Value Summary <os> Type of os <architecture> Type of architecture <version> Version <url> Url <md5hash> Md5 hash <hypervisor> Type of hypervisor Table 19.2. Optional Arguments Value Summary -h, --help Show this help message and exit Table 19.3. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 19.4. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 19.5. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 19.6. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 19.2. compute agent delete Delete compute agent(s) Usage: Table 19.7. Positional Arguments Value Summary <id> Id of agent(s) to delete Table 19.8. Optional Arguments Value Summary -h, --help Show this help message and exit 19.3. compute agent list List compute agents Usage: Table 19.9. Optional Arguments Value Summary -h, --help Show this help message and exit --hypervisor <hypervisor> Type of hypervisor Table 19.10. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 19.11. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 19.12. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 19.13. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 19.4. compute agent set Set compute agent properties Usage: Table 19.14. Positional Arguments Value Summary <id> Id of the agent Table 19.15. Optional Arguments Value Summary -h, --help Show this help message and exit --agent-version <version> Version of the agent --url <url> Url of the agent --md5hash <md5hash> Md5 hash of the agent 19.5. compute service delete Delete compute service(s) Usage: Table 19.16. Positional Arguments Value Summary <service> Compute service(s) to delete (id only). if using ``--os-compute- api-version`` 2.53 or greater, the ID is a UUID which can be retrieved by listing compute services using the same 2.53+ microversion. Table 19.17. Optional Arguments Value Summary -h, --help Show this help message and exit 19.6. compute service list List compute services. 
Using ``--os-compute-api-version`` 2.53 or greater will return the ID as a UUID value which can be used to uniquely identify the service in a multi-cell deployment. Usage: Table 19.18. Optional Arguments Value Summary -h, --help Show this help message and exit --host <host> List services on specified host (name only) --service <service> List only specified service binaries (name only). for example, ``nova-compute``, ``nova-conductor``, etc. --long List additional fields in output Table 19.19. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 19.20. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 19.21. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 19.22. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 19.7. compute service set Set compute service properties Usage: Table 19.23. Positional Arguments Value Summary <host> Name of host <service> Name of service (binary name), for example ``nova- compute`` Table 19.24. Optional Arguments Value Summary -h, --help Show this help message and exit --enable Enable service --disable Disable service --disable-reason <reason> Reason for disabling the service (in quotes). should be used with --disable option. --up Force up service. requires ``--os-compute-api- version`` 2.11 or greater. --down Force down service. requires ``--os-compute-api- version`` 2.11 or greater. | [
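To make the option tables above concrete, the following hedged example disables a compute service for maintenance, confirms the change, and shows how the UUID-based identifiers work when the microversion is pinned to 2.53 or greater. The host name is a placeholder; the flags are taken from the tables for compute service set, compute service list, and compute service delete.

# Disable the nova-compute binary on one host, recording a reason (the host name is an example).
$ openstack compute service set --disable \
    --disable-reason "planned maintenance" compute-0.example.com nova-compute

# Confirm the change by listing services on that host with the additional fields.
$ openstack compute service list --host compute-0.example.com --long

# With --os-compute-api-version 2.53 or greater, the ID column contains UUIDs that can
# be passed to compute service delete (the placeholder below stands for a real UUID).
$ openstack --os-compute-api-version 2.53 compute service list --service nova-compute
$ openstack --os-compute-api-version 2.53 compute service delete <service_uuid>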
"openstack compute agent create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <os> <architecture> <version> <url> <md5hash> <hypervisor>",
"openstack compute agent delete [-h] <id> [<id> ...]",
"openstack compute agent list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--hypervisor <hypervisor>]",
"openstack compute agent set [-h] [--agent-version <version>] [--url <url>] [--md5hash <md5hash>] <id>",
"openstack compute service delete [-h] <service> [<service> ...]",
"openstack compute service list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--host <host>] [--service <service>] [--long]",
"openstack compute service set [-h] [--enable | --disable] [--disable-reason <reason>] [--up | --down] <host> <service>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/command_line_interface_reference/compute |
Chapter 21. Example decisions in Red Hat Process Automation Manager for an IDE | Chapter 21. Example decisions in Red Hat Process Automation Manager for an IDE Red Hat Process Automation Manager provides example decisions distributed as Java classes that you can import into your integrated development environment (IDE). You can use these examples to better understand decision engine capabilities or use them as a reference for the decisions that you define in your own Red Hat Process Automation Manager projects. The following example decision sets are some of the examples available in Red Hat Process Automation Manager: Hello World example : Demonstrates basic rule execution and use of debug output State example : Demonstrates forward chaining and conflict resolution through rule salience and agenda groups Fibonacci example : Demonstrates recursion and conflict resolution through rule salience Banking example : Demonstrates pattern matching, basic sorting, and calculation Pet Store example : Demonstrates rule agenda groups, global variables, callbacks, and GUI integration Sudoku example : Demonstrates complex pattern matching, problem solving, callbacks, and GUI integration House of Doom example : Demonstrates backward chaining and recursion Note For optimization examples provided with Red Hat build of OptaPlanner, see Getting started with Red Hat build of OptaPlanner . 21.1. Importing and executing Red Hat Process Automation Manager example decisions in an IDE You can import Red Hat Process Automation Manager example decisions into your integrated development environment (IDE) and execute them to explore how the rules and code function. You can use these examples to better understand decision engine capabilities or use them as a reference for the decisions that you define in your own Red Hat Process Automation Manager projects. Prerequisites Java 8 or later is installed. Maven 3.5.x or later is installed. An IDE is installed, such as Red Hat CodeReady Studio. Procedure Download and unzip the Red Hat Process Automation Manager 7.13.5 Source Distribution from the Red Hat Customer Portal to a temporary directory, such as /rhpam-7.13.5-sources . Open your IDE and select File Import Maven Existing Maven Projects , or the equivalent option for importing a Maven project. Click Browse , navigate to ~/rhpam-7.13.5-sources/src/drools-USDVERSION/drools-examples (or, for the Conway's Game of Life example, ~/rhpam-7.13.5-sources/src/droolsjbpm-integration-USDVERSION/droolsjbpm-integration-examples ), and import the project. Navigate to the example package that you want to run and find the Java class with the main method. Right-click the Java class and select Run As Java Application to run the example. To run all examples through a basic user interface, run the DroolsExamplesApp.java class (or, for Conway's Game of Life, the DroolsJbpmIntegrationExamplesApp.java class) in the org.drools.examples main class. Figure 21.1. Interface for all examples in drools-examples (DroolsExamplesApp.java) Figure 21.2. Interface for all examples in droolsjbpm-integration-examples (DroolsJbpmIntegrationExamplesApp.java) 21.2. Hello World example decisions (basic rules and debugging) The Hello World example decision set demonstrates how to insert objects into the decision engine working memory, how to match the objects using rules, and how to configure logging to trace the internal activity of the decision engine. 
The following is an overview of the Hello World example: Name : helloworld Main class : org.drools.examples.helloworld.HelloWorldExample (in src/main/java ) Module : drools-examples Type : Java application Rule file : org.drools.examples.helloworld.HelloWorld.drl (in src/main/resources ) Objective : Demonstrates basic rule execution and use of debug output In the Hello World example, a KIE session is generated to enable rule execution. All rules require a KIE session for execution. KIE session for rule execution KieServices ks = KieServices.Factory.get(); 1 KieContainer kc = ks.getKieClasspathContainer(); 2 KieSession ksession = kc.newKieSession("HelloWorldKS"); 3 1 Obtains the KieServices factory. This is the main interface that applications use to interact with the decision engine. 2 Creates a KieContainer from the project class path. This detects a /META-INF/kmodule.xml file from which it configures and instantiates a KieContainer with a KieModule . 3 Creates a KieSession based on the "HelloWorldKS" KIE session configuration defined in the /META-INF/kmodule.xml file. Note For more information about Red Hat Process Automation Manager project packaging, see Packaging and deploying an Red Hat Process Automation Manager project . Red Hat Process Automation Manager has an event model that exposes internal engine activity. Two default debug listeners, DebugAgendaEventListener and DebugRuleRuntimeEventListener , print debug event information to the System.err output. The KieRuntimeLogger provides execution auditing, the result of which you can view in a graphical viewer. Debug listeners and audit loggers // Set up listeners. ksession.addEventListener( new DebugAgendaEventListener() ); ksession.addEventListener( new DebugRuleRuntimeEventListener() ); // Set up a file-based audit logger. KieRuntimeLogger logger = KieServices.get().getLoggers().newFileLogger( ksession, "./target/helloworld" ); // Set up a ThreadedFileLogger so that the audit view reflects events while debugging. KieRuntimeLogger logger = ks.getLoggers().newThreadedFileLogger( ksession, "./target/helloworld", 1000 ); The logger is a specialized implementation built on the Agenda and RuleRuntime listeners. When the decision engine has finished executing, logger.close() is called. The example creates a single Message object with the message "Hello World" , inserts the status HELLO into the KieSession , executes rules with fireAllRules() . Data insertion and execution // Insert facts into the KIE session. final Message message = new Message(); message.setMessage( "Hello World" ); message.setStatus( Message.HELLO ); ksession.insert( message ); // Fire the rules. ksession.fireAllRules(); Rule execution uses a data model to pass data as inputs and outputs to the KieSession . The data model in this example has two fields: the message , which is a String , and the status , which can be HELLO or GOODBYE . Data model class public static class Message { public static final int HELLO = 0; public static final int GOODBYE = 1; private String message; private int status; ... } The two rules are located in the file src/main/resources/org/drools/examples/helloworld/HelloWorld.drl . The when condition of the "Hello World" rule states that the rule is activated for each Message object inserted into the KIE session that has the status Message.HELLO . Additionally, two variable bindings are created: the variable message is bound to the message attribute and the variable m is bound to the matched Message object itself. 
The then action of the rule specifies to print the content of the bound variable message to System.out , and then changes the values of the message and status attributes of the Message object bound to m . The rule uses the modify statement to apply a block of assignments in one statement and to notify the decision engine of the changes at the end of the block. "Hello World" rule The "Good Bye" rule is similar to the "Hello World" rule except that it matches Message objects that have the status Message.GOODBYE . "Good Bye" rule To execute the example, run the org.drools.examples.helloworld.HelloWorldExample class as a Java application in your IDE. The rule writes to System.out , the debug listener writes to System.err , and the audit logger creates a log file in target/helloworld.log . System.out output in the IDE console System.err output in the IDE console To better understand the execution flow of this example, you can load the audit log file from target/helloworld.log into your IDE debug view or Audit View , if available (for example, in Window Show View in some IDEs). In this example, the Audit view shows that the object is inserted, which creates an activation for the "Hello World" rule. The activation is then executed, which updates the Message object and causes the "Good Bye" rule to activate. Finally, the "Good Bye" rule is executed. When you select an event in the Audit View , the origin event, which is the "Activation created" event in this example, is highlighted in green. Figure 21.3. Hello World example Audit View 21.3. State example decisions (forward chaining and conflict resolution) The State example decision set demonstrates how the decision engine uses forward chaining and any changes to facts in the working memory to resolve execution conflicts for rules in a sequence. The example focuses on resolving conflicts through salience values or through agenda groups that you can define in rules. The following is an overview of the State example: Name : state Main classes : org.drools.examples.state.StateExampleUsingSalience , org.drools.examples.state.StateExampleUsingAgendaGroup (in src/main/java ) Module : drools-examples Type : Java application Rule files : org.drools.examples.state.*.drl (in src/main/resources ) Objective : Demonstrates forward chaining and conflict resolution through rule salience and agenda groups A forward-chaining rule system is a data-driven system that starts with a fact in the working memory of the decision engine and reacts to changes to that fact. When objects are inserted into working memory, any rule conditions that become true as a result of the change are scheduled for execution by the agenda. In contrast, a backward-chaining rule system is a goal-driven system that starts with a conclusion that the decision engine attempts to satisfy, often using recursion. If the system cannot reach the conclusion or goal, it searches for subgoals, which are conclusions that complete part of the current goal. The system continues this process until either the initial conclusion is satisfied or all subgoals are satisfied. The decision engine in Red Hat Process Automation Manager uses both forward and backward chaining to evaluate rules. The following diagram illustrates how the decision engine evaluates rules using forward chaining overall with a backward-chaining segment in the logic flow: Figure 21.4. 
Rule evaluation logic using forward and backward chaining In the State example, each State class has fields for its name and its current state (see the class org.drools.examples.state.State ). The following states are the two possible states for each object: NOTRUN FINISHED State class public class State { public static final int NOTRUN = 0; public static final int FINISHED = 1; private final PropertyChangeSupport changes = new PropertyChangeSupport( this ); private String name; private int state; ... setters and getters go here... } The State example contains two versions of the same example to resolve rule execution conflicts: A StateExampleUsingSalience version that resolves conflicts by using rule salience A StateExampleUsingAgendaGroups version that resolves conflicts by using rule agenda groups Both versions of the state example involve four State objects: A , B , C , and D . Initially, their states are set to NOTRUN , which is the default value for the constructor that the example uses. State example using salience The StateExampleUsingSalience version of the State example uses salience values in rules to resolve rule execution conflicts. Rules with a higher salience value are given higher priority when ordered in the activation queue. The example inserts each State instance into the KIE session and then calls fireAllRules() . Salience State example execution final State a = new State( "A" ); final State b = new State( "B" ); final State c = new State( "C" ); final State d = new State( "D" ); ksession.insert( a ); ksession.insert( b ); ksession.insert( c ); ksession.insert( d ); ksession.fireAllRules(); // Dispose KIE session if stateful (not required if stateless). ksession.dispose(); To execute the example, run the org.drools.examples.state.StateExampleUsingSalience class as a Java application in your IDE. After the execution, the following output appears in the IDE console window: Salience State example output in the IDE console Four rules are present. First, the "Bootstrap" rule fires, setting A to state FINISHED , which then causes B to change its state to FINISHED . Objects C and D are both dependent on B , causing a conflict that is resolved by the salience values. To better understand the execution flow of this example, you can load the audit log file from target/state.log into your IDE debug view or Audit View , if available (for example, in Window Show View in some IDEs). In this example, the Audit View shows that the assertion of the object A in the state NOTRUN activates the "Bootstrap" rule, while the assertions of the other objects have no immediate effect. Figure 21.5. Salience State example Audit View Rule "Bootstrap" in salience State example The execution of the "Bootstrap" rule changes the state of A to FINISHED , which activates rule "A to B" . Rule "A to B" in salience State example The execution of rule "A to B" changes the state of B to FINISHED , which activates both rules "B to C" and "B to D" , placing their activations onto the decision engine agenda. Rules "B to C" and "B to D" in salience State example From this point on, both rules may fire and, therefore, the rules are in conflict. The conflict resolution strategy enables the decision engine agenda to decide which rule to fire. Rule "B to C" has the higher salience value ( 10 versus the default salience value of 0 ), so it fires first, modifying object C to state FINISHED . 
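You can also make this firing order visible directly in the IDE console by attaching an agenda event listener to the KIE session before calling fireAllRules() . The following minimal sketch uses the public org.kie.api listener interfaces; the FiringOrderLogger class name is illustrative and is not part of the shipped example.
Firing-order listener (sketch)
import org.kie.api.event.rule.AfterMatchFiredEvent;
import org.kie.api.event.rule.DefaultAgendaEventListener;
import org.kie.api.runtime.KieSession;

public class FiringOrderLogger extends DefaultAgendaEventListener {

    @Override
    public void afterMatchFired(AfterMatchFiredEvent event) {
        // Print each rule in the order the agenda fires it.
        System.out.println( "Fired: " + event.getMatch().getRule().getName() );
    }

    public static void attach(KieSession ksession) {
        ksession.addEventListener( new FiringOrderLogger() );
    }
}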
The Audit View in your IDE shows the modification of the State object in the rule "A to B" , which results in two activations being in conflict. You can also use the Agenda View in your IDE to investigate the state of the decision engine agenda. In this example, the Agenda View shows the breakpoint in the rule "A to B" and the state of the agenda with the two conflicting rules. Rule "B to D" fires last, modifying object D to state FINISHED . Figure 21.6. Salience State example Agenda View State example using agenda groups The StateExampleUsingAgendaGroups version of the State example uses agenda groups in rules to resolve rule execution conflicts. Agenda groups enable you to partition the decision engine agenda to provide more execution control over groups of rules. By default, all rules are in the agenda group MAIN . You can use the agenda-group attribute to specify a different agenda group for the rule. Initially, a working memory has its focus on the agenda group MAIN . Rules in an agenda group only fire when the group receives the focus. You can set the focus either by using the method setFocus() or the rule attribute auto-focus . The auto-focus attribute enables the rule to be given a focus automatically for its agenda group when the rule is matched and activated. In this example, the auto-focus attribute enables rule "B to C" to fire before "B to D" . Rule "B to C" in agenda group State example The rule "B to C" calls setFocus() on the agenda group "B to D" , enabling its active rules to fire, which then enables the rule "B to D" to fire. Rule "B to D" in agenda group State example To execute the example, run the org.drools.examples.state.StateExampleUsingAgendaGroups class as a Java application in your IDE. After the execution, the following output appears in the IDE console window (same as the salience version of the State example): Agenda group State example output in the IDE console Dynamic facts in the State example Another notable concept in this State example is the use of dynamic facts , based on objects that implement a PropertyChangeListener object. In order for the decision engine to see and react to changes of fact properties, the application must notify the decision engine that changes occurred. You can configure this communication explicitly in the rules by using the modify statement, or implicitly by specifying that the facts implement the PropertyChangeSupport interface as defined by the JavaBeans specification. This example demonstrates how to use the PropertyChangeSupport interface to avoid the need for explicit modify statements in the rules. To make use of this interface, ensure that your facts implement PropertyChangeSupport in the same way that the class org.drools.example.State implements it, and then use the following code in the DRL rule file to configure the decision engine to listen for property changes on those facts: Declaring a dynamic fact When you use PropertyChangeListener objects, each setter must implement additional code for the notification. For example, the following setter for state is in the class org.drools.examples : Setter example with PropertyChangeSupport public void setState(final int newState) { int oldState = this.state; this.state = newState; this.changes.firePropertyChange( "state", oldState, newState ); } 21.4. Fibonacci example decisions (recursion and conflict resolution) The Fibonacci example decision set demonstrates how the decision engine uses recursion to resolve execution conflicts for rules in a sequence. 
The example focuses on resolving conflicts through salience values that you can define in rules. The following is an overview of the Fibonacci example: Name : fibonacci Main class : org.drools.examples.fibonacci.FibonacciExample (in src/main/java ) Module : drools-examples Type : Java application Rule file : org.drools.examples.fibonacci.Fibonacci.drl (in src/main/resources ) Objective : Demonstrates recursion and conflict resolution through rule salience The Fibonacci Numbers form a sequence starting with 0 and 1. The Fibonacci number is obtained by adding the two preceding Fibonacci numbers: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, 10946, and so on. The Fibonacci example uses the single fact class Fibonacci with the following two fields: sequence value The sequence field indicates the position of the object in the Fibonacci number sequence. The value field shows the value of that Fibonacci object for that sequence position, where -1 indicates a value that still needs to be computed. Fibonacci class public static class Fibonacci { private int sequence; private long value; public Fibonacci( final int sequence ) { this.sequence = sequence; this.value = -1; } ... setters and getters go here... } To execute the example, run the org.drools.examples.fibonacci.FibonacciExample class as a Java application in your IDE. After the execution, the following output appears in the IDE console window: Fibonacci example output in the IDE console To achieve this behavior in Java, the example inserts a single Fibonacci object with a sequence field of 50 . The example then uses a recursive rule to insert the other 49 Fibonacci objects. Instead of implementing the PropertyChangeSupport interface to use dynamic facts, this example uses the MVEL dialect modify keyword to enable a block setter action and notify the decision engine of changes. Fibonacci example execution ksession.insert( new Fibonacci( 50 ) ); ksession.fireAllRules(); This example uses the following three rules: "Recurse" "Bootstrap" "Calculate" The rule "Recurse" matches each asserted Fibonacci object with a value of -1 , creating and asserting a new Fibonacci object with a sequence of one less than the currently matched object. Each time a Fibonacci object is added while the one with a sequence field equal to 1 does not exist, the rule re-matches and fires again. The not conditional element is used to stop the rule matching once you have all 50 Fibonacci objects in memory. The rule also has a salience value because you need to have all 50 Fibonacci objects asserted before you execute the "Bootstrap" rule. Rule "Recurse" To better understand the execution flow of this example, you can load the audit log file from target/fibonacci.log into your IDE debug view or Audit View , if available (for example, in Window Show View in some IDEs). In this example, the Audit View shows the original assertion of the Fibonacci object with a sequence field of 50 , done from Java code. From there on, the Audit View shows the continual recursion of the rule, where each asserted Fibonacci object causes the "Recurse" rule to become activated and to fire again. Figure 21.7. Rule "Recurse" in Audit View When a Fibonacci object with a sequence field of 2 is asserted, the "Bootstrap" rule is matched and activated along with the "Recurse" rule. 
Notice the multiple restrictions on field sequence that test for equality with 1 or 2 : Rule "Bootstrap" You can also use the Agenda View in your IDE to investigate the state of the decision engine agenda. The "Bootstrap" rule does not fire yet because the "Recurse" rule has a higher salience value. Figure 21.8. Rules "Recurse" and "Bootstrap" in Agenda View 1 When a Fibonacci object with a sequence of 1 is asserted, the "Bootstrap" rule is matched again, causing two activations for this rule. The "Recurse" rule does not match and activate because the not conditional element stops the rule matching as soon as a Fibonacci object with a sequence of 1 exists. Figure 21.9. Rules "Recurse" and "Bootstrap" in Agenda View 2 The "Bootstrap" rule sets the objects with a sequence of 1 and 2 to a value of 1 . Now that you have two Fibonacci objects with values not equal to -1 , the "Calculate" rule is able to match. At this point in the example, nearly 50 Fibonacci objects exist in the working memory. You need to select a suitable triple to calculate each of their values in turn. If you use three Fibonacci patterns in a rule without field constraints to confine the possible cross products, the result would be 50x49x48 possible combinations, leading to about 125,000 possible rule firings, most of them incorrect. The "Calculate" rule uses field constraints to evaluate the three Fibonacci patterns in the correct order. This technique is called cross-product matching . The first pattern finds any Fibonacci object with a value != -1 and binds both the pattern and the field. The second Fibonacci object does the same thing, but adds an additional field constraint to ensure that its sequence is greater by one than the Fibonacci object bound to f1 . When this rule fires for the first time, you know that only sequences 1 and 2 have values of 1 , and the two constraints ensure that f1 references sequence 1 and that f2 references sequence 2 . The final pattern finds the Fibonacci object with a value equal to -1 and with a sequence one greater than f2 . At this point in the example, three Fibonacci objects are correctly selected from the available cross products, and you can calculate the value for the third Fibonacci object that is bound to f3 . Rule "Calculate" The modify statement updates the value of the Fibonacci object bound to f3 . This means that you now have another new Fibonacci object with a value not equal to -1 , which allows the "Calculate" rule to re-match and calculate the Fibonacci number. The debug view or Audit View of your IDE shows how the firing of the last "Bootstrap" rule modifies the Fibonacci object, enabling the "Calculate" rule to match, which then modifies another Fibonacci object that enables the "Calculate" rule to match again. This process continues until the value is set for all Fibonacci objects. Figure 21.10. Rules in Audit View 21.5. Pricing example decisions (decision tables) The Pricing example decision set demonstrates how to use a spreadsheet decision table for calculating the retail cost of an insurance policy in tabular format instead of directly in a DRL file. 
The following is an overview of the Pricing example: Name : decisiontable Main class : org.drools.examples.decisiontable.PricingRuleDTExample (in src/main/java ) Module : drools-examples Type : Java application Rule file : org.drools.examples.decisiontable.ExamplePolicyPricing.xls (in src/main/resources ) Objective : Demonstrates use of spreadsheet decision tables to define rules Spreadsheet decision tables are XLS or XLSX spreadsheets that contain business rules defined in a tabular format. You can include spreadsheet decision tables with standalone Red Hat Process Automation Manager projects or upload them to projects in Business Central. Each row in a decision table is a rule, and each column is a condition, an action, or another rule attribute. After you create and upload your decision tables into your Red Hat Process Automation Manager project, the rules you defined are compiled into Drools Rule Language (DRL) rules as with all other rule assets. The purpose of the Pricing example is to provide a set of business rules to calculate the base price and a discount for a car driver applying for a specific type of insurance policy. The driver's age and history and the policy type all contribute to calculate the basic premium, and additional rules calculate potential discounts for which the driver might be eligible. To execute the example, run the org.drools.examples.decisiontable.PricingRuleDTExample class as a Java application in your IDE. After the execution, the following output appears in the IDE console window: The code to execute the example follows the typical execution pattern: the rules are loaded, the facts are inserted, and a stateless KIE session is created. The difference in this example is that the rules are defined in an ExamplePolicyPricing.xls file instead of a DRL file or other source. The spreadsheet file is loaded into the decision engine using templates and DRL rules. Spreadsheet decision table setup The ExamplePolicyPricing.xls spreadsheet contains two decision tables in the first tab: Base pricing rules Promotional discount rules As the example spreadsheet demonstrates, you can use only the first tab of a spreadsheet to create decision tables, but multiple tables can be within a single tab. Decision tables do not necessarily follow top-down logic, but are more of a means to capture data resulting in rules. The evaluation of the rules is not necessarily in the given order, because all of the normal mechanics of the decision engine still apply. This is why you can have multiple decision tables in the same tab of a spreadsheet. The decision tables are executed through the corresponding rule template files BasePricing.drt and PromotionalPricing.drt . These template files reference the decision tables through their template parameter and directly reference the various headers for the conditions and actions in the decision tables. BasePricing.drt rule template file PromotionalPricing.drt rule template file The rules are executed through the kmodule.xml reference of the KIE Session DTableWithTemplateKB , which specifically mentions the ExamplePolicyPricing.xls spreadsheet and is required for successful execution of the rules. This execution method enables you to execute the rules as a standalone unit (as in this example) or to include the rules in a packaged knowledge JAR (KJAR) file, so that the spreadsheet is packaged along with the rules for execution. 
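If you want to experiment with the decision table outside of the PricingRuleDTExample class, the same load-insert-execute pattern can be sketched as follows. This is only an outline, not the shipped code: it uses the stateless DecisionTableKS session rather than the template-based DTableWithTemplateKS session described above, the Driver and Policy fact classes are assumed to come from the example sources, and a getBasePrice() getter matching the setBasePrice action is assumed on Policy .
Decision table execution (sketch)
import java.util.Arrays;

import org.drools.examples.decisiontable.Driver;   // fact classes assumed to sit in the example's package
import org.drools.examples.decisiontable.Policy;
import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.StatelessKieSession;

public class PricingSketch {

    public static void main(String[] args) {
        KieServices ks = KieServices.Factory.get();
        KieContainer kc = ks.getKieClasspathContainer();

        // "DecisionTableKS" is declared as a stateless KIE session in META-INF/kmodule.xml (shown below).
        StatelessKieSession session = kc.newStatelessKieSession( "DecisionTableKS" );

        Driver driver = new Driver();   // example defaults: age 30, no prior claims, LOW risk profile
        Policy policy = new Policy();   // example default: COMPREHENSIVE

        // A stateless session inserts the facts, fires all rules, and cleans up in a single call.
        session.execute( Arrays.asList( driver, policy ) );

        // Assumes a getter matching the setBasePrice action used by the decision table.
        System.out.println( "Base price: " + policy.getBasePrice() );
    }
}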
The following section of the kmodule.xml file is required for the execution of the rules and spreadsheet to work successfully: <kbase name="DecisionTableKB" packages="org.drools.examples.decisiontable"> <ksession name="DecisionTableKS" type="stateless"/> </kbase> <kbase name="DTableWithTemplateKB" packages="org.drools.examples.decisiontable-template"> <ruleTemplate dtable="org/drools/examples/decisiontable-template/ExamplePolicyPricingTemplateData.xls" template="org/drools/examples/decisiontable-template/BasePricing.drt" row="3" col="3"/> <ruleTemplate dtable="org/drools/examples/decisiontable-template/ExamplePolicyPricingTemplateData.xls" template="org/drools/examples/decisiontable-template/PromotionalPricing.drt" row="18" col="3"/> <ksession name="DTableWithTemplateKS"/> </kbase> As an alternative to executing the decision tables using rule template files, you can use the DecisionTableConfiguration object and specify an input spreadsheet as the input type, such as DecisionTableInputType.xls : DecisionTableConfiguration dtableconfiguration = KnowledgeBuilderFactory.newDecisionTableConfiguration(); dtableconfiguration.setInputType( DecisionTableInputType.XLS ); KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder(); Resource xlsRes = ResourceFactory.newClassPathResource( "ExamplePolicyPricing.xls", getClass() ); kbuilder.add( xlsRes, ResourceType.DTABLE, dtableconfiguration ); The Pricing example uses two fact types: Driver Policy . The example sets the default values for both facts in their respective Java classes Driver.java and Policy.java . The Driver is 30 years old, has had no prior claims, and currently has a risk profile of LOW . The Policy that the driver is applying for is COMPREHENSIVE . In any decision table, each row is considered a different rule and each column is a condition or an action. Each row is evaluated in a decision table unless the agenda is cleared upon execution. Decision table spreadsheets (XLS or XLSX) require two key areas that define rule data: A RuleSet area A RuleTable area The RuleSet area of the spreadsheet defines elements that you want to apply globally to all rules in the same package (not only the spreadsheet), such as a rule set name or universal rule attributes. The RuleTable area defines the actual rules (rows) and the conditions, actions, and other rule attributes (columns) that constitute that rule table within the specified rule set. A decision table spreadsheet can contain multiple RuleTable areas, but only one RuleSet area. Figure 21.11. Decision table configuration The RuleTable area also defines the objects to which the rule attributes apply, in this case Driver and Policy , followed by constraints on the objects. For example, the Driver object constraint that defines the Age Bracket column is age >= USD1, age <= USD2 , where the comma-separated range is defined in the table column values, such as 18,24 . Base pricing rules The Base pricing rules decision table in the Pricing example evaluates the age, risk profile, number of claims, and policy type of the driver and produces the base price of the policy based on these conditions. Figure 21.12. Base price calculation The Driver attributes are defined in the following table columns: Age Bracket : The age bracket has a definition for the condition age >=USD1, age <=USD2 , which defines the condition boundaries for the driver's age. This condition column highlights the use of USD1 and USD2 , which is comma delimited in the spreadsheet. 
You can write these values as 18,24 or 18, 24 and both formats work in the execution of the business rules. Location risk profile : The risk profile is a string that the example program passes always as LOW but can be changed to reflect MED or HIGH . Number of prior claims : The number of claims is defined as an integer that the condition column must exactly equal to trigger the action. The value is not a range, only exact matches. The Policy of the decision table is used in both the conditions and the actions of the rule and has attributes defined in the following table columns: Policy type applying for : The policy type is a condition that is passed as a string that defines the type of coverage: COMPREHENSIVE , FIRE_THEFT , or THIRD_PARTY . Base USD AUD : The basePrice is defined as an ACTION that sets the price through the constraint policy.setBasePrice(USDparam); based on the spreadsheet cells corresponding to this value. When you execute the corresponding DRL rule for this decision table, the then portion of the rule executes this action statement on the true conditions matching the facts and sets the base price to the corresponding value. Record Reason : When the rule successfully executes, this action generates an output message to the System.out console reflecting which rule fired. This is later captured in the application and printed. The example also uses the first column on the left to categorize rules. This column is for annotation only and has no affect on rule execution. Promotional discount rules The Promotional discount rules decision table in the Pricing example evaluates the age, number of prior claims, and policy type of the driver to generate a potential discount on the price of the insurance policy. Figure 21.13. Discount calculation This decision table contains the conditions for the discount for which the driver might be eligible. Similar to the base price calculation, this table evaluates the Age , Number of prior claims of the driver, and the Policy type applying for to determine a Discount % rate to be applied. For example, if the driver is 30 years old, has no prior claims, and is applying for a COMPREHENSIVE policy, the driver is given a discount of 20 percent. 21.6. Pet Store example decisions (agenda groups, global variables, callbacks, and GUI integration) The Pet Store example decision set demonstrates how to use agenda groups and global variables in rules and how to integrate Red Hat Process Automation Manager rules with a graphical user interface (GUI), in this case a Swing-based desktop application. The example also demonstrates how to use callbacks to interact with a running decision engine to update the GUI based on changes in the working memory at run time. The following is an overview of the Pet Store example: Name : petstore Main class : org.drools.examples.petstore.PetStoreExample (in src/main/java ) Module : drools-examples Type : Java application Rule file : org.drools.examples.petstore.PetStore.drl (in src/main/resources ) Objective : Demonstrates rule agenda groups, global variables, callbacks, and GUI integration In the Pet Store example, the sample PetStoreExample.java class defines the following principal classes (in addition to several classes to handle Swing events): Petstore contains the main() method. PetStoreUI is responsible for creating and displaying the Swing-based GUI. This class contains several smaller classes, mainly for responding to various GUI events, such as user mouse clicks. TableModel holds the table data. 
This class is essentially a JavaBean that extends the Swing class AbstractTableModel . CheckoutCallback enables the GUI to interact with the rules. Ordershow keeps the items that you want to buy. Purchase stores details of the order and the products that you are buying. Product is a JavaBean containing details of the product available for purchase and its price. Much of the Java code in this example is either plain JavaBean or Swing based. For more information about Swing components, see the Java tutorial on Creating a GUI with JFC/Swing . Rule execution behavior in the Pet Store example Unlike other example decision sets where the facts are asserted and fired immediately, the Pet Store example does not execute the rules until more facts are gathered based on user interaction. The example executes rules through a PetStoreUI object, created by a constructor, that accepts the Vector object stock for collecting the products. The example then uses an instance of the CheckoutCallback class containing the rule base that was previously loaded. Pet Store KIE container and fact execution setup // KieServices is the factory for all KIE services. KieServices ks = KieServices.Factory.get(); // Create a KIE container on the class path. KieContainer kc = ks.getKieClasspathContainer(); // Create the stock. Vector<Product> stock = new Vector<Product>(); stock.add( new Product( "Gold Fish", 5 ) ); stock.add( new Product( "Fish Tank", 25 ) ); stock.add( new Product( "Fish Food", 2 ) ); // A callback is responsible for populating the working memory and for firing all rules. PetStoreUI ui = new PetStoreUI( stock, new CheckoutCallback( kc ) ); ui.createAndShowGUI(); The Java code that fires the rules is in the CheckoutCallBack.checkout() method. This method is triggered when the user clicks Checkout in the UI. Rule execution from CheckoutCallBack.checkout() public String checkout(JFrame frame, List<Product> items) { Order order = new Order(); // Iterate through list and add to cart. for ( Product p: items ) { order.addItem( new Purchase( order, p ) ); } // Add the JFrame to the ApplicationData to allow for user interaction. // From the KIE container, a KIE session is created based on // its definition and configuration in the META-INF/kmodule.xml file. KieSession ksession = kcontainer.newKieSession("PetStoreKS"); ksession.setGlobal( "frame", frame ); ksession.setGlobal( "textArea", this.output ); ksession.insert( new Product( "Gold Fish", 5 ) ); ksession.insert( new Product( "Fish Tank", 25 ) ); ksession.insert( new Product( "Fish Food", 2 ) ); ksession.insert( new Product( "Fish Food Sample", 0 ) ); ksession.insert( order ); // Execute rules. ksession.fireAllRules(); // Return the state of the cart return order.toString(); } The example code passes two elements into the CheckoutCallBack.checkout() method. One element is the handle for the JFrame Swing component surrounding the output text frame, found at the bottom of the GUI. The second element is a list of order items, which comes from the TableModel that stores the information from the Table area at the upper-right section of the GUI. The for loop transforms the list of order items coming from the GUI into the Order JavaBean, also contained in the file PetStoreExample.java . In this case, the rule is firing in a stateless KIE session because all of the data is stored in Swing components and is not executed until the user clicks Checkout in the UI. 
Each time the user clicks Checkout , the content of the list is moved from the Swing TableModel into the KIE session working memory and is then executed with the ksession.fireAllRules() method. Within this code, there are nine calls to KieSession . The first of these creates a new KieSession from the KieContainer (the example passed in this KieContainer from the CheckoutCallBack class in the main() method). The two calls pass in the two objects that hold the global variables in the rules: the Swing text area and the Swing frame used for writing messages. More inserts put information on products into the KieSession , as well as the order list. The final call is the standard fireAllRules() . Pet Store rule file imports, global variables, and Java functions The PetStore.drl file contains the standard package and import statements to make various Java classes available to the rules. The rule file also includes global variables to be used within the rules, defined as frame and textArea . The global variables hold references to the Swing components JFrame and JTextArea components that were previously passed on by the Java code that called the setGlobal() method. Unlike standard variables in rules, which expire as soon as the rule has fired, global variables retain their value for the lifetime of the KIE session. This means the contents of these global variables are available for evaluation on all subsequent rules. PetStore.drl package, imports, and global variables package org.drools.examples; import org.kie.api.runtime.KieRuntime; import org.drools.examples.petstore.PetStoreExample.Order; import org.drools.examples.petstore.PetStoreExample.Purchase; import org.drools.examples.petstore.PetStoreExample.Product; import java.util.ArrayList; import javax.swing.JOptionPane; import javax.swing.JFrame; global JFrame frame global javax.swing.JTextArea textArea The PetStore.drl file also contains two functions that the rules in the file use: PetStore.drl Java functions function void doCheckout(JFrame frame, KieRuntime krt) { Object[] options = {"Yes", "No"}; int n = JOptionPane.showOptionDialog(frame, "Would you like to checkout?", "", JOptionPane.YES_NO_OPTION, JOptionPane.QUESTION_MESSAGE, null, options, options[0]); if (n == 0) { krt.getAgenda().getAgendaGroup( "checkout" ).setFocus(); } } function boolean requireTank(JFrame frame, KieRuntime krt, Order order, Product fishTank, int total) { Object[] options = {"Yes", "No"}; int n = JOptionPane.showOptionDialog(frame, "Would you like to buy a tank for your " + total + " fish?", "Purchase Suggestion", JOptionPane.YES_NO_OPTION, JOptionPane.QUESTION_MESSAGE, null, options, options[0]); System.out.print( "SUGGESTION: Would you like to buy a tank for your " + total + " fish? - " ); if (n == 0) { Purchase purchase = new Purchase( order, fishTank ); krt.insert( purchase ); order.addItem( purchase ); System.out.println( "Yes" ); } else { System.out.println( "No" ); } return true; } The two functions perform the following actions: doCheckout() displays a dialog that asks the user if she or he wants to check out. If the user does, the focus is set to the checkout agenda group, enabling rules in that group to (potentially) fire. requireTank() displays a dialog that asks the user if she or he wants to buy a fish tank. If the user does, a new fish tank Product is added to the order list in the working memory. Note For this example, all rules and functions are within the same rule file for efficiency. 
In a production environment, you typically separate the rules and functions in different files or build a static Java method and import the files using the import function, such as import function my.package.name.hello . Pet Store rules with agenda groups Most of the rules in the Pet Store example use agenda groups to control rule execution. Agenda groups allow you to partition the decision engine agenda to provide more execution control over groups of rules. By default, all rules are in the agenda group MAIN . You can use the agenda-group attribute to specify a different agenda group for the rule. Initially, a working memory has its focus on the agenda group MAIN . Rules in an agenda group only fire when the group receives the focus. You can set the focus either by using the method setFocus() or the rule attribute auto-focus . The auto-focus attribute enables the rule to be given a focus automatically for its agenda group when the rule is matched and activated. The Pet Store example uses the following agenda groups for rules: "init" "evaluate" "show items" "checkout" For example, the sample rule "Explode Cart" uses the "init" agenda group to ensure that it has the option to fire and insert shopping cart items into the KIE session working memory: Rule "Explode Cart" This rule matches against all orders that do not yet have their grossTotal calculated. The execution loops for each purchase item in that order. The rule uses the following features related to its agenda group: agenda-group "init" defines the name of the agenda group. In this case, only one rule is in the group. However, neither the Java code nor a rule consequence sets the focus to this group, and therefore it relies on the auto-focus attribute for its chance to fire. auto-focus true ensures that this rule, while being the only rule in the agenda group, gets a chance to fire when fireAllRules() is called from the Java code. kcontext... .setFocus() sets the focus to the "show items" and "evaluate" agenda groups, enabling their rules to fire. In practice, you loop through all items in the order, insert them into memory, and then fire the other rules after each insertion. The "show items" agenda group contains only one rule, "Show Items" . For each purchase in the order currently in the KIE session working memory, the rule logs details to the text area at the bottom of the GUI, based on the textArea variable defined in the rule file. Rule "Show Items" The "evaluate" agenda group also gains focus from the "Explode Cart" rule. This agenda group contains two rules, "Free Fish Food Sample" and "Suggest Tank" , which are executed in that order. Rule "Free Fish Food Sample" The rule "Free Fish Food Sample" fires only if all of the following conditions are true: 1 The agenda group "evaluate" is being evaluated in the rules execution. 2 User does not already have fish food. 3 User does not already have a free fish food sample. 4 User has a goldfish in the order. If the order facts meet all of these requirements, then a new product is created (Fish Food Sample) and is added to the order in working memory. Rule "Suggest Tank" The rule "Suggest Tank" fires only if the following conditions are true: 1 User does not have a fish tank in the order. 2 User has more than five fish in the order. When the rule fires, it calls the requireTank() function defined in the rule file. This function displays a dialog that asks the user if she or he wants to buy a fish tank. 
If the user does, a new fish tank Product is added to the order list in the working memory. When the rule calls the requireTank() function, the rule passes the frame global variable so that the function has a handle for the Swing GUI. The "do checkout" rule in the Pet Store example has no agenda group and no when conditions, so the rule is always executed and considered part of the default MAIN agenda group. Rule "do checkout" When the rule fires, it calls the doCheckout() function defined in the rule file. This function displays a dialog that asks the user if she or he wants to check out. If the user does, the focus is set to the checkout agenda group, enabling rules in that group to (potentially) fire. When the rule calls the doCheckout() function, the rule passes the frame global variable so that the function has a handle for the Swing GUI. Note This example also demonstrates a troubleshooting technique if results are not executing as you expect: You can remove the conditions from the when statement of a rule and test the action in the then statement to verify that the action is performed correctly. The "checkout" agenda group contains three rules for processing the order checkout and applying any discounts: "Gross Total" , "Apply 5% Discount" , and "Apply 10% Discount" . Rules "Gross Total", "Apply 5% Discount", and "Apply 10% Discount" If the user has not already calculated the gross total, the Gross Total accumulates the product prices into a total, puts this total into the KIE session, and displays it through the Swing JTextArea using the textArea global variable. If the gross total is between 10 and 20 (currency units), the "Apply 5% Discount" rule calculates the discounted total, adds it to the KIE session, and displays it in the text area. If the gross total is not less than 20 , the "Apply 10% Discount" rule calculates the discounted total, adds it to the KIE session, and displays it in the text area. Pet Store example execution Similar to other Red Hat Process Automation Manager decision examples, you execute the Pet Store example by running the org.drools.examples.petstore.PetStoreExample class as a Java application in your IDE. When you execute the Pet Store example, the Pet Store Demo GUI window appears. This window displays a list of available products (upper left), an empty list of selected products (upper right), Checkout and Reset buttons (middle), and an empty system messages area (bottom). Figure 21.14. Pet Store example GUI after launch The following events occurred in this example to establish this execution behavior: The main() method has run and loaded the rule base but has not yet fired the rules. So far, this is the only code in connection with rules that has been run. A new PetStoreUI object has been created and given a handle for the rule base, for later use. Various Swing components have performed their functions, and the initial UI screen is displayed and waits for user input. You can click various products from the list to explore the UI setup: Figure 21.15. Explore the Pet Store example GUI No rules code has been fired yet. The UI uses Swing code to detect user mouse clicks and add selected products to the TableModel object for display in the upper-right corner of the UI. This example illustrates the Model-View-Controller design pattern. When you click Checkout , the rules are then fired in the following way: Method CheckOutCallBack.checkout() is called (eventually) by the Swing class waiting for a user to click Checkout . 
This inserts the data from the TableModel object (upper-right corner of the UI) into the KIE session working memory. The method then fires the rules. The "Explode Cart" rule is the first to fire, with the auto-focus attribute set to true . The rule loops through all of the products in the cart, ensures that the products are in the working memory, and then gives the "show Items" and "evaluate" agenda groups the option to fire. The rules in these groups add the contents of the cart to the text area (bottom of the UI), evaluate if you are eligible for free fish food, and determine whether to ask if you want to buy a fish tank. Figure 21.16. Fish tank qualification The "do checkout" rule is the to fire because no other agenda group currently has focus and because it is part of the default MAIN agenda group. This rule always calls the doCheckout() function, which asks you if you want to check out. The doCheckout() function sets the focus to the "checkout" agenda group, giving the rules in that group the option to fire. The rules in the "checkout" agenda group display the contents of the cart and apply the appropriate discount. Swing then waits for user input to either select more products (and cause the rules to fire again) or to close the UI. Figure 21.17. Pet Store example GUI after all rules have fired You can add more System.out calls to demonstrate this flow of events in your IDE console: System.out output in the IDE console 21.7. Honest Politician example decisions (truth maintenance and salience) The Honest Politician example decision set demonstrates the concept of truth maintenance with logical insertions and the use of salience in rules. The following is an overview of the Honest Politician example: Name : honestpolitician Main class : org.drools.examples.honestpolitician.HonestPoliticianExample (in src/main/java ) Module : drools-examples Type : Java application Rule file : org.drools.examples.honestpolitician.HonestPolitician.drl (in src/main/resources ) Objective : Demonstrates the concept of truth maintenance based on the logical insertion of facts and the use of salience in rules The basic premise of the Honest Politician example is that an object can only exist while a statement is true. A rule consequence can logically insert an object with the insertLogical() method. This means the object remains in the KIE session working memory as long as the rule that logically inserted it remains true. When the rule is no longer true, the object is automatically retracted. In this example, rule execution causes a group of politicians to change from being honest to being dishonest as a result of a corrupt corporation. As each politician is evaluated, they start out with their honesty attribute being set to true , but a rule fires that makes the politicians no longer honest. As they switch their state from being honest to dishonest, they are then removed from the working memory. The rule salience notifies the decision engine how to prioritize any rules that have a salience defined for them, otherwise utilizing the default salience value of 0 . Rules with a higher salience value are given higher priority when ordered in the activation queue. Politician and Hope classes The sample class Politician in the example is configured for an honest politician. The Politician class is made up of a String item name and a Boolean item honest : Politician class public class Politician { private String name; private boolean honest; ... } The Hope class determines if a Hope object exists. 
This class has no meaningful members, but is present in the working memory as long as society has hope. Hope class public class Hope { public Hope() { } } Rule definitions for politician honesty In the Honest Politician example, when at least one honest politician exists in the working memory, the "We have an honest Politician" rule logically inserts a new Hope object. As soon as all politicians become dishonest, the Hope object is automatically retracted. This rule has a salience attribute with a value of 10 to ensure that it fires before any other rule, because at that stage the "Hope is Dead" rule is true. Rule "We have an honest politician" As soon as a Hope object exists, the "Hope Lives" rule matches and fires. This rule also has a salience value of 10 so that it takes priority over the "Corrupt the Honest" rule. Rule "Hope Lives" Initially, four honest politicians exist so this rule has four activations, all in conflict. Each rule fires in turn, corrupting each politician so that they are no longer honest. When all four politicians have been corrupted, no politicians have the property honest == true . The rule "We have an honest Politician" is no longer true and the object it logically inserted (due to the last execution of new Hope() ) is automatically retracted. Rule "Corrupt the Honest" With the Hope object automatically retracted through the truth maintenance system, the conditional element not applied to Hope is no longer true so that the "Hope is Dead" rule matches and fires. Rule "Hope is Dead" Example execution and audit trail In the HonestPoliticianExample.java class, the four politicians with the honest state set to true are inserted for evaluation against the defined business rules: HonestPoliticianExample.java class execution public static void execute( KieContainer kc ) { KieSession ksession = kc.newKieSession("HonestPoliticianKS"); final Politician p1 = new Politician( "President of Umpa Lumpa", true ); final Politician p2 = new Politician( "Prime Minster of Cheeseland", true ); final Politician p3 = new Politician( "Tsar of Pringapopaloo", true ); final Politician p4 = new Politician( "Omnipotence Om", true ); ksession.insert( p1 ); ksession.insert( p2 ); ksession.insert( p3 ); ksession.insert( p4 ); ksession.fireAllRules(); ksession.dispose(); } To execute the example, run the org.drools.examples.honestpolitician.HonestPoliticianExample class as a Java application in your IDE. After the execution, the following output appears in the IDE console window: Execution output in the IDE console The output shows that, while there is at least one honest politician, democracy lives. However, as each politician is corrupted by some corporation, all politicians become dishonest, and democracy is dead. 
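If you want to confirm the final state of the facts programmatically instead of reading the console output, you can query the working memory after fireAllRules() and before dispose() . The following sketch is not part of the shipped example; it assumes that the Politician class sits alongside HonestPoliticianExample and exposes the usual getName() and isHonest() accessors.
Working memory check (sketch)
import java.util.Collection;

import org.drools.examples.honestpolitician.Politician;   // assumed to live in the example's package
import org.kie.api.runtime.ClassObjectFilter;
import org.kie.api.runtime.KieSession;

public class HonestPoliticianCheck {

    // Call this after fireAllRules() but before ksession.dispose().
    public static void printPoliticians(KieSession ksession) {
        Collection<? extends Object> facts =
                ksession.getObjects( new ClassObjectFilter( Politician.class ) );
        for ( Object fact : facts ) {
            Politician p = (Politician) fact;
            System.out.println( p.getName() + " honest=" + p.isHonest() );
        }
    }
}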
To better understand the execution flow of this example, you can modify the HonestPoliticianExample.java class to include a DebugRuleRuntimeEventListener listener and an audit logger to view execution details: HonestPoliticianExample.java class with an audit logger package org.drools.examples.honestpolitician; import org.kie.api.KieServices; import org.kie.api.event.rule.DebugAgendaEventListener; 1 import org.kie.api.event.rule.DebugRuleRuntimeEventListener; import org.kie.api.runtime.KieContainer; import org.kie.api.runtime.KieSession; public class HonestPoliticianExample { /** * @param args */ public static void main(final String[] args) { KieServices ks = KieServices.Factory.get(); 2 //ks = KieServices.Factory.get(); KieContainer kc = KieServices.Factory.get().getKieClasspathContainer(); System.out.println(kc.verify().getMessages().toString()); //execute( kc ); execute( ks, kc); 3 } public static void execute( KieServices ks, KieContainer kc ) { 4 KieSession ksession = kc.newKieSession("HonestPoliticianKS"); final Politician p1 = new Politician( "President of Umpa Lumpa", true ); final Politician p2 = new Politician( "Prime Minster of Cheeseland", true ); final Politician p3 = new Politician( "Tsar of Pringapopaloo", true ); final Politician p4 = new Politician( "Omnipotence Om", true ); ksession.insert( p1 ); ksession.insert( p2 ); ksession.insert( p3 ); ksession.insert( p4 ); // The application can also setup listeners 5 ksession.addEventListener( new DebugAgendaEventListener() ); ksession.addEventListener( new DebugRuleRuntimeEventListener() ); // Set up a file-based audit logger. ks.getLoggers().newFileLogger( ksession, "./target/honestpolitician" ); 6 ksession.fireAllRules(); ksession.dispose(); } } 1 Adds to your imports the packages that handle the DebugAgendaEventListener and DebugRuleRuntimeEventListener 2 Creates a KieServices Factory and a ks element to produce the logs because this audit log is not available at the KieContainer level 3 Modifies the execute method to use both KieServices and KieContainer 4 Modifies the execute method to pass in KieServices in addition to the KieContainer 5 Creates the listeners 6 Builds the log that can be passed into the debug view or Audit View or your IDE after executing of the rules When you run the Honest Politician with this modified logging capability, you can load the audit log file from target/honestpolitician.log into your IDE debug view or Audit View , if available (for example, in Window Show View in some IDEs). In this example, the Audit View shows the flow of executions, insertions, and retractions as defined in the example classes and rules: Figure 21.18. Honest Politician example Audit View When the first politician is inserted, two activations occur. The rule "We have an honest Politician" is activated only one time for the first inserted politician because it uses an exists conditional element, which matches when at least one politician is inserted. The rule "Hope is Dead" is also activated at this stage because the Hope object is not yet inserted. The rule "We have an honest Politician" fires first because it has a higher salience value than the rule "Hope is Dead" , and inserts the Hope object (highlighted in green). The insertion of the Hope object activates the rule "Hope Lives" and deactivates the rule "Hope is Dead" . The insertion also activates the rule "Corrupt the Honest" for each inserted honest politician. The rule "Hope Lives" is executed and prints "Hurrah!!! Democracy Lives" . 
, for each politician, the rule "Corrupt the Honest" fires, printing "I'm an evil corporation and I have corrupted X" , where X is the name of the politician, and modifies the politician honesty value to false . When the last honest politician is corrupted, Hope is automatically retracted by the truth maintenance system (highlighted in blue). The green highlighted area shows the origin of the currently selected blue highlighted area. After the Hope fact is retracted, the rule "Hope is dead" fires, printing "We are all Doomed!!! Democracy is Dead" . 21.8. Sudoku example decisions (complex pattern matching, callbacks, and GUI integration) The Sudoku example decision set, based on the popular number puzzle Sudoku, demonstrates how to use rules in Red Hat Process Automation Manager to find a solution in a large potential solution space based on various constraints. This example also shows how to integrate Red Hat Process Automation Manager rules into a graphical user interface (GUI), in this case a Swing-based desktop application, and how to use callbacks to interact with a running decision engine to update the GUI based on changes in the working memory at run time. The following is an overview of the Sudoku example: Name : sudoku Main class : org.drools.examples.sudoku.SudokuExample (in src/main/java ) Module : drools-examples Type : Java application Rule files : org.drools.examples.sudoku.*.drl (in src/main/resources ) Objective : Demonstrates complex pattern matching, problem solving, callbacks, and GUI integration Sudoku is a logic-based number placement puzzle. The objective is to fill a 9x9 grid so that each column, each row, and each of the nine 3x3 zones contains the digits from 1 to 9 only one time. The puzzle setter provides a partially completed grid and the puzzle solver's task is to complete the grid with these constraints. The general strategy to solve the problem is to ensure that when you insert a new number, it must be unique in its particular 3x3 zone, row, and column. This Sudoku example decision set uses Red Hat Process Automation Manager rules to solve Sudoku puzzles from a range of difficulty levels, and to attempt to resolve flawed puzzles that contain invalid entries. Sudoku example execution and interaction Similar to other Red Hat Process Automation Manager decision examples, you execute the Sudoku example by running the org.drools.examples.sudoku.SudokuExample class as a Java application in your IDE. When you execute the Sudoku example, the Drools Sudoku Example GUI window appears. This window contains an empty grid, but the program comes with various grids stored internally that you can load and solve. Click File Samples Simple to load one of the examples. Notice that all buttons are disabled until a grid is loaded. Figure 21.19. Sudoku example GUI after launch When you load the Simple example, the grid is filled according to the puzzle's initial state. Figure 21.20. Sudoku example GUI after loading Simple sample Choose from the following options: Click Solve to fire the rules defined in the Sudoku example that fill out the remaining values and that make the buttons inactive again. Figure 21.21. Simple sample solved Click Step to see the digit found by the rule set. The console window in your IDE displays detailed information about the rules that are executing to solve the step. Step execution output in the IDE console Click Dump to see the state of the grid, with cells showing either the established value or the remaining possibilities. 
Dump execution output in the IDE console The Sudoku example includes a deliberately broken sample file that the rules defined in the example can resolve. Click File Samples !DELIBERATELY BROKEN! to load the broken sample. The grid starts with some issues, for example, the value 5 appears two times in the first row, which is not allowed. Figure 21.22. Broken Sudoku example initial state Click Solve to apply the solving rules to this invalid grid. The associated solving rules in the Sudoku example detect the issues in the sample and attempt to solve the puzzle as far as possible. This process does not complete and leaves some cells empty. The solving rule activity is displayed in the IDE console window: Detected issues in the broken sample Figure 21.23. Broken sample solution attempt The sample Sudoku files labeled Hard are more complex and the solving rules might not be able to solve them. The unsuccessful solution attempt is displayed in the IDE console window: Hard sample unresolved The rules that work to solve the broken sample implement standard solving techniques based on the sets of values that are still candidates for a cell. For example, if a set contains a single value, then this is the value for the cell. For a single occurrence of a value in one of the groups of nine cells, the rules insert a fact of type Setting with the solution value for some specific cell. This fact causes the elimination of this value from all other cells in any of the groups the cell belongs to, and the Setting fact is then retracted. Other rules in the example reduce the permissible values for some cells. The rules "naked pair" , "hidden pair in row" , "hidden pair in column" , and "hidden pair in square" eliminate possibilities but do not establish solutions. The rules "X-wings in rows" , "X-wings in columns" , "intersection removal row" , and "intersection removal column" perform more sophisticated eliminations. Sudoku example classes The package org.drools.examples.sudoku.swing contains the following core set of classes that implement a framework for Sudoku puzzles: The SudokuGridModel class defines an interface that is implemented to store a Sudoku puzzle as a 9x9 grid of Cell objects. The SudokuGridView class is a Swing component that can visualize any implementation of the SudokuGridModel class. The SudokuGridEvent and SudokuGridListener classes communicate state changes between the model and the view. Events are fired when a cell value is resolved or changed. The SudokuGridSamples class provides partially filled Sudoku puzzles for demonstration purposes. Note This package does not have any dependencies on Red Hat Process Automation Manager libraries. The package org.drools.examples.sudoku contains the following core set of classes that implement the elementary Cell object and its various aggregations: The CellFile class, with subtypes CellRow , CellCol , and CellSqr , all of which are subtypes of the CellGroup class. The Cell and CellGroup subclasses of SetOfNine , which provides a property free with the type Set<Integer> . For a Cell class, the set represents the individual candidate set. For a CellGroup class, the set is the union of all candidate sets of its cells (the set of digits that still need to be allocated). The Sudoku example contains 81 Cell objects and 27 CellGroup objects, with a linkage provided by the Cell properties cellRow , cellCol , and cellSqr , and by the CellGroup property cells (a list of Cell objects).
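To make the candidate-set idea concrete, the following minimal Java sketch models a single cell and the set of digits that can still be placed in it, mirroring the free property of SetOfNine described above. This is an illustration only and not the example's actual Cell or SetOfNine implementation; the CandidateCell class name is invented here, while blockValue and getFreeValue echo method names that appear in the example rules.

import java.util.HashSet;
import java.util.Set;

// Simplified stand-in for the example's candidate-set handling (illustration only).
public class CandidateCell {

    // Digits that can still be placed in this cell.
    private final Set<Integer> free = new HashSet<>();

    public CandidateCell() {
        for (int digit = 1; digit <= 9; digit++) {
            free.add(digit);
        }
    }

    // Eliminate a candidate, for example because the digit was set
    // elsewhere in the same row, column, or 3x3 square.
    public void blockValue(int digit) {
        free.remove(digit);
    }

    // A cell is effectively solved when a single candidate remains,
    // which is the situation detected by the "single" rule.
    public boolean isSolved() {
        return free.size() == 1;
    }

    public Integer getFreeValue() {
        return isSolved() ? free.iterator().next() : null;
    }
}

When a cell reaches a single candidate, the solving rules described below insert a Setting fact for it, which in turn triggers the eliminations in the related cell groups.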
With these components, you can write rules that detect the specific situations that permit the allocation of a value to a cell or the elimination of a value from some candidate set. The Setting class is used to trigger the operations that accompany the allocation of a value. The presence of a Setting fact is used in all rules that detect a new situation in order to avoid reactions to inconsistent intermediary states. The Stepping class is used in a low priority rule to execute an emergency halt when a "Step" does not terminate regularly. This behavior indicates that the program cannot solve the puzzle. The main class org.drools.examples.sudoku.SudokuExample implements a Java application combining all of these components. Sudoku validation rules (validate.drl) The validate.drl file in the Sudoku example contains validation rules that detect duplicate numbers in cell groups. They are combined in a "validate" agenda group that enables the rules to be explicitly activated after a user loads the puzzle. The when conditions of the three rules "duplicate in cell ... " all function in the following ways: The first condition in the rule locates a cell with an allocated value. The second condition in the rule pulls in any of the three cell groups to which the cell belongs. The final condition finds a cell (other than the first one) with the same value as the first cell and in the same row, column, or square, depending on the rule. Rules "duplicate in cell ... " The rule "terminate group" is the last to fire. This rule prints a message and stops the sequence. Rule "terminate group" Sudoku solving rules (sudoku.drl) The sudoku.drl file in the Sudoku example contains three types of rules: one group handles the allocation of a number to a cell, another group detects feasible allocations, and the third group eliminates values from candidate sets. The rules "set a value" , "eliminate a value from Cell" , and "retract setting" depend on the presence of a Setting object. The first rule handles the assignment to the cell and the operations for removing the value from the free sets of the three groups of the cell. This rule also reduces a counter that, when it reaches zero, returns control to the Java application that has called fireUntilHalt() . The purpose of the rule "eliminate a value from Cell" is to reduce the candidate lists of all cells that are related to the newly assigned cell. Finally, when all eliminations have been made, the rule "retract setting" retracts the triggering Setting fact. Rules "set a value", "eliminate a value from a Cell", and "retract setting" Two solving rules detect a situation where an allocation of a number to a cell is possible. The rule "single" fires for a Cell with a candidate set containing a single number. The rule "hidden single" fires when no cell exists with a single candidate but a cell exists containing a candidate that is absent from all other cells in one of the three groups to which the cell belongs. Both rules create and insert a Setting fact. Rules "single" and "hidden single" Rules from the largest group, either individually or in groups of two or three, implement various solving techniques used for solving Sudoku puzzles manually. The rule "naked pair" detects identical candidate sets of size 2 in two cells of a group. These two values may be removed from all other candidate sets of that group. Rule "naked pair" The three rules "hidden pair in ... " function similarly to the rule "naked pair" .
These rules detect a subset of two numbers in exactly two cells of a group, with neither value occurring in any of the other cells of the group. This means that all other candidates can be eliminated from the two cells harboring the hidden pair. Rules "hidden pair in ... " Two rules deal with "X-wings" in rows and columns. When only two possible cells for a value exist in each of two different rows (or columns) and these candidates lie also in the same columns (or rows), then all other candidates for this value in the columns (or rows) can be eliminated. When you follow the pattern sequence in one of these rules, notice how the conditions that are conveniently expressed by words such as same or only result in patterns with suitable constraints or that are prefixed with not . Rules "X-wings in ... " The two rules "intersection removal ... " are based on the restricted occurrence of some number within one square, either in a single row or in a single column. This means that this number must be in one of those two or three cells of the row or column and can be removed from the candidate sets of all other cells of the group. The pattern establishes the restricted occurrence and then fires for each cell outside of the square and within the same cell file. Rules "intersection removal ... " These rules are sufficient for many but not all Sudoku puzzles. To solve very difficult grids, the rule set requires more complex rules. (Ultimately, some puzzles can be solved only by trial and error.) 21.9. Conway's Game of Life example decisions (ruleflow groups and GUI integration) The Conway's Game of Life example decision set, based on the famous cellular automaton by John Conway, demonstrates how to use ruleflow groups in rules to control rule execution. The example also demonstrates how to integrate Red Hat Process Automation Manager rules with a graphical user interface (GUI), in this case a Swing-based implementation of Conway's Game of Life. The following is an overview of the Conway's Game of Life (Conway) example: Name : conway Main classes : org.drools.examples.conway.ConwayRuleFlowGroupRun , org.drools.examples.conway.ConwayAgendaGroupRun (in src/main/java ) Module : droolsjbpm-integration-examples Type : Java application Rule files : org.drools.examples.conway.*.drl (in src/main/resources ) Objective : Demonstrates ruleflow groups and GUI integration Note The Conway's Game of Life example is separate from most of the other example decision sets in Red Hat Process Automation Manager and is located in ~/rhpam-7.13.5-sources/src/droolsjbpm-integration-$VERSION/droolsjbpm-integration-examples of the Red Hat Process Automation Manager 7.13.5 Source Distribution from the Red Hat Customer Portal . In Conway's Game of Life, a user interacts with the game by creating an initial configuration or an advanced pattern with defined properties and then observing how the initial state evolves. The objective of the game is to show the development of a population, generation by generation. Each generation results from the preceding one, based on the simultaneous evaluation of all cells. The following basic rules govern what the next generation looks like: If a live cell has fewer than two live neighbors, it dies of loneliness. If a live cell has more than three live neighbors, it dies from overcrowding. If a dead cell has exactly three live neighbors, it comes to life. Any cell that does not meet any of those criteria is left as is for the next generation.
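For clarity, the same four rules can be restated as a small Java function. This sketch is only a plain-Java paraphrase of the game logic; the example itself implements the logic declaratively in DRL with ruleflow groups, as described next, and the nextState method shown here is not part of the example code.

// Illustration only: the state of a cell in the next generation,
// applying the four rules listed above.
public static boolean nextState(boolean alive, int liveNeighbors) {
    if (alive && liveNeighbors < 2) {
        return false; // dies of loneliness
    }
    if (alive && liveNeighbors > 3) {
        return false; // dies from overcrowding
    }
    if (!alive && liveNeighbors == 3) {
        return true;  // a dead cell with exactly three live neighbors comes to life
    }
    return alive;     // otherwise the cell is left as is for the next generation
}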
The Conway's Game of Life example uses Red Hat Process Automation Manager rules with ruleflow-group attributes to define the pattern implemented in the game. The example also contains a version of the decision set that achieves the same behavior using agenda groups. Agenda groups enable you to partition the decision engine agenda to provide execution control over groups of rules. By default, all rules are in the agenda group MAIN . You can use the agenda-group attribute to specify a different agenda group for the rule. This overview does not explore the version of the Conway example using agenda groups. For more information about agenda groups, see the Red Hat Process Automation Manager example decision sets that specifically address agenda groups. Conway example execution and interaction Similar to other Red Hat Process Automation Manager decision examples, you execute the Conway ruleflow example by running the org.drools.examples.conway.ConwayRuleFlowGroupRun class as a Java application in your IDE. When you execute the Conway example, the Conway's Game of Life GUI window appears. This window contains an empty grid, or "arena" where the life simulation takes place. Initially the grid is empty because no live cells are in the system yet. Figure 21.24. Conway example GUI after launch Select a predefined pattern from the Pattern drop-down menu and click Generation to click through each population generation. Each cell is either alive or dead, where live cells contain a green ball. As the population evolves from the initial pattern, cells live or die relative to neighboring cells, according to the rules of the game. Figure 21.25. Generation evolution in Conway example Neighbors include not only cells to the left, right, top, and bottom but also cells that are connected diagonally, so that each cell has a total of eight neighbors. Exceptions are the corner cells, which have only three neighbors, and the cells along the four borders, with five neighbors each. You can manually intervene to create or kill cells by clicking the cell. To run through an evolution automatically from the initial pattern, click Start . Conway example rules with ruleflow groups The rules in the ConwayRuleFlowGroupRun example use ruleflow groups to control rule execution. A ruleflow group is a group of rules associated by the ruleflow-group rule attribute. These rules can only fire when the group is activated. The group itself can only become active when the elaboration of the ruleflow diagram reaches the node representing the group. The Conway example uses the following ruleflow groups for rules: "register neighbor" "evaluate" "calculate" "reset calculate" "birth" "kill" "kill all" All of the Cell objects are inserted into the KIE session and the "register ... " rules in the ruleflow group "register neighbor" are allowed to execute by the ruleflow process. This group of four rules creates Neighbor relations between some cell and its northeastern, northern, northwestern, and western neighbors. This relation is bidirectional and handles the other four directions. Border cells do not require any special treatment. These cells are not paired with neighboring cells where there is not any. By the time all activations have fired for these rules, all cells are related to all their neighboring cells. Rules "register ... " After all the cells are inserted, some Java code applies the pattern to the grid, setting certain cells to Live . Then, when the user clicks Start or Generation , the example executes the Generation ruleflow. 
This ruleflow manages all changes of cells in each generation cycle. Figure 21.26. Generation ruleflow The ruleflow process enters the "evaluate" ruleflow group and any active rules in the group can fire. The rules "Kill the ... " and "Give Birth" in this group apply the game rules to birth or kill cells. The example uses the phase attribute to drive the reasoning of the Cell object by specific groups of rules. Typically, the phase is tied to a ruleflow group in the ruleflow process definition. Notice that the example does not change the state of any Cell objects at this point because it must complete the full evaluation before those changes can be applied. The example sets the cell to a phase that is either Phase.KILL or Phase.BIRTH , which is used later to control actions applied to the Cell object. Rules "Kill the ... " and "Give Birth" After all Cell objects in the grid have been evaluated, the example uses the "reset calculate" rule to clear any activations in the "calculate" ruleflow group. The example then enters a split in the ruleflow that enables the rules "kill" and "birth" to fire, if the ruleflow group is activated. These rules apply the state change. Rules "reset calculate", "kill", and "birth" At this stage, several Cell objects have been modified with the state changed to either LIVE or DEAD . When a cell becomes live or dead, the example uses the Neighbor relation in the rules "Calculate ... " to iterate over all surrounding cells, increasing or decreasing the liveNeighbor count. Any cell that has its count changed is also set to the EVALUATE phase to make sure it is included in the reasoning during the evaluation stage of the ruleflow process. After the live count has been determined and set for all cells, the ruleflow process ends. If the user initially clicked Start , the decision engine restarts the ruleflow at that point. If the user initially clicked Generation , the user can request another generation. Rules "Calculate ... " 21.10. House of Doom example decisions (backward chaining and recursion) The House of Doom example decision set demonstrates how the decision engine uses backward chaining and recursion to reach defined goals or subgoals in a hierarchical system. The following is an overview of the House of Doom example: Name : backwardchaining Main class : org.drools.examples.backwardchaining.HouseOfDoomMain (in src/main/java ) Module : drools-examples Type : Java application Rule file : org.drools.examples.backwardchaining.BC-Example.drl (in src/main/resources ) Objective : Demonstrates backward chaining and recursion A backward-chaining rule system is a goal-driven system that starts with a conclusion that the decision engine attempts to satisfy, often using recursion. If the system cannot reach the conclusion or goal, it searches for subgoals, which are conclusions that complete part of the current goal. The system continues this process until either the initial conclusion is satisfied or all subgoals are satisfied. In contrast, a forward-chaining rule system is a data-driven system that starts with a fact in the working memory of the decision engine and reacts to changes to that fact. When objects are inserted into working memory, any rule conditions that become true as a result of the change are scheduled for execution by the agenda. The decision engine in Red Hat Process Automation Manager uses both forward and backward chaining to evaluate rules. 
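Both modes can also be driven directly from application code. The following is a minimal sketch, assuming a KieSession for this example is available as ksession; inserting a fact and calling fireAllRules() exercises forward chaining, while getQueryResults() states a goal for the decision engine to satisfy. The Location fact type and the isContainedIn query used here belong to the example and are shown later in this section.

import org.drools.examples.backwardchaining.Location;
import org.kie.api.runtime.KieSession;
import org.kie.api.runtime.rule.QueryResults;

public class ChainingSketch {

    public static void demo(KieSession ksession) {
        // Forward chaining: insert data and let any matching rules react to it.
        ksession.insert( new Location("Office", "House") );
        ksession.fireAllRules();

        // Backward chaining: state a goal and let the decision engine search,
        // recursively through the isContainedIn query, for facts that satisfy it.
        QueryResults results = ksession.getQueryResults( "isContainedIn", "Office", "House" );
        System.out.println( "Office is in the House: " + ( results.size() > 0 ) );
    }
}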
The following diagram illustrates how the decision engine evaluates rules using forward chaining overall with a backward-chaining segment in the logic flow: Figure 21.27. Rule evaluation logic using forward and backward chaining The House of Doom example uses rules with various types of queries to find the location of rooms and items within the house. The sample class Location.java contains the item and location elements used in the example. The sample class HouseOfDoomMain.java inserts the items or rooms in their respective locations in the house and executes the rules. Items and locations in HouseOfDoomMain.java class ksession.insert( new Location("Office", "House") ); ksession.insert( new Location("Kitchen", "House") ); ksession.insert( new Location("Knife", "Kitchen") ); ksession.insert( new Location("Cheese", "Kitchen") ); ksession.insert( new Location("Desk", "Office") ); ksession.insert( new Location("Chair", "Office") ); ksession.insert( new Location("Computer", "Desk") ); ksession.insert( new Location("Drawer", "Desk") ); The example rules rely on backward chaining and recursion to determine the location of all items and rooms in the house structure. The following diagram illustrates the structure of the House of Doom and the items and rooms within it: Figure 21.28. House of Doom structure To execute the example, run the org.drools.examples.backwardchaining.HouseOfDoomMain class as a Java application in your IDE. After the execution, the following output appears in the IDE console window: Execution output in the IDE console All rules in the example have fired to detect the location of all items in the house and to print the location of each in the output. Recursive query and related rules A recursive query repeatedly searches through the hierarchy of a data structure for relationships between elements. In the House of Doom example, the BC-Example.drl file contains an isContainedIn query that most of the rules in the example use to recursively evaluate the house data structure for data inserted into the decision engine: Recursive query in BC-Example.drl The rule "go" prints every string inserted into the system to determine how items are implemented, and the rule "go1" calls the query isContainedIn : Rules "go" and "go1" The example inserts the "go1" string into the decision engine and activates the "go1" rule to detect that item Office is in the location House : Insert string and fire rules Rule "go1" output in the IDE console Transitive closure rule Transitive closure is a relationship between an element contained in a parent element that is multiple levels higher in a hierarchical structure. The rule "go2" identifies the transitive closure relationship of the Drawer and the House : The Drawer is in the Desk in the Office in the House . The example inserts the "go2" string into the decision engine and activates the "go2" rule to detect that item Drawer is ultimately within the location House : Insert string and fire rules Rule "go2" output in the IDE console The decision engine determines this outcome based on the following logic: The query recursively searches through several levels in the house to detect the transitive closure between Drawer and House . Instead of using Location( x, y; ) , the query uses the value of (z, y; ) because Drawer is not directly in House . The z argument is currently unbound, which means it has no value and returns everything that is in the argument. The y argument is currently bound to House , so z returns Office and Kitchen . 
The query gathers information from the Office and checks recursively if the Drawer is in the Office . The query line isContainedIn( x, z; ) is called for these parameters. No instance of Drawer exists directly in Office , so no match is found. With z unbound, the query returns data within the Office and determines that z == Desk . The isContainedIn query recursively searches three times, and on the third time, the query detects an instance of Drawer in Desk . After this match on the first location, the query recursively searches back up the structure to determine that the Drawer is in the Desk , the Desk is in the Office , and the Office is in the House . Therefore, the Drawer is in the House and the rule is satisfied. Reactive query rule A reactive query searches through the hierarchy of a data structure for relationships between elements and is dynamically updated when elements in the structure are modified. The rule "go3" functions as a reactive query that detects if a new item Key ever becomes present in the Office by transitive closure: A Key in the Drawer in the Office . Rule "go3" The example inserts the "go3" string into the decision engine and activates the "go3" rule. Initially, this rule is not satisfied because no item Key exists in the house structure, so the rule produces no output. Insert string and fire rules Rule "go3" output in the IDE console (unsatisfied) The example then inserts a new item Key in the location Drawer , which is in Office . This change satisfies the transitive closure in the "go3" rule and the output is populated accordingly. Insert new item location and fire rules Rule "go3" output in the IDE console (satisfied) This change also adds another level in the structure that the query includes in subsequent recursive searches. Queries with unbound arguments in rules A query with one or more unbound arguments returns all undefined (unbound) items within a defined (bound) argument of the query. If all arguments in a query are unbound, then the query returns all items within the scope of the query. The rule "go4" uses an unbound argument thing to search for all items within the bound argument Office , instead of using a bound argument to search for a specific item in the Office : Rule "go4" The example inserts the "go4" string into the decision engine and activates the "go4" rule to return all items in the Office : Insert string and fire rules Rule "go4" output in the IDE console The rule "go5" uses both unbound arguments thing and location to search for all items and their locations in the entire House data structure: Rule "go5" The example inserts the "go5" string into the decision engine and activates the "go5" rule to return all items and their locations in the House data structure: Insert string and fire rules Rule "go5" output in the IDE console | [
"KieServices ks = KieServices.Factory.get(); 1 KieContainer kc = ks.getKieClasspathContainer(); 2 KieSession ksession = kc.newKieSession(\"HelloWorldKS\"); 3",
"// Set up listeners. ksession.addEventListener( new DebugAgendaEventListener() ); ksession.addEventListener( new DebugRuleRuntimeEventListener() ); // Set up a file-based audit logger. KieRuntimeLogger logger = KieServices.get().getLoggers().newFileLogger( ksession, \"./target/helloworld\" ); // Set up a ThreadedFileLogger so that the audit view reflects events while debugging. KieRuntimeLogger logger = ks.getLoggers().newThreadedFileLogger( ksession, \"./target/helloworld\", 1000 );",
"// Insert facts into the KIE session. final Message message = new Message(); message.setMessage( \"Hello World\" ); message.setStatus( Message.HELLO ); ksession.insert( message ); // Fire the rules. ksession.fireAllRules();",
"public static class Message { public static final int HELLO = 0; public static final int GOODBYE = 1; private String message; private int status; }",
"rule \"Hello World\" when m : Message( status == Message.HELLO, message : message ) then System.out.println( message ); modify ( m ) { message = \"Goodbye cruel world\", status = Message.GOODBYE }; end",
"rule \"Good Bye\" when Message( status == Message.GOODBYE, message : message ) then System.out.println( message ); end",
"Hello World Goodbye cruel world",
"==>[ActivationCreated(0): rule=Hello World; tuple=[fid:1:1:org.drools.examples.helloworld.HelloWorldExampleUSDMessage@17cec96]] [ObjectInserted: handle=[fid:1:1:org.drools.examples.helloworld.HelloWorldExampleUSDMessage@17cec96]; object=org.drools.examples.helloworld.HelloWorldExampleUSDMessage@17cec96] [BeforeActivationFired: rule=Hello World; tuple=[fid:1:1:org.drools.examples.helloworld.HelloWorldExampleUSDMessage@17cec96]] ==>[ActivationCreated(4): rule=Good Bye; tuple=[fid:1:2:org.drools.examples.helloworld.HelloWorldExampleUSDMessage@17cec96]] [ObjectUpdated: handle=[fid:1:2:org.drools.examples.helloworld.HelloWorldExampleUSDMessage@17cec96]; old_object=org.drools.examples.helloworld.HelloWorldExampleUSDMessage@17cec96; new_object=org.drools.examples.helloworld.HelloWorldExampleUSDMessage@17cec96] [AfterActivationFired(0): rule=Hello World] [BeforeActivationFired: rule=Good Bye; tuple=[fid:1:2:org.drools.examples.helloworld.HelloWorldExampleUSDMessage@17cec96]] [AfterActivationFired(4): rule=Good Bye]",
"public class State { public static final int NOTRUN = 0; public static final int FINISHED = 1; private final PropertyChangeSupport changes = new PropertyChangeSupport( this ); private String name; private int state; ... setters and getters go here }",
"final State a = new State( \"A\" ); final State b = new State( \"B\" ); final State c = new State( \"C\" ); final State d = new State( \"D\" ); ksession.insert( a ); ksession.insert( b ); ksession.insert( c ); ksession.insert( d ); ksession.fireAllRules(); // Dispose KIE session if stateful (not required if stateless). ksession.dispose();",
"A finished B finished C finished D finished",
"rule \"Bootstrap\" when a : State(name == \"A\", state == State.NOTRUN ) then System.out.println(a.getName() + \" finished\" ); a.setState( State.FINISHED ); end",
"rule \"A to B\" when State(name == \"A\", state == State.FINISHED ) b : State(name == \"B\", state == State.NOTRUN ) then System.out.println(b.getName() + \" finished\" ); b.setState( State.FINISHED ); end",
"rule \"B to C\" salience 10 when State(name == \"B\", state == State.FINISHED ) c : State(name == \"C\", state == State.NOTRUN ) then System.out.println(c.getName() + \" finished\" ); c.setState( State.FINISHED ); end rule \"B to D\" when State(name == \"B\", state == State.FINISHED ) d : State(name == \"D\", state == State.NOTRUN ) then System.out.println(d.getName() + \" finished\" ); d.setState( State.FINISHED ); end",
"rule \"B to C\" agenda-group \"B to C\" auto-focus true when State(name == \"B\", state == State.FINISHED ) c : State(name == \"C\", state == State.NOTRUN ) then System.out.println(c.getName() + \" finished\" ); c.setState( State.FINISHED ); kcontext.getKnowledgeRuntime().getAgenda().getAgendaGroup( \"B to D\" ).setFocus(); end",
"rule \"B to D\" agenda-group \"B to D\" when State(name == \"B\", state == State.FINISHED ) d : State(name == \"D\", state == State.NOTRUN ) then System.out.println(d.getName() + \" finished\" ); d.setState( State.FINISHED ); end",
"A finished B finished C finished D finished",
"declare type State @propertyChangeSupport end",
"public void setState(final int newState) { int oldState = this.state; this.state = newState; this.changes.firePropertyChange( \"state\", oldState, newState ); }",
"public static class Fibonacci { private int sequence; private long value; public Fibonacci( final int sequence ) { this.sequence = sequence; this.value = -1; } ... setters and getters go here }",
"recurse for 50 recurse for 49 recurse for 48 recurse for 47 recurse for 5 recurse for 4 recurse for 3 recurse for 2 1 == 1 2 == 1 3 == 2 4 == 3 5 == 5 6 == 8 47 == 2971215073 48 == 4807526976 49 == 7778742049 50 == 12586269025",
"ksession.insert( new Fibonacci( 50 ) ); ksession.fireAllRules();",
"rule \"Recurse\" salience 10 when f : Fibonacci ( value == -1 ) not ( Fibonacci ( sequence == 1 ) ) then insert( new Fibonacci( f.sequence - 1 ) ); System.out.println( \"recurse for \" + f.sequence ); end",
"rule \"Bootstrap\" when f : Fibonacci( sequence == 1 || == 2, value == -1 ) // multi-restriction then modify ( f ){ value = 1 }; System.out.println( f.sequence + \" == \" + f.value ); end",
"rule \"Calculate\" when // Bind f1 and s1. f1 : Fibonacci( s1 : sequence, value != -1 ) // Bind f2 and v2, refer to bound variable s1. f2 : Fibonacci( sequence == (s1 + 1), v2 : value != -1 ) // Bind f3 and s3, alternative reference of f2.sequence. f3 : Fibonacci( s3 : sequence == (f2.sequence + 1 ), value == -1 ) then // Note the various referencing techniques. modify ( f3 ) { value = f1.value + v2 }; System.out.println( s3 + \" == \" + f3.value ); end",
"Cheapest possible BASE PRICE IS: 120 DISCOUNT IS: 20",
"template header age[] profile priorClaims policyType base reason package org.drools.examples.decisiontable; template \"Pricing bracket\" age policyType base rule \"Pricing bracket_@{row.rowNumber}\" when Driver(age >= @{age0}, age <= @{age1} , priorClaims == \"@{priorClaims}\" , locationRiskProfile == \"@{profile}\" ) policy: Policy(type == \"@{policyType}\") then policy.setBasePrice(@{base}); System.out.println(\"@{reason}\"); end end template",
"template header age[] priorClaims policyType discount package org.drools.examples.decisiontable; template \"discounts\" age priorClaims policyType discount rule \"Discounts_@{row.rowNumber}\" when Driver(age >= @{age0}, age <= @{age1}, priorClaims == \"@{priorClaims}\") policy: Policy(type == \"@{policyType}\") then policy.applyDiscount(@{discount}); end end template",
"<kbase name=\"DecisionTableKB\" packages=\"org.drools.examples.decisiontable\"> <ksession name=\"DecisionTableKS\" type=\"stateless\"/> </kbase> <kbase name=\"DTableWithTemplateKB\" packages=\"org.drools.examples.decisiontable-template\"> <ruleTemplate dtable=\"org/drools/examples/decisiontable-template/ExamplePolicyPricingTemplateData.xls\" template=\"org/drools/examples/decisiontable-template/BasePricing.drt\" row=\"3\" col=\"3\"/> <ruleTemplate dtable=\"org/drools/examples/decisiontable-template/ExamplePolicyPricingTemplateData.xls\" template=\"org/drools/examples/decisiontable-template/PromotionalPricing.drt\" row=\"18\" col=\"3\"/> <ksession name=\"DTableWithTemplateKS\"/> </kbase>",
"DecisionTableConfiguration dtableconfiguration = KnowledgeBuilderFactory.newDecisionTableConfiguration(); dtableconfiguration.setInputType( DecisionTableInputType.XLS ); KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder(); Resource xlsRes = ResourceFactory.newClassPathResource( \"ExamplePolicyPricing.xls\", getClass() ); kbuilder.add( xlsRes, ResourceType.DTABLE, dtableconfiguration );",
"// KieServices is the factory for all KIE services. KieServices ks = KieServices.Factory.get(); // Create a KIE container on the class path. KieContainer kc = ks.getKieClasspathContainer(); // Create the stock. Vector<Product> stock = new Vector<Product>(); stock.add( new Product( \"Gold Fish\", 5 ) ); stock.add( new Product( \"Fish Tank\", 25 ) ); stock.add( new Product( \"Fish Food\", 2 ) ); // A callback is responsible for populating the working memory and for firing all rules. PetStoreUI ui = new PetStoreUI( stock, new CheckoutCallback( kc ) ); ui.createAndShowGUI();",
"public String checkout(JFrame frame, List<Product> items) { Order order = new Order(); // Iterate through list and add to cart. for ( Product p: items ) { order.addItem( new Purchase( order, p ) ); } // Add the JFrame to the ApplicationData to allow for user interaction. // From the KIE container, a KIE session is created based on // its definition and configuration in the META-INF/kmodule.xml file. KieSession ksession = kcontainer.newKieSession(\"PetStoreKS\"); ksession.setGlobal( \"frame\", frame ); ksession.setGlobal( \"textArea\", this.output ); ksession.insert( new Product( \"Gold Fish\", 5 ) ); ksession.insert( new Product( \"Fish Tank\", 25 ) ); ksession.insert( new Product( \"Fish Food\", 2 ) ); ksession.insert( new Product( \"Fish Food Sample\", 0 ) ); ksession.insert( order ); // Execute rules. ksession.fireAllRules(); // Return the state of the cart return order.toString(); }",
"package org.drools.examples; import org.kie.api.runtime.KieRuntime; import org.drools.examples.petstore.PetStoreExample.Order; import org.drools.examples.petstore.PetStoreExample.Purchase; import org.drools.examples.petstore.PetStoreExample.Product; import java.util.ArrayList; import javax.swing.JOptionPane; import javax.swing.JFrame; global JFrame frame global javax.swing.JTextArea textArea",
"function void doCheckout(JFrame frame, KieRuntime krt) { Object[] options = {\"Yes\", \"No\"}; int n = JOptionPane.showOptionDialog(frame, \"Would you like to checkout?\", \"\", JOptionPane.YES_NO_OPTION, JOptionPane.QUESTION_MESSAGE, null, options, options[0]); if (n == 0) { krt.getAgenda().getAgendaGroup( \"checkout\" ).setFocus(); } } function boolean requireTank(JFrame frame, KieRuntime krt, Order order, Product fishTank, int total) { Object[] options = {\"Yes\", \"No\"}; int n = JOptionPane.showOptionDialog(frame, \"Would you like to buy a tank for your \" + total + \" fish?\", \"Purchase Suggestion\", JOptionPane.YES_NO_OPTION, JOptionPane.QUESTION_MESSAGE, null, options, options[0]); System.out.print( \"SUGGESTION: Would you like to buy a tank for your \" + total + \" fish? - \" ); if (n == 0) { Purchase purchase = new Purchase( order, fishTank ); krt.insert( purchase ); order.addItem( purchase ); System.out.println( \"Yes\" ); } else { System.out.println( \"No\" ); } return true; }",
"// Insert each item in the shopping cart into the working memory. rule \"Explode Cart\" agenda-group \"init\" auto-focus true salience 10 when USDorder : Order( grossTotal == -1 ) USDitem : Purchase() from USDorder.items then insert( USDitem ); kcontext.getKnowledgeRuntime().getAgenda().getAgendaGroup( \"show items\" ).setFocus(); kcontext.getKnowledgeRuntime().getAgenda().getAgendaGroup( \"evaluate\" ).setFocus(); end",
"rule \"Show Items\" agenda-group \"show items\" when USDorder : Order() USDp : Purchase( order == USDorder ) then textArea.append( USDp.product + \"\\n\"); end",
"// Free fish food sample when users buy a goldfish if they did not already buy // fish food and do not already have a fish food sample. rule \"Free Fish Food Sample\" agenda-group \"evaluate\" 1 when USDorder : Order() not ( USDp : Product( name == \"Fish Food\") && Purchase( product == USDp ) ) 2 not ( USDp : Product( name == \"Fish Food Sample\") && Purchase( product == USDp ) ) 3 exists ( USDp : Product( name == \"Gold Fish\") && Purchase( product == USDp ) ) 4 USDfishFoodSample : Product( name == \"Fish Food Sample\" ); then System.out.println( \"Adding free Fish Food Sample to cart\" ); purchase = new Purchase(USDorder, USDfishFoodSample); insert( purchase ); USDorder.addItem( purchase ); end",
"// Suggest a fish tank if users buy more than five goldfish and // do not already have a tank. rule \"Suggest Tank\" agenda-group \"evaluate\" when USDorder : Order() not ( USDp : Product( name == \"Fish Tank\") && Purchase( product == USDp ) ) 1 ArrayList( USDtotal : size > 5 ) from collect( Purchase( product.name == \"Gold Fish\" ) ) 2 USDfishTank : Product( name == \"Fish Tank\" ) then requireTank(frame, kcontext.getKieRuntime(), USDorder, USDfishTank, USDtotal); end",
"rule \"do checkout\" when then doCheckout(frame, kcontext.getKieRuntime()); end",
"rule \"Gross Total\" agenda-group \"checkout\" when USDorder : Order( grossTotal == -1) Number( total : doubleValue ) from accumulate( Purchase( USDprice : product.price ), sum( USDprice ) ) then modify( USDorder ) { grossTotal = total } textArea.append( \"\\ngross total=\" + total + \"\\n\" ); end rule \"Apply 5% Discount\" agenda-group \"checkout\" when USDorder : Order( grossTotal >= 10 && < 20 ) then USDorder.discountedTotal = USDorder.grossTotal * 0.95; textArea.append( \"discountedTotal total=\" + USDorder.discountedTotal + \"\\n\" ); end rule \"Apply 10% Discount\" agenda-group \"checkout\" when USDorder : Order( grossTotal >= 20 ) then USDorder.discountedTotal = USDorder.grossTotal * 0.90; textArea.append( \"discountedTotal total=\" + USDorder.discountedTotal + \"\\n\" ); end",
"Adding free Fish Food Sample to cart SUGGESTION: Would you like to buy a tank for your 6 fish? - Yes",
"public class Politician { private String name; private boolean honest; }",
"public class Hope { public Hope() { } }",
"rule \"We have an honest Politician\" salience 10 when exists( Politician( honest == true ) ) then insertLogical( new Hope() ); end",
"rule \"Hope Lives\" salience 10 when exists( Hope() ) then System.out.println(\"Hurrah!!! Democracy Lives\"); end",
"rule \"Corrupt the Honest\" when politician : Politician( honest == true ) exists( Hope() ) then System.out.println( \"I'm an evil corporation and I have corrupted \" + politician.getName() ); modify ( politician ) { honest = false }; end",
"rule \"Hope is Dead\" when not( Hope() ) then System.out.println( \"We are all Doomed!!! Democracy is Dead\" ); end",
"public static void execute( KieContainer kc ) { KieSession ksession = kc.newKieSession(\"HonestPoliticianKS\"); final Politician p1 = new Politician( \"President of Umpa Lumpa\", true ); final Politician p2 = new Politician( \"Prime Minster of Cheeseland\", true ); final Politician p3 = new Politician( \"Tsar of Pringapopaloo\", true ); final Politician p4 = new Politician( \"Omnipotence Om\", true ); ksession.insert( p1 ); ksession.insert( p2 ); ksession.insert( p3 ); ksession.insert( p4 ); ksession.fireAllRules(); ksession.dispose(); }",
"Hurrah!!! Democracy Lives I'm an evil corporation and I have corrupted President of Umpa Lumpa I'm an evil corporation and I have corrupted Prime Minster of Cheeseland I'm an evil corporation and I have corrupted Tsar of Pringapopaloo I'm an evil corporation and I have corrupted Omnipotence Om We are all Doomed!!! Democracy is Dead",
"package org.drools.examples.honestpolitician; import org.kie.api.KieServices; import org.kie.api.event.rule.DebugAgendaEventListener; 1 import org.kie.api.event.rule.DebugRuleRuntimeEventListener; import org.kie.api.runtime.KieContainer; import org.kie.api.runtime.KieSession; public class HonestPoliticianExample { /** * @param args */ public static void main(final String[] args) { KieServices ks = KieServices.Factory.get(); 2 //ks = KieServices.Factory.get(); KieContainer kc = KieServices.Factory.get().getKieClasspathContainer(); System.out.println(kc.verify().getMessages().toString()); //execute( kc ); execute( ks, kc); 3 } public static void execute( KieServices ks, KieContainer kc ) { 4 KieSession ksession = kc.newKieSession(\"HonestPoliticianKS\"); final Politician p1 = new Politician( \"President of Umpa Lumpa\", true ); final Politician p2 = new Politician( \"Prime Minster of Cheeseland\", true ); final Politician p3 = new Politician( \"Tsar of Pringapopaloo\", true ); final Politician p4 = new Politician( \"Omnipotence Om\", true ); ksession.insert( p1 ); ksession.insert( p2 ); ksession.insert( p3 ); ksession.insert( p4 ); // The application can also setup listeners 5 ksession.addEventListener( new DebugAgendaEventListener() ); ksession.addEventListener( new DebugRuleRuntimeEventListener() ); // Set up a file-based audit logger. ks.getLoggers().newFileLogger( ksession, \"./target/honestpolitician\" ); 6 ksession.fireAllRules(); ksession.dispose(); } }",
"single 8 at [0,1] column elimination due to [1,2]: remove 9 from [4,2] hidden single 9 at [1,2] row elimination due to [2,8]: remove 7 from [2,4] remove 6 from [3,8] due to naked pair at [3,2] and [3,7] hidden pair in row at [4,6] and [4,4]",
"Col: 0 Col: 1 Col: 2 Col: 3 Col: 4 Col: 5 Col: 6 Col: 7 Col: 8 Row 0: 123456789 --- 5 --- --- 6 --- --- 8 --- 123456789 --- 1 --- --- 9 --- --- 4 --- 123456789 Row 1: --- 9 --- 123456789 123456789 --- 6 --- 123456789 --- 5 --- 123456789 123456789 --- 3 --- Row 2: --- 7 --- 123456789 123456789 --- 4 --- --- 9 --- --- 3 --- 123456789 123456789 --- 8 --- Row 3: --- 8 --- --- 9 --- --- 7 --- 123456789 --- 4 --- 123456789 --- 6 --- --- 3 --- --- 5 --- Row 4: 123456789 123456789 --- 3 --- --- 9 --- 123456789 --- 6 --- --- 8 --- 123456789 123456789 Row 5: --- 4 --- --- 6 --- --- 5 --- 123456789 --- 8 --- 123456789 --- 2 --- --- 9 --- --- 1 --- Row 6: --- 5 --- 123456789 123456789 --- 2 --- --- 6 --- --- 9 --- 123456789 123456789 --- 7 --- Row 7: --- 6 --- 123456789 123456789 --- 5 --- 123456789 --- 4 --- 123456789 123456789 --- 9 --- Row 8: 123456789 --- 4 --- --- 9 --- --- 7 --- 123456789 --- 8 --- --- 3 --- --- 5 --- 123456789",
"cell [0,8]: 5 has a duplicate in row 0 cell [0,0]: 5 has a duplicate in row 0 cell [6,0]: 8 has a duplicate in col 0 cell [4,0]: 8 has a duplicate in col 0 Validation complete.",
"Validation complete. Sorry - can't solve this grid.",
"rule \"duplicate in cell row\" when USDc: Cell( USDv: value != null ) USDcr: CellRow( cells contains USDc ) exists Cell( this != USDc, value == USDv, cellRow == USDcr ) then System.out.println( \"cell \" + USDc.toString() + \" has a duplicate in row \" + USDcr.getNumber() ); end rule \"duplicate in cell col\" when USDc: Cell( USDv: value != null ) USDcc: CellCol( cells contains USDc ) exists Cell( this != USDc, value == USDv, cellCol == USDcc ) then System.out.println( \"cell \" + USDc.toString() + \" has a duplicate in col \" + USDcc.getNumber() ); end rule \"duplicate in cell sqr\" when USDc: Cell( USDv: value != null ) USDcs: CellSqr( cells contains USDc ) exists Cell( this != USDc, value == USDv, cellSqr == USDcs ) then System.out.println( \"cell \" + USDc.toString() + \" has duplicate in its square of nine.\" ); end",
"rule \"terminate group\" salience -100 when then System.out.println( \"Validation complete.\" ); drools.halt(); end",
"// A Setting object is inserted to define the value of a Cell. // Rule for updating the cell and all cell groups that contain it rule \"set a value\" when // A Setting with row and column number, and a value USDs: Setting( USDrn: rowNo, USDcn: colNo, USDv: value ) // A matching Cell, with no value set USDc: Cell( rowNo == USDrn, colNo == USDcn, value == null, USDcr: cellRow, USDcc: cellCol, USDcs: cellSqr ) // Count down USDctr: Counter( USDcount: count ) then // Modify the Cell by setting its value. modify( USDc ){ setValue( USDv ) } // System.out.println( \"set cell \" + USDc.toString() ); modify( USDcr ){ blockValue( USDv ) } modify( USDcc ){ blockValue( USDv ) } modify( USDcs ){ blockValue( USDv ) } modify( USDctr ){ setCount( USDcount - 1 ) } end // Rule for removing a value from all cells that are siblings // in one of the three cell groups rule \"eliminate a value from Cell\" when // A Setting with row and column number, and a value USDs: Setting( USDrn: rowNo, USDcn: colNo, USDv: value ) // The matching Cell, with the value already set Cell( rowNo == USDrn, colNo == USDcn, value == USDv, USDexCells: exCells ) // For all Cells that are associated with the updated cell USDc: Cell( free contains USDv ) from USDexCells then // System.out.println( \"clear \" + USDv + \" from cell \" + USDc.posAsString() ); // Modify a related Cell by blocking the assigned value. modify( USDc ){ blockValue( USDv ) } end // Rule for eliminating the Setting fact rule \"retract setting\" when // A Setting with row and column number, and a value USDs: Setting( USDrn: rowNo, USDcn: colNo, USDv: value ) // The matching Cell, with the value already set USDc: Cell( rowNo == USDrn, colNo == USDcn, value == USDv ) // This is the negation of the last pattern in the previous rule. // Now the Setting fact can be safely retracted. not( USDx: Cell( free contains USDv ) and Cell( this == USDc, exCells contains USDx ) ) then // System.out.println( \"done setting cell \" + USDc.toString() ); // Discard the Setter fact. delete( USDs ); // Sudoku.sudoku.consistencyCheck(); end",
"// Detect a set of candidate values with cardinality 1 for some Cell. // This is the value to be set. rule \"single\" when // Currently no setting underway not Setting() // One element in the \"free\" set USDc: Cell( USDrn: rowNo, USDcn: colNo, freeCount == 1 ) then Integer i = USDc.getFreeValue(); if (explain) System.out.println( \"single \" + i + \" at \" + USDc.posAsString() ); // Insert another Setter fact. insert( new Setting( USDrn, USDcn, i ) ); end // Detect a set of candidate values with a value that is the only one // in one of its groups. This is the value to be set. rule \"hidden single\" when // Currently no setting underway not Setting() not Cell( freeCount == 1 ) // Some integer USDi: Integer() // The \"free\" set contains this number USDc: Cell( USDrn: rowNo, USDcn: colNo, freeCount > 1, free contains USDi ) // A cell group contains this cell USDc. USDcg: CellGroup( cells contains USDc ) // No other cell from that group contains USDi. not ( Cell( this != USDc, free contains USDi ) from USDcg.getCells() ) then if (explain) System.out.println( \"hidden single \" + USDi + \" at \" + USDc.posAsString() ); // Insert another Setter fact. insert( new Setting( USDrn, USDcn, USDi ) ); end",
"// A \"naked pair\" is two cells in some cell group with their sets of // permissible values being equal with cardinality 2. These two values // can be removed from all other candidate lists in the group. rule \"naked pair\" when // Currently no setting underway not Setting() not Cell( freeCount == 1 ) // One cell with two candidates USDc1: Cell( freeCount == 2, USDf1: free, USDr1: cellRow, USDrn1: rowNo, USDcn1: colNo, USDb1: cellSqr ) // The containing cell group USDcg: CellGroup( freeCount > 2, cells contains USDc1 ) // Another cell with two candidates, not the one we already have USDc2: Cell( this != USDc1, free == USDf1 /*** , rowNo >= USDrn1, colNo >= USDcn1 ***/ ) from USDcg.cells // Get one of the \"naked pair\". Integer( USDv: intValue ) from USDc1.getFree() // Get some other cell with a candidate equal to one from the pair. USDc3: Cell( this != USDc1 && != USDc2, freeCount > 1, free contains USDv ) from USDcg.cells then if (explain) System.out.println( \"remove \" + USDv + \" from \" + USDc3.posAsString() + \" due to naked pair at \" + USDc1.posAsString() + \" and \" + USDc2.posAsString() ); // Remove the value. modify( USDc3 ){ blockValue( USDv ) } end",
"// If two cells within the same cell group contain candidate sets with more than // two values, with two values being in both of them but in none of the other // cells, then we have a \"hidden pair\". We can remove all other candidates from // these two cells. rule \"hidden pair in row\" when // Currently no setting underway not Setting() not Cell( freeCount == 1 ) // Establish a pair of Integer facts. USDi1: Integer() USDi2: Integer( this > USDi1 ) // Look for a Cell with these two among its candidates. (The upper bound on // the number of candidates avoids a lot of useless work during startup.) USDc1: Cell( USDrn1: rowNo, USDcn1: colNo, freeCount > 2 && < 9, free contains USDi1 && contains USDi2, USDcellRow: cellRow ) // Get another one from the same row, with the same pair among its candidates. USDc2: Cell( this != USDc1, cellRow == USDcellRow, freeCount > 2, free contains USDi1 && contains USDi2 ) // Ascertain that no other cell in the group has one of these two values. not( Cell( this != USDc1 && != USDc2, free contains USDi1 || contains USDi2 ) from USDcellRow.getCells() ) then if( explain) System.out.println( \"hidden pair in row at \" + USDc1.posAsString() + \" and \" + USDc2.posAsString() ); // Set the candidate lists of these two Cells to the \"hidden pair\". modify( USDc1 ){ blockExcept( USDi1, USDi2 ) } modify( USDc2 ){ blockExcept( USDi1, USDi2 ) } end rule \"hidden pair in column\" when not Setting() not Cell( freeCount == 1 ) USDi1: Integer() USDi2: Integer( this > USDi1 ) USDc1: Cell( USDrn1: rowNo, USDcn1: colNo, freeCount > 2 && < 9, free contains USDi1 && contains USDi2, USDcellCol: cellCol ) USDc2: Cell( this != USDc1, cellCol == USDcellCol, freeCount > 2, free contains USDi1 && contains USDi2 ) not( Cell( this != USDc1 && != USDc2, free contains USDi1 || contains USDi2 ) from USDcellCol.getCells() ) then if (explain) System.out.println( \"hidden pair in column at \" + USDc1.posAsString() + \" and \" + USDc2.posAsString() ); modify( USDc1 ){ blockExcept( USDi1, USDi2 ) } modify( USDc2 ){ blockExcept( USDi1, USDi2 ) } end rule \"hidden pair in square\" when not Setting() not Cell( freeCount == 1 ) USDi1: Integer() USDi2: Integer( this > USDi1 ) USDc1: Cell( USDrn1: rowNo, USDcn1: colNo, freeCount > 2 && < 9, free contains USDi1 && contains USDi2, USDcellSqr: cellSqr ) USDc2: Cell( this != USDc1, cellSqr == USDcellSqr, freeCount > 2, free contains USDi1 && contains USDi2 ) not( Cell( this != USDc1 && != USDc2, free contains USDi1 || contains USDi2 ) from USDcellSqr.getCells() ) then if (explain) System.out.println( \"hidden pair in square \" + USDc1.posAsString() + \" and \" + USDc2.posAsString() ); modify( USDc1 ){ blockExcept( USDi1, USDi2 ) } modify( USDc2 ){ blockExcept( USDi1, USDi2 ) } end",
"rule \"X-wings in rows\" when not Setting() not Cell( freeCount == 1 ) USDi: Integer() USDca1: Cell( freeCount > 1, free contains USDi, USDra: cellRow, USDrano: rowNo, USDc1: cellCol, USDc1no: colNo ) USDcb1: Cell( freeCount > 1, free contains USDi, USDrb: cellRow, USDrbno: rowNo > USDrano, cellCol == USDc1 ) not( Cell( this != USDca1 && != USDcb1, free contains USDi ) from USDc1.getCells() ) USDca2: Cell( freeCount > 1, free contains USDi, cellRow == USDra, USDc2: cellCol, USDc2no: colNo > USDc1no ) USDcb2: Cell( freeCount > 1, free contains USDi, cellRow == USDrb, cellCol == USDc2 ) not( Cell( this != USDca2 && != USDcb2, free contains USDi ) from USDc2.getCells() ) USDcx: Cell( rowNo == USDrano || == USDrbno, colNo != USDc1no && != USDc2no, freeCount > 1, free contains USDi ) then if (explain) { System.out.println( \"X-wing with \" + USDi + \" in rows \" + USDca1.posAsString() + \" - \" + USDcb1.posAsString() + USDca2.posAsString() + \" - \" + USDcb2.posAsString() + \", remove from \" + USDcx.posAsString() ); } modify( USDcx ){ blockValue( USDi ) } end rule \"X-wings in columns\" when not Setting() not Cell( freeCount == 1 ) USDi: Integer() USDca1: Cell( freeCount > 1, free contains USDi, USDc1: cellCol, USDc1no: colNo, USDra: cellRow, USDrano: rowNo ) USDca2: Cell( freeCount > 1, free contains USDi, USDc2: cellCol, USDc2no: colNo > USDc1no, cellRow == USDra ) not( Cell( this != USDca1 && != USDca2, free contains USDi ) from USDra.getCells() ) USDcb1: Cell( freeCount > 1, free contains USDi, cellCol == USDc1, USDrb: cellRow, USDrbno: rowNo > USDrano ) USDcb2: Cell( freeCount > 1, free contains USDi, cellCol == USDc2, cellRow == USDrb ) not( Cell( this != USDcb1 && != USDcb2, free contains USDi ) from USDrb.getCells() ) USDcx: Cell( colNo == USDc1no || == USDc2no, rowNo != USDrano && != USDrbno, freeCount > 1, free contains USDi ) then if (explain) { System.out.println( \"X-wing with \" + USDi + \" in columns \" + USDca1.posAsString() + \" - \" + USDca2.posAsString() + USDcb1.posAsString() + \" - \" + USDcb2.posAsString() + \", remove from \" + USDcx.posAsString() ); } modify( USDcx ){ blockValue( USDi ) } end",
"rule \"intersection removal column\" when not Setting() not Cell( freeCount == 1 ) USDi: Integer() // Occurs in a Cell USDc: Cell( free contains USDi, USDcs: cellSqr, USDcc: cellCol ) // Does not occur in another cell of the same square and a different column not Cell( this != USDc, free contains USDi, cellSqr == USDcs, cellCol != USDcc ) // A cell exists in the same column and another square containing this value. USDcx: Cell( freeCount > 1, free contains USDi, cellCol == USDcc, cellSqr != USDcs ) then // Remove the value from that other cell. if (explain) { System.out.println( \"column elimination due to \" + USDc.posAsString() + \": remove \" + USDi + \" from \" + USDcx.posAsString() ); } modify( USDcx ){ blockValue( USDi ) } end rule \"intersection removal row\" when not Setting() not Cell( freeCount == 1 ) USDi: Integer() // Occurs in a Cell USDc: Cell( free contains USDi, USDcs: cellSqr, USDcr: cellRow ) // Does not occur in another cell of the same square and a different row. not Cell( this != USDc, free contains USDi, cellSqr == USDcs, cellRow != USDcr ) // A cell exists in the same row and another square containing this value. USDcx: Cell( freeCount > 1, free contains USDi, cellRow == USDcr, cellSqr != USDcs ) then // Remove the value from that other cell. if (explain) { System.out.println( \"row elimination due to \" + USDc.posAsString() + \": remove \" + USDi + \" from \" + USDcx.posAsString() ); } modify( USDcx ){ blockValue( USDi ) } end",
"rule \"register north east\" ruleflow-group \"register neighbor\" when USDcell: Cell( USDrow : row, USDcol : col ) USDnorthEast : Cell( row == (USDrow - 1), col == ( USDcol + 1 ) ) then insert( new Neighbor( USDcell, USDnorthEast ) ); insert( new Neighbor( USDnorthEast, USDcell ) ); end rule \"register north\" ruleflow-group \"register neighbor\" when USDcell: Cell( USDrow : row, USDcol : col ) USDnorth : Cell( row == (USDrow - 1), col == USDcol ) then insert( new Neighbor( USDcell, USDnorth ) ); insert( new Neighbor( USDnorth, USDcell ) ); end rule \"register north west\" ruleflow-group \"register neighbor\" when USDcell: Cell( USDrow : row, USDcol : col ) USDnorthWest : Cell( row == (USDrow - 1), col == ( USDcol - 1 ) ) then insert( new Neighbor( USDcell, USDnorthWest ) ); insert( new Neighbor( USDnorthWest, USDcell ) ); end rule \"register west\" ruleflow-group \"register neighbor\" when USDcell: Cell( USDrow : row, USDcol : col ) USDwest : Cell( row == USDrow, col == ( USDcol - 1 ) ) then insert( new Neighbor( USDcell, USDwest ) ); insert( new Neighbor( USDwest, USDcell ) ); end",
"rule \"Kill The Lonely\" ruleflow-group \"evaluate\" no-loop when // A live cell has fewer than 2 live neighbors. theCell: Cell( liveNeighbors < 2, cellState == CellState.LIVE, phase == Phase.EVALUATE ) then modify( theCell ){ setPhase( Phase.KILL ); } end rule \"Kill The Overcrowded\" ruleflow-group \"evaluate\" no-loop when // A live cell has more than 3 live neighbors. theCell: Cell( liveNeighbors > 3, cellState == CellState.LIVE, phase == Phase.EVALUATE ) then modify( theCell ){ setPhase( Phase.KILL ); } end rule \"Give Birth\" ruleflow-group \"evaluate\" no-loop when // A dead cell has 3 live neighbors. theCell: Cell( liveNeighbors == 3, cellState == CellState.DEAD, phase == Phase.EVALUATE ) then modify( theCell ){ theCell.setPhase( Phase.BIRTH ); } end",
"rule \"reset calculate\" ruleflow-group \"reset calculate\" when then WorkingMemory wm = drools.getWorkingMemory(); wm.clearRuleFlowGroup( \"calculate\" ); end rule \"kill\" ruleflow-group \"kill\" no-loop when theCell: Cell( phase == Phase.KILL ) then modify( theCell ){ setCellState( CellState.DEAD ), setPhase( Phase.DONE ); } end rule \"birth\" ruleflow-group \"birth\" no-loop when theCell: Cell( phase == Phase.BIRTH ) then modify( theCell ){ setCellState( CellState.LIVE ), setPhase( Phase.DONE ); } end",
"rule \"Calculate Live\" ruleflow-group \"calculate\" lock-on-active when theCell: Cell( cellState == CellState.LIVE ) Neighbor( cell == theCell, USDneighbor : neighbor ) then modify( USDneighbor ){ setLiveNeighbors( USDneighbor.getLiveNeighbors() + 1 ), setPhase( Phase.EVALUATE ); } end rule \"Calculate Dead\" ruleflow-group \"calculate\" lock-on-active when theCell: Cell( cellState == CellState.DEAD ) Neighbor( cell == theCell, USDneighbor : neighbor ) then modify( USDneighbor ){ setLiveNeighbors( USDneighbor.getLiveNeighbors() - 1 ), setPhase( Phase.EVALUATE ); } end",
"ksession.insert( new Location(\"Office\", \"House\") ); ksession.insert( new Location(\"Kitchen\", \"House\") ); ksession.insert( new Location(\"Knife\", \"Kitchen\") ); ksession.insert( new Location(\"Cheese\", \"Kitchen\") ); ksession.insert( new Location(\"Desk\", \"Office\") ); ksession.insert( new Location(\"Chair\", \"Office\") ); ksession.insert( new Location(\"Computer\", \"Desk\") ); ksession.insert( new Location(\"Drawer\", \"Desk\") );",
"go1 Office is in the House --- go2 Drawer is in the House --- go3 --- Key is in the Office --- go4 Chair is in the Office Desk is in the Office Key is in the Office Computer is in the Office Drawer is in the Office --- go5 Chair is in Office Desk is in Office Drawer is in Desk Key is in Drawer Kitchen is in House Cheese is in Kitchen Knife is in Kitchen Computer is in Desk Office is in House Key is in Office Drawer is in House Computer is in House Key is in House Desk is in House Chair is in House Knife is in House Cheese is in House Computer is in Office Drawer is in Office Key is in Desk",
"query isContainedIn( String x, String y ) Location( x, y; ) or ( Location( z, y; ) and isContainedIn( x, z; ) ) end",
"rule \"go\" salience 10 when USDs : String() then System.out.println( USDs ); end rule \"go1\" when String( this == \"go1\" ) isContainedIn(\"Office\", \"House\"; ) then System.out.println( \"Office is in the House\" ); end",
"ksession.insert( \"go1\" ); ksession.fireAllRules();",
"go1 Office is in the House",
"rule \"go2\" when String( this == \"go2\" ) isContainedIn(\"Drawer\", \"House\"; ) then System.out.println( \"Drawer is in the House\" ); end",
"ksession.insert( \"go2\" ); ksession.fireAllRules();",
"go2 Drawer is in the House",
"isContainedIn(x==drawer, z==desk)",
"Location(x==drawer, y==desk)",
"rule \"go3\" when String( this == \"go3\" ) isContainedIn(\"Key\", \"Office\"; ) then System.out.println( \"Key is in the Office\" ); end",
"ksession.insert( \"go3\" ); ksession.fireAllRules();",
"go3",
"ksession.insert( new Location(\"Key\", \"Drawer\") ); ksession.fireAllRules();",
"Key is in the Office",
"rule \"go4\" when String( this == \"go4\" ) isContainedIn(thing, \"Office\"; ) then System.out.println( thing + \"is in the Office\" ); end",
"ksession.insert( \"go4\" ); ksession.fireAllRules();",
"go4 Chair is in the Office Desk is in the Office Key is in the Office Computer is in the Office Drawer is in the Office",
"rule \"go5\" when String( this == \"go5\" ) isContainedIn(thing, location; ) then System.out.println(thing + \" is in \" + location ); end",
"ksession.insert( \"go5\" ); ksession.fireAllRules();",
"go5 Chair is in Office Desk is in Office Drawer is in Desk Key is in Drawer Kitchen is in House Cheese is in Kitchen Knife is in Kitchen Computer is in Desk Office is in House Key is in Office Drawer is in House Computer is in House Key is in House Desk is in House Chair is in House Knife is in House Cheese is in House Computer is in Office Drawer is in Office Key is in Desk"
] | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/decision-examples-ide-con_drl-rules |
Chapter 2. Browsing with the API | Chapter 2. Browsing with the API REST APIs give access to resources (data entities) through URI paths. Procedure Go to the automation controller REST API in a web browser at: https://<server name>/api/controller/v2 Click the "v2" link next to "current versions" or "available versions" . Automation controller supports version 2 of the API. Perform a GET with just the /api/ endpoint to get the current_version , which is the recommended version. Click the icon on the navigation menu for documentation on the access methods for that particular API endpoint and what data is returned when using those methods. Use the PUT and POST verbs on the specific API pages by formatting JSON in the various text fields. You can also view changed settings from factory defaults at the /api/v2/settings/changed/ endpoint. It reflects changes you made in the API browser, not changed settings that come from static settings files. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/automation_execution_api_overview/controller-api-browsing-api
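The same GET requests described in this procedure can also be issued from the command line. The following is only a sketch: the host name, the admin credentials, the use of basic authentication, and the -k option to skip certificate verification are assumptions for illustration, not details taken from this chapter.
curl -k -u admin:password https://<server name>/api/controller/v2/
curl -k -u admin:password https://<server name>/api/v2/settings/changed/
The first request lists the top-level version 2 resources; the second returns only the settings that differ from factory defaults, as noted above.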
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_security_guide/making-open-source-more-inclusive |
7.300. esc | 7.300. esc 7.300.1. RHBA-2013:0735 - esc bug fix update Updated esc packages that fix one bug are now available for Red Hat Enterprise Linux 6. The esc packages contain the Smart Card Manager GUI, which allows user to manage security smart cards. The primary function of the tool is to enroll smart cards, so that they can be used for common cryptographic operations, such as secure e-mail and website access. Bug Fix BZ# 922646 The ESC utility did not start when the latest 17 series release of the XULRunner runtime environment was installed on the system. This update includes necessary changes to ensure that ESC works as expected with the latest version of XULRunner. Users of esc are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/esc |
7.203. ql2500-firmware | 7.203.1. RHBA-2013:0403 - ql2500-firmware bug fix and enhancement update An updated ql2500-firmware package that fixes multiple bugs and adds various enhancements is now available for Red Hat Enterprise Linux 6. The ql2500-firmware package provides the firmware required to run the QLogic 2500 Series of mass storage adapters. Note This update upgrades the ql2500 firmware to upstream version 5.08.00, which provides a number of bug fixes and enhancements over the previous version. (BZ#826667) All users of QLogic 2500 Series Fibre Channel adapters are advised to upgrade to this updated package, which fixes these bugs and adds these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/ql2500-firmware
Chapter 3. Administer MicroProfile in JBoss EAP | Chapter 3. Administer MicroProfile in JBoss EAP 3.1. MicroProfile OpenTracing administration 3.1.1. Enabling MicroProfile Open Tracing Use the following management CLI commands to enable the MicroProfile Open Tracing feature globally for the server instance by adding the subsystem to the server configuration. Procedure Enable the microprofile-opentracing-smallrye subsystem using the following management command: Reload the server for the changes to take effect. 3.1.2. Removing the microprofile-opentracing-smallrye subsystem The microprofile-opentracing-smallrye subsystem is included in the default JBoss EAP 7.4 configuration. This subsystem provides MicroProfile OpenTracing functionality for JBoss EAP 7.4. If you experience system memory or performance degradation with MicroProfile OpenTracing enabled, you might want to disable the microprofile-opentracing-smallrye subsystem. You can use the remove operation in the management CLI to disable the MicroProfile OpenTracing feature globally for a given server. Procedure Remove the microprofile-opentracing-smallrye subsystem. Reload the server for the changes to take effect. 3.1.3. Adding the microprofile-opentracing-smallrye subsystem You can enable the microprofile-opentracing-smallrye subsystem by adding it to the server configuration. Use the add operation in the management CLI to enable the MicroProfile OpenTracing feature globally for a given the server. Procedure Add the subsystem. Reload the server for the changes to take effect. 3.1.4. Installing Jaeger Install Jaeger using docker . Prerequisites docker is installed. Procedure Install Jaeger using docker by issuing the following command in CLI: 3.2. MicroProfile Config configuration 3.2.1. Adding properties in a ConfigSource management resource You can store properties directly in a config-source subsystem as a management resource. Procedure Create a ConfigSource and add a property: 3.2.2. Configuring directories as ConfigSources When a property is stored in a directory as a file, the file-name is the name of a property and the file content is the value of the property. Procedure Create a directory where you want to store the files: Navigate to the directory: Create a file name to store the value for the property name : Add the value of the property to the file: Create a ConfigSource in which the file name is the property and the file contents the value of the property: This results in the following XML configuration: <subsystem xmlns="urn:wildfly:microprofile-config-smallrye:1.0"> <config-source name="file-props"> <dir path="/etc/config/prop-files"/> </config-source> </subsystem> 3.2.3. Obtaining ConfigSource from a ConfigSource class You can create and configure a custom org.eclipse.microprofile.config.spi.ConfigSource implementation class to provide a source for the configuration values. Procedure The following management CLI command creates a ConfigSource for the implementation class named org.example.MyConfigSource that is provided by a JBoss module named org.example . If you want to use a ConfigSource from the org.example module, add the <module name="org.eclipse.microprofile.config.api"/> dependency to the path/to/org/example/main/module.xml file. This command results in the following XML configuration for the microprofile-config-smallrye subsystem. 
<subsystem xmlns="urn:wildfly:microprofile-config-smallrye:1.0"> <config-source name="my-config-source"> <class name="org.example.MyConfigSource" module="org.example"/> </config-source> </subsystem> Properties provided by the custom org.eclipse.microprofile.config.spi.ConfigSource implementation class are available to any JBoss EAP deployment. 3.2.4. Obtaining ConfigSource configuration from a ConfigSourceProvider class You can create and configure a custom org.eclipse.microprofile.config.spi.ConfigSourceProvider implementation class that registers implementations for multiple ConfigSource instances. Procedure Create a config-source-provider : The command creates a config-source-provider for the implementation class named org.example.MyConfigSourceProvider that is provided by a JBoss Module named org.example . If you want to use a config-source-provider from the org.example module, add the <module name="org.eclipse.microprofile.config.api"/> dependency to the path/to/org/example/main/module.xml file. This command results in the following XML configuration for the microprofile-config-smallrye subsystem: <subsystem xmlns="urn:wildfly:microprofile-config-smallrye:1.0"> <config-source-provider name="my-config-source-provider"> <class name="org.example.MyConfigSourceProvider" module="org.example"/> </config-source-provider> </subsystem> Properties provided by the ConfigSourceProvider implementation are available to any JBoss EAP deployment. Additional resources For information about how to add a global module to the JBoss EAP server, see Define Global Modules in the Configuration Guide for JBoss EAP. 3.3. MicroProfile Fault Tolerance configuration 3.3.1. Adding the MicroProfile Fault Tolerance extension The MicroProfile Fault Tolerance extension is included in standalone-microprofile.xml and standalone-microprofile-ha.xml configurations that are provided as part of JBoss EAP XP. The extension is not included in the standard standalone.xml configuration. To use the extension, you must manually enable it. Prerequisites EAP XP pack is installed. Procedure Add the MicroProfile Fault Tolerance extension using the following management CLI command: Enable the microprofile-fault-tolerance-smallrye subsystem using the following managenent command: Reload the server with the following management command: 3.4. MicroProfile Health configuration 3.4.1. Examining health using the management CLI You can check system health using the management CLI. Procedure Examine health: 3.4.2. Examining health using the management console You can check system health using the management console. A check runtime operation shows the health checks and the global outcome as boolean value. Procedure Navigate to the Runtime tab and select the server. In the Monitor column, click MicroProfile Health View . 3.4.3. Examining health using the HTTP endpoint Health check is automatically deployed to the health context on JBoss EAP, so you can obtain the current health using the HTTP endpoint. The default address for the /health endpoint, accessible from the management interface, is http://127.0.0.1:9990/health . Procedure To obtain the current health of the server using the HTTP endpoint, use the following URL:. Accessing this context displays the health check in JSON format, indicating if the server is healthy. 3.4.4. Enabling authentication for MicroProfile Health You can configure the health context to require authentication for access. 
Procedure Set the security-enabled attribute to true on the microprofile-health-smallrye subsystem. Reload the server for the changes to take effect. Any subsequent attempt to access the /health endpoint triggers an authentication prompt. 3.4.5. Readiness probes that determine server health and readiness JBoss EAP XP 3.0.0 supports three readiness probes to determine server health and readiness. server-status - returns UP when the server-state is running . boot-errors - returns UP when the probe detects no boot errors. deployment-status - returns UP when the status for all deployments is OK . These readiness probes are enabled by default. You can disable the probes using the MicroProfile Config property mp.health.disable-default-procedures . The following example illustrates the use of the three probes with the check operation: Additional resources MicroProfile Health in JBoss EAP Global status when probes are not defined 3.4.6. Global status when probes are not defined The :empty-readiness-checks-status and :empty-liveness-checks-status management attributes specify the global status when no readiness or liveness probes are defined. These attributes allow applications to report 'DOWN' until their probes verify that the application is ready or live. By default, applications report 'UP'. The :empty-readiness-checks-status attribute specifies the global status for readiness probes if no readiness probes have been defined: The :empty-liveness-checks-status attribute specifies the global status for liveness probes if no liveness probes have been defined: The /health HTTP endpoint and the :check operation that check both readiness and liveness probes also take into account these attributes. You can also modify these attributes as shown in the following example: 3.5. MicroProfile JWT configuration 3.5.1. Enabling microprofile-jwt-smallrye subsystem The MicroProfile JWT integration is provided by the microprofile-jwt-smallrye subsystem and is included in the default configuration. If the subsystem is not present in the default configuration, you can add it as follows. Prerequisites EAP XP is installed. Procedure Enable the MicroProfile JWT smallrye extension in JBoss EAP: Enable the microprofile-jwt-smallrye subsystem: Reload the server: The microprofile-jwt-smallrye subsystem is enabled. 3.6. MicroProfile Metrics administration 3.6.1. Metrics available on the management interface The JBoss EAP subsystem metrics are exposed in Prometheus format. Metrics are automatically available on the JBoss EAP management interface, with the following contexts: /metrics/ - Contains metrics specified in the MicroProfile 3.0 specification. /metrics/vendor - Contains vendor-specific metrics, such as memory pools. /metrics/application - Contains metrics from deployed applications and subsystems that use the MicroProfile Metrics API. The metric names are based on subsystem and attribute names. For example, the subsystem undertow exposes a metric attribute request-count for every servlet in an application deployment. The name of this metric is jboss_undertow_request_count . The prefix jboss identifies JBoss EAP as the source of the metrics. 3.6.2. Examining metrics using the HTTP endpoint Examine the metrics that are available on the JBoss EAP management interface using the HTTP endpoint. Procedure Use the curl command: 3.6.3. Enabling Authentication for the MicroProfile Metrics HTTP Endpoint Configure the metrics context to require users to be authorized to access the context. 
This configuration extends to all the subcontexts of the metrics context. Procedure Set the security-enabled attribute to true on the microprofile-metrics-smallrye subsystem. Reload the server for the changes to take effect. Any subsequent attempt to access the metrics endpoint results in an authentication prompt. 3.6.4. Obtaining the request count for a web service Obtain the request count for a web service that exposes its request count metric. The following procedure uses the helloworld-rs quickstart as the web service for obtaining the request count. The quickstart is available for download from: jboss-eap-quickstarts . Prerequisites The web service exposes the request count. Procedure Enable statistics for the undertow subsystem: Start the standalone server with statistics enabled: For an already running server, enable the statistics for the undertow subsystem: Deploy the helloworld-rs quickstart: In the root directory of the quickstart, deploy the web application using Maven: Query the HTTP endpoint in the CLI using the curl command and filter for request_count : Expected output: The attribute value returned is 0.0 . Access the quickstart, located at http://localhost:8080/helloworld-rs/ , in a web browser and click any of the links. Query the HTTP endpoint from the CLI again: Expected output: The value is updated to 1.0 . Repeat the last two steps to verify that the request count is updated. 3.7. MicroProfile OpenAPI administration 3.7.1. Enabling MicroProfile OpenAPI The microprofile-openapi-smallrye subsystem is provided in the standalone-microprofile.xml configuration. However, JBoss EAP XP uses standalone.xml by default. You must include the subsystem in standalone.xml to use it. Alternatively, you can follow the procedure Updating standalone configurations with MicroProfile subsystems and extensions to update the standalone.xml configuration file. Procedure Enable the MicroProfile OpenAPI smallrye extension in JBoss EAP: Enable the microprofile-openapi-smallrye subsystem using the following management command: Reload the server. The microprofile-openapi-smallrye subsystem is enabled. 3.7.2. Requesting a MicroProfile OpenAPI document using an Accept HTTP header Request a MicroProfile OpenAPI document, in the JSON format, from a deployment using an Accept HTTP header. By default, the OpenAPI endpoint returns a YAML document. Prerequisites The deployment being queried is configured to return a MicroProfile OpenAPI document. Procedure Issue the following curl command to query the /openapi endpoint of the deployment: Replace http://localhost:8080 with the URL and port of the deployment. The Accept header indicates that the JSON document is to be returned using the application/json string. 3.7.3. Requesting a MicroProfile OpenAPI document using an HTTP parameter Request a MicroProfile OpenAPI document, in the JSON format, from a deployment using a query parameter in an HTTP request. By default, the OpenAPI endpoint returns a YAML document. Prerequisites The deployment being queried is configured to return a MicroProfile OpenAPI document. Procedure Issue the following curl command to query the /openapi endpoint of the deployment: Replace http://localhost:8080 with the URL and port of the deployment. The HTTP parameter format=JSON indicates that a JSON document is to be returned. 3.7.4. Configuring JBoss EAP to serve a static OpenAPI document Configure JBoss EAP to serve a static OpenAPI document that describes the REST services for the host.
When JBoss EAP is configured to serve a static OpenAPI document, the static OpenAPI document is processed before any Jakarta RESTful Web Services and MicroProfile OpenAPI annotations. In a production environment, disable annotation processing when serving a static document. Disabling annotation processing ensures that an immutable and versioned API contract is available for clients. Procedure Create a directory in the application source tree: APPLICATION_ROOT is the directory containing the pom.xml configuration file for the application. Query the OpenAPI endpoint, redirecting the output to a file: By default, the endpoint serves a YAML document, format=JSON specifies that a JSON document is returned. Configure the application to skip annotation scanning when processing the OpenAPI document model: Rebuild the application: Deploy the application again using the following management CLI commands: Undeploy the application: Deploy the application: JBoss EAP now serves a static OpenAPI document at the OpenAPI endpoint. 3.7.5. Disabling microprofile-openapi-smallrye You can disable the microprofile-openapi-smallrye subsystem in JBoss EAP XP using the management CLI. Procedure Disable the microprofile-openapi-smallrye subsystem: 3.8. Standalone server configuration 3.8.1. Standalone server configuration files The JBoss EAP XP includes additional standalone server configuration files, standalone-microprofile.xml and standalone-microprofile-ha.xml . Standard configuration files that are included with JBoss EAP remain unchanged. Note that JBoss EAP XP 3.0.0 does not support the use of domain.xml files or domain mode. Table 3.1. Standalone configuration files available in JBoss EAP XP Configuration File Purpose Included capabilities Excluded capabilities standalone.xml This is the default configuration that is used when you start your standalone server. Includes information about the server, including subsystems, networking, deployments, socket bindings, and other configurable details. Excludes subsystems necessary for messaging or high availability. standalone-microprofile.xml This configuration file supports applications that use MicroProfile. Includes information about the server, including subsystems, networking, deployments, socket bindings, and other configurable details. Excludes the following capabilities: Jakarta Enterprise Beans Messaging Jakarta EE Batch Jakarta Server Faces Jakarta Enterprise Beans timers standalone-ha.xml Includes default subsystems and adds the modcluster and jgroups subsystems for high availability. Excludes subsystems necessary for messaging. standalone-microprofile-ha.xml This standalone file supports applications that use MicroProfile. Includes the modcluster and jgroups subsystems for high availability in addition to default subsystems. Excludes subsystems necessary for messaging. standalone-full.xml Includes the messaging-activemq and iiop-openjdk subsystems in addition to default subsystems. standalone-full-ha.xml Support for every possible subsystem. Includes subsystems for messaging and high availability in addition to default subsystems. standalone-load-balancer.xml Support for the minimum subsystems necessary to use the built-in mod_cluster front-end load balancer to load balance other JBoss EAP instances. By default, starting JBoss EAP as a standalone server uses the standalone.xml file. To start JBoss EAP with a standalone MicroProfile configuration, use the -c argument. 
For example, Additional Resources Starting and Stopping JBoss EAP Configuration Data 3.8.2. Updating standalone configurations with MicroProfile subsystems and extensions You can update standard standalone server configuration files with MicroProfile subsystems and extensions using the docs/examples/enable-microprofile.cli script. The enable-microprofile.cli script is intended as an example script for updating standard standalone server configuration files, not custom configurations. The enable-microprofile.cli script modifies the existing standalone server configuration and adds the following MicroProfile subsystems and extensions if they do not exist in the standalone configuration file: microprofile-openapi-smallrye microprofile-jwt-smallrye microprofile-fault-tolerance-smallrye The enable-microprofile.cli script outputs a high-level description of the modifications. The configuration is secured using the elytron subsystem. The security subsystem, if present, is removed from the configuration. Prerequisites JBoss EAP XP is installed. Procedure Run the following CLI script to update the default standalone.xml server configuration file: Select a standalone server configuration other than the default standalone.xml server configuration file using the following command: The specified configuration file now includes MicroProfile subsystems and extensions. | [
"/subsystem=microprofile-opentracing-smallrye:add()",
"reload",
"/subsystem=microprofile-opentracing-smallrye:remove()",
"reload",
"/subsystem=microprofile-opentracing-smallrye:add()",
"reload",
"docker run -d --name jaeger -p 6831:6831/udp -p 5778:5778 -p 14268:14268 -p 16686:16686 jaegertracing/all-in-one:1.16",
"/subsystem=microprofile-config-smallrye/config-source=props:add(properties={\"name\" = \"jim\"})",
"mkdir -p ~/config/prop-files/",
"cd ~/config/prop-files/",
"touch name",
"echo \"jim\" > name",
"/subsystem=microprofile-config-smallrye/config-source=file-props:add(dir={path=~/config/prop-files})",
"<subsystem xmlns=\"urn:wildfly:microprofile-config-smallrye:1.0\"> <config-source name=\"file-props\"> <dir path=\"/etc/config/prop-files\"/> </config-source> </subsystem>",
"/subsystem=microprofile-config-smallrye/config-source=my-config-source:add(class={name=org.example.MyConfigSource, module=org.example})",
"<subsystem xmlns=\"urn:wildfly:microprofile-config-smallrye:1.0\"> <config-source name=\"my-config-source\"> <class name=\"org.example.MyConfigSource\" module=\"org.example\"/> </config-source> </subsystem>",
"/subsystem=microprofile-config-smallrye/config-source-provider=my-config-source-provider:add(class={name=org.example.MyConfigSourceProvider, module=org.example})",
"<subsystem xmlns=\"urn:wildfly:microprofile-config-smallrye:1.0\"> <config-source-provider name=\"my-config-source-provider\"> <class name=\"org.example.MyConfigSourceProvider\" module=\"org.example\"/> </config-source-provider> </subsystem>",
"/extension=org.wildfly.extension.microprofile.fault-tolerance-smallrye:add",
"/subsystem=microprofile-fault-tolerance-smallrye:add",
"reload",
"/subsystem=microprofile-health-smallrye:check { \"outcome\" => \"success\", \"result\" => { \"status\" => \"UP\", \"checks\" => [] } }",
"http:// <host> : <port> /health",
"/subsystem=microprofile-health-smallrye:write-attribute(name=security-enabled,value=true)",
"reload",
"[standalone@localhost:9990 /] /subsystem=microprofile-health-smallrye:check { \"checks\": [ { \"name\": \"empty-readiness-checks\", \"status\": \"UP\" }, { \"name\": \"empty-liveness-checks\", \"status\": \"UP\" }, { \"data\": { \"value\": \"running\" }, \"name\": \"server-state\", \"status\": \"UP\" }, { \"name\": \"deployments-status\", \"status\": \"UP\" }, { \"name\": \"boot-errors\", \"status\": \"UP\" } ], \"status\": \"UP\" }",
"/subsystem=microprofile-health-smallrye:read-attribute(name=empty-readiness-checks-status) { \"outcome\" => \"success\", \"result\" => expression \"USD{env.MP_HEALTH_EMPTY_READINESS_CHECKS_STATUS:UP}\" }",
"/subsystem=microprofile-health-smallrye:read-attribute(name=empty-liveness-checks-status) { \"outcome\" => \"success\", \"result\" => expression \"USD{env.MP_HEALTH_EMPTY_LIVENESS_CHECKS_STATUS:UP}\" }",
"/subsystem=microprofile-health-smallrye:write-attribute(name=empty-readiness-checks-status,value=DOWN) { \"outcome\" => \"success\", \"response-headers\" => { \"operation-requires-reload\" => true, \"process-state\" => \"reload-required\" } }",
"/extension=org.wildfly.extension.microprofile.jwt-smallrye:add",
"/subsystem=microprofile-jwt-smallrye:add",
"reload",
"curl -v http://localhost:9990/metrics | grep -i type",
"/subsystem=microprofile-metrics-smallrye:write-attribute(name=security-enabled,value=true)",
"reload",
"./standalone.sh -Dwildfly.statistics-enabled=true",
"/subsystem=undertow:write-attribute(name=statistics-enabled,value=true)",
"mvn clean install wildfly:deploy",
"curl -v http://localhost:9990/metrics | grep request_count",
"jboss_undertow_request_count_total{server=\"default-server\",http_listener=\"default\",} 0.0",
"curl -v http://localhost:9990/metrics | grep request_count",
"jboss_undertow_request_count_total{server=\"default-server\",http_listener=\"default\",} 1.0",
"/extension=org.wildfly.extension.microprofile.openapi-smallrye:add()",
"/subsystem=microprofile-openapi-smallrye:add()",
"reload",
"curl -v -H'Accept: application/json' http://localhost:8080 /openapi < HTTP/1.1 200 OK {\"openapi\": \"3.0.1\" ... }",
"curl -v http://localhost:8080 /openapi?format=JSON < HTTP/1.1 200 OK",
"mkdir APPLICATION_ROOT /src/main/webapp/META-INF",
"curl http://localhost:8080/openapi?format=JSON > src/main/webapp/META-INF/openapi.json",
"echo \"mp.openapi.scan.disable=true\" > APPLICATION_ROOT /src/main/webapp/META-INF/microprofile-config.properties",
"mvn clean install",
"undeploy microprofile-openapi.war",
"deploy APPLICATION_ROOT /target/microprofile-openapi.war",
"/subsystem=microprofile-openapi-smallrye:remove()",
"EAP_HOME /bin/standalone.sh -c=standalone-microprofile.xml",
"EAP_HOME /bin/jboss-cli.sh --file=docs/examples/enable-microprofile.cli",
"EAP_HOME /bin/jboss-cli.sh --file=docs/examples/enable-microprofile.cli -Dconfig=<standalone-full.xml|standalone-ha.xml|standalone-full-ha.xml>"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/using_jboss_eap_xp_3.0.0/administer_microprofile_in_jboss_eap |
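After running the enable-microprofile.cli script described in this chapter, it can be useful to confirm from the command line that the MicroProfile subsystems are present and that the health and metrics endpoints respond. The following is only a sketch: it assumes a default local installation, a management user named admin, and HTTP Digest authentication on the management interface; none of these values come from this chapter.
EAP_HOME/bin/jboss-cli.sh --connect --command="ls subsystem" | grep microprofile
curl --digest -u admin:password http://localhost:9990/health
curl --digest -u admin:password http://localhost:9990/metrics | grep -i type
If security-enabled has not been set to true on the health and metrics subsystems, the credentials can be omitted from the curl commands.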
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation. The Jira issue will be created in the Red Hat OpenStack Platform Jira project, where you can track the progress of your feedback. Ensure that you are logged in to Jira. If you do not have a Jira account, create an account to submit feedback. Click the following link to open the Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/transitioning_to_containerized_services/proc_providing-feedback-on-red-hat-documentation
function::pid2execname | function::pid2execname Name function::pid2execname - The name of the given process identifier. Synopsis Arguments pid Process identifier. Description Return the name of the given process id. | [
"function pid2execname:string(pid:long)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-pid2execname |
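As a usage sketch that is not part of the reference entry above, pid2execname is typically called inside a probe handler to label events with the name of the triggering process. The probe point below is only an example; on newer kernels the open syscall probe may need to be replaced with syscall.openat.
stap -e 'probe syscall.open { printf("%s (pid %d) called open\n", pid2execname(pid()), pid()) }'
Each output line pairs the executable name returned by pid2execname with the corresponding process ID.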
Chapter 24. Compiler and Tools | Chapter 24. Compiler and Tools Accurate ethtool Output, see the section called "Accurate ethtool Output" | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.1_release_notes/chap-tp-compiler_and_tools |
Chapter 5. Bookmarking files and locations | Chapter 5. Bookmarking files and locations In GNOME, applications and dialogs that manage files list bookmarks in the left side bar. You can add, remove, and edit the bookmarks. 5.1. Adding a bookmark You can save a reference to a folder by bookmarking it in the Files application. Prerequisite Locate the folder in the Files application. Procedure Add the folder to bookmarks using either of the following methods: By dragging: Drag the folder to the left side bar. Drop it over the New bookmark item. Using a keyboard shortcut: Open the folder. Press Ctrl + D . Using a menu: Open the folder. In the navigation bar at the top of the window, click the name of the folder. Select Add to Bookmarks . Verification Check that the bookmark now appears in the side bar. 5.2. Removing a bookmark You can delete an existing bookmark in the Files application. Procedure Right-click the bookmark in the side bar. Select Remove from the menu. Verification Check that the bookmark no longer appears in the side bar. 5.3. Renaming a bookmark You can rename a bookmark to distinguish it from other bookmarks. If you have bookmarks to several folders that all share the same name, you can tell the bookmarks apart if you rename them. Renaming the bookmark does not rename the folder. Procedure Right-click the bookmark in the side bar. Select Rename... . In the Name field, enter the new name for the bookmark. Click Rename . Verification Check that the side bar lists the bookmark under the new name. 5.4. Adding a bookmark for all users As a system administrator, you can set a bookmark for several users at once so that file shares are easily accessible to all the users. Procedure In the home directory of each existing user, edit the ~ user /.config/gtk-3.0/bookmarks file. In the file, add a Uniform Resource Identifiers (URI) line that identifies the bookmark. For example, the following lines add bookmarks to the /usr/share/doc/ directory and to the GNOME FTP network share: Optional: To also add the bookmarks for every newly created user on the system: Create the /etc/skel/.config/gtk-3.0/bookmarks file. Enter the bookmark URI lines in the file. | [
"file:///usr/share/doc/ ftp://ftp.gnome.org/"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/getting_started_with_the_gnome_desktop_environment/bookmarking-files-and-locations_getting-started-with-the-gnome-desktop-environment |
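For the per-user step in section 5.4, a short shell loop can append the same bookmark URI to every existing user's bookmarks file. This is only a sketch: it assumes local home directories under /home, a primary group that matches the user name, and the /usr/share/doc/ example bookmark shown above.
for dir in /home/*; do
  user=$(basename "$dir")
  mkdir -p "$dir/.config/gtk-3.0"
  echo "file:///usr/share/doc/" >> "$dir/.config/gtk-3.0/bookmarks"
  chown -R "$user:$user" "$dir/.config/gtk-3.0"
done
Remember to also create /etc/skel/.config/gtk-3.0/bookmarks, as described above, so that newly created users receive the same bookmark.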
Chapter 2. Setting up credentials for Event-Driven Ansible controller | Chapter 2. Setting up credentials for Event-Driven Ansible controller Credentials are used by Event-Driven Ansible for authentication when launching rulebooks. 2.1. Setting up credentials Create a credential to use with a private repository (GitHub or GitLab) or a private container registry. Important If you are using a GitHub or GitLab repository, use the basic auth method. Both SCM servers are officially supported. You can use any SCM provider that supports basic auth . Procedure Log in to the Event-Driven Ansible controller Dashboard. From the navigation panel, select Resources Credentials . Click Create credential . Insert the following: Name Insert the name. Description This field is optional. Credential type The options available are a GitHub personal access token, a GitLab personal access token, or a container registry. Username Insert the username. Token Insert a token that allows you to authenticate to your destination. Note If you are using a container registry, the token field can be a token or a password, depending on the registry provider. If you are using the Ansible Automation Platform hub registry, insert the password for that in the token field. Click Create credential . After saving the credential, the credential details page is displayed. From there or the Credentials list view, you can edit or delete it. 2.2. Credentials list view On the Credentials page, you can view the list of credentials that you have created along with the Type of credential. From the menu bar, you can search for credentials in the Name field. You also have the following options in the menu bar: Choose which columns are shown in the list view by clicking Manage columns . Choose between a List view or a Card view by clicking the icons. 2.3. Editing a credential Procedure Edit the credential by using one of these methods: From the Credentials list view, click the Edit credential icon next to the desired credential. From the Credentials list view, select the name of the credential and click Edit credential . Edit the appropriate details and click Save credential . 2.4. Deleting a credential Procedure Delete the credential by using one of these methods: From the Credentials list view, click the More Actions icon ... next to the desired credential and click Delete credential . From the Credentials list view, select the name of the credential, click the More Actions icon ... next to Edit credential , and click Delete credential . In the pop-up window, select Yes, I confirm that I want to delete this credential . Click Delete credential . You can delete multiple credentials at a time by selecting the checkbox next to each credential, clicking the More Actions icon ... in the menu bar, and then clicking Delete selected credentials . | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/event-driven_ansible_controller_user_guide/eda-credentials
Chapter 18. Enabling passkey authentication in IdM environment | Chapter 18. Enabling passkey authentication in IdM environment The Fast IDentity Online 2 (FIDO2) standard is based on public key cryptography and adds the option of a passwordless flow with PIN or biometrics. The passkey authentication in the IdM environment uses FIDO2 compatible devices supported by the libfido2 library. The passkey authentication method provides an additional security layer to comply with regulatory standards by including passwordless and multi-factor authentication (MFA) that require a PIN or a fingerprint. It uses a combination of special hardware and software, such as passkey device and passkey enablement in an Identity Management (IdM) environment, to strengthen the security in the environment where data protection plays a key role. If your system is connected to a network with the IdM environment, the passkey authentication method issues a Kerberos ticket automatically, which enables single sign-on (SSO) for an IdM user. You can use passkey to authenticate through the graphical interface to your operating system. If your system allows you to authenticate with passkey and password, you can skip passkey authentication and authenticate with the password by pressing Space on your keyboard followed by the Enter key. If you use GNOME Desktop Manager (GDM), you can press Enter to bypass the passkey authentication. Note that, currently, the passkey authentication in the IdM environment does not support FIDO2 attestation mechanism, which allows for the identification of the particular passkey device. The following procedures provide instructions on managing and configuring passkey authentication in an IdM environment. 18.1. Prerequisites You have a passkey device. Install the fido2-tools package: Set the PIN for the passkey device: Connect the passkey device to the USB port. List the connected passkey devices: Set the PIN for your passkey device by following the command prompts. You have installed the sssd-passkey package. 18.2. Registering a passkey device As a user you can configure authentication with a passkey device. A passkey device is compatible with any FIDO2 specification device, such as YubiKey 5 NFC. To configure this authentication method, follow the instructions below. Prerequisites The PIN for the passkey device is set. Passkey authentication is enabled for an IdM user: Use the ipa user-mod with the same --user-auth-type=passkey parameter for an existing IdM user. Access to the physical machine to which the user wants to authenticate. Procedure Insert the passkey device in the USB port. Register the passkey for the IdM user: Follow the application prompts: Enter the PIN for the passkey device. Touch the device to verify your identity. If you are using a biometric device, ensure to use the same finger with which you registered the device. It is good practice for users to configure multiple passkey devices as a backup that allows authentication from multiple locations or devices. To ensure the Kerberos ticket is issued during authentication, do not configure more than 12 passkey devices for a user. Verification Log in to the system with the username you have configured to use passkey authentication. The system prompts you to insert the passkey device: Insert the passkey device into the USB port and enter your PIN when prompted: Confirm the Kerberos ticket is issued: Note, to skip passkey authentication, enter any character in the prompt or enter an empty PIN if user authentication is enabled. 
The system redirects you to password based authentication. 18.3. Authentication policies Use authentication policies to configure the available online and local authentication methods. Authentication with online connection Uses all online authentication methods that the service provides on the server side. For IdM, AD, or Kerberos services, the default authentication method is Kerberos. Authentication without online connection Uses authentication methods that are available for a user. You can tune the authentication method with the local_auth_policy option. Use the local_auth_policy option in the /etc/sssd/sssd.conf file to configure the available online and offline authentication methods. By default, the authentication is performed only with the methods that the server side of the service supports. You can tune the policy with the following values: The match value enables the matching of offline and online states. For example, the IdM server supports online passkey authentication and match enables offline and online authentications for the passkey method. The only value offers only offline methods and ignores the online methods. The enable and disable values explicitly define the methods for offline authentication. For example, enable:passkey enables only passkey for offline authentication. The following configuration example allows local users to authenticate locally using smart card authentication: The local_auth_policy option applies to the passkey and smart card authentication methods. 18.4. Retrieving an IdM ticket-granting ticket as a passkey user To retrieve a Kerberos ticket-granting ticket (TGT) as a passkey user, request an anonymous Kerberos ticket and enable Flexible Authentication via Secure Tunneling (FAST) channel to provide a secure connection between the Kerberos client and Kerberos Distribution Center (KDC). Prerequisites Your IdM client and IdM servers use RHEL 9.1 or later. Your IdM client and IdM servers use SSSD 2.7.0 or later. You registered your passkey device and configured your authentication policies. Procedure Initialize the credentials cache by running the following command: Note that this command creates the armor.ccache file that you need to point to whenever you request a new Kerberos ticket. Request a Kerberos ticket by running the command: Verification Display your Kerberos ticket information: The pa_type = 153 indicates passkey authentication. | [
"dnf install fido2-tools",
"fido2-token -L",
"fido2-token -C passkey_device",
"ipa user-add user01 --first=user --last=01 --user-auth-type=passkey",
"ipa user-add-passkey user01 --register",
"Insert your passkey device, then press ENTER.",
"Enter PIN: Creating home directory for [email protected] .",
"klist Default principal: [email protected]",
"[domain/shadowutils] id_provider = proxy proxy_lib_name = files auth_provider = none local_auth_policy = only",
"kinit -n @IDM.EXAMPLE.COM -c FILE:armor.ccache",
"kinit -T FILE:armor.ccache <username>@IDM.EXAMPLE.COM Enter your PIN:",
"klist -C Ticket cache: KCM:0:58420 Default principal: <username>@IDM.EXAMPLE.COM Valid starting Expires Service principal 05/09/22 07:48:23 05/10/22 07:03:07 krbtgt/[email protected] config: fast_avail(krbtgt/[email protected]) = yes 08/17/2022 20:22:45 08/18/2022 20:22:43 krbtgt/[email protected] config: pa_type(krbtgt/[email protected]) = 153"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_idm_users_groups_hosts_and_access_control_rules/assembly_enabling-passkey-authentication_managing-users-groups-hosts |
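To tie the registration and policy steps above together, the following verification sketch checks that a passkey mapping is stored for the user in IdM and that the client's sssd.conf carries a local_auth_policy setting. The exact label that ipa user-show prints for the mapping is an assumption here, so the grep pattern may need adjusting.
ipa user-show user01 --all | grep -i passkey
grep -i local_auth_policy /etc/sssd/sssd.conf
If the first command prints no mapping, repeat the ipa user-add-passkey user01 --register step shown above with the device connected.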
6.3. Adjusting Automatic Updates | 6.3. Adjusting Automatic Updates Red Hat Enterprise Linux is configured to apply all updates on a daily schedule. If you want to change how your system installs updates, you must do so via Software Update Preferences . You can change the schedule, the type of updates to apply, or to notify you of available updates. In GNOME, you can find controls for your updates at: System Preferences Software Updates . In KDE, it is located at: Applications Settings Software Updates . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-software_maintenance-plan_and_configure_security_updates-adjusting_automatic_updates |
Chapter 9. Installing an Instance with ECC System Certificates | Chapter 9. Installing an Instance with ECC System Certificates Elliptic curve cryptography (ECC) may be preferred over RSA-style encryption in some cases, as it allows the use of much shorter key lengths and makes it faster to generate certificates. CAs which are ECC-enabled can issue both RSA and ECC certificates, using their ECC signing certificate. Certificate System includes native support for ECC features; the support is enabled by default starting from NSS 3.16. It is also possible to load and use a third-party PKCS #11 module, such as a hardware security module (HSM). To use the ECC module, it must be loaded before the subsystem instance is configured. Important Third-party ECC modules must have an SELinux policy configured for them, or SELinux needs to be changed from enforcing mode to permissive mode to allow the module to function. Otherwise, any subsystem operations which require the ECC module will fail. 9.1. Loading a Third-Party ECC Module Loading a third-party ECC module follows the same principles as loading HSMs supported by Certificate System, which is described in Chapter 8, Using Hardware Security Modules for Subsystem Security Databases . See this chapter for more information. | null | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/ecc-enabled
Data Grid downloads | Data Grid downloads Access the Data Grid Software Downloads on the Red Hat customer portal. Note You must have a Red Hat account to access and download Data Grid software. | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_code_tutorials/rhdg-downloads_datagrid |
Preface | Preface Important To function properly, GNOME requires your system to support 3D acceleration . This includes bare metal systems, as well as hypervisor solutions such as VMWare . If GNOME does not start or performs poorly on your VMWare virtual machine (VM), see Why does the GUI fail to start on my VMware virtual machine? (Red Hat Knowledgebase) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/administering_the_system_using_the_gnome_desktop_environment/pr01 |
Preface | Preface Red Hat OpenShift Data Foundation supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) vSphere clusters in connected or disconnected environments along with out-of-the-box support for proxy environments. To deploy OpenShift Data Foundation, start with the requirements in the Preparing to deploy OpenShift Data Foundation chapter and then follow any one of the below deployment process for your environment: Internal mode Deploy using dynamic storage devices Deploy using local storage devices Deploy standalone Multicloud Object Gateway External mode Deploying OpenShift Data Foundation in external mode | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/deploying_openshift_data_foundation_on_vmware_vsphere/peface-vmware |
Chapter 13. NetworkPolicy [networking.k8s.io/v1] | Chapter 13. NetworkPolicy [networking.k8s.io/v1] Description NetworkPolicy describes what network traffic is allowed for a set of Pods Type object 13.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object NetworkPolicySpec provides the specification of a NetworkPolicy status object NetworkPolicyStatus describe the current state of the NetworkPolicy. 13.1.1. .spec Description NetworkPolicySpec provides the specification of a NetworkPolicy Type object Required podSelector Property Type Description egress array List of egress rules to be applied to the selected pods. Outgoing traffic is allowed if there are no NetworkPolicies selecting the pod (and cluster policy otherwise allows the traffic), OR if the traffic matches at least one egress rule across all of the NetworkPolicy objects whose podSelector matches the pod. If this field is empty then this NetworkPolicy limits all outgoing traffic (and serves solely to ensure that the pods it selects are isolated by default). This field is beta-level in 1.8 egress[] object NetworkPolicyEgressRule describes a particular set of traffic that is allowed out of pods matched by a NetworkPolicySpec's podSelector. The traffic must match both ports and to. This type is beta-level in 1.8 ingress array List of ingress rules to be applied to the selected pods. Traffic is allowed to a pod if there are no NetworkPolicies selecting the pod (and cluster policy otherwise allows the traffic), OR if the traffic source is the pod's local node, OR if the traffic matches at least one ingress rule across all of the NetworkPolicy objects whose podSelector matches the pod. If this field is empty then this NetworkPolicy does not allow any traffic (and serves solely to ensure that the pods it selects are isolated by default) ingress[] object NetworkPolicyIngressRule describes a particular set of traffic that is allowed to the pods matched by a NetworkPolicySpec's podSelector. The traffic must match both ports and from. podSelector LabelSelector Selects the pods to which this NetworkPolicy object applies. The array of ingress rules is applied to any pods selected by this field. Multiple network policies can select the same set of pods. In this case, the ingress rules for each are combined additively. This field is NOT optional and follows standard label selector semantics. An empty podSelector matches all pods in this namespace. policyTypes array (string) List of rule types that the NetworkPolicy relates to. Valid options are ["Ingress"], ["Egress"], or ["Ingress", "Egress"]. 
If this field is not specified, it will default based on the existence of Ingress or Egress rules; policies that contain an Egress section are assumed to affect Egress, and all policies (whether or not they contain an Ingress section) are assumed to affect Ingress. If you want to write an egress-only policy, you must explicitly specify policyTypes [ "Egress" ]. Likewise, if you want to write a policy that specifies that no egress is allowed, you must specify a policyTypes value that include "Egress" (since such a policy would not include an Egress section and would otherwise default to just [ "Ingress" ]). This field is beta-level in 1.8 13.1.2. .spec.egress Description List of egress rules to be applied to the selected pods. Outgoing traffic is allowed if there are no NetworkPolicies selecting the pod (and cluster policy otherwise allows the traffic), OR if the traffic matches at least one egress rule across all of the NetworkPolicy objects whose podSelector matches the pod. If this field is empty then this NetworkPolicy limits all outgoing traffic (and serves solely to ensure that the pods it selects are isolated by default). This field is beta-level in 1.8 Type array 13.1.3. .spec.egress[] Description NetworkPolicyEgressRule describes a particular set of traffic that is allowed out of pods matched by a NetworkPolicySpec's podSelector. The traffic must match both ports and to. This type is beta-level in 1.8 Type object Property Type Description ports array List of destination ports for outgoing traffic. Each item in this list is combined using a logical OR. If this field is empty or missing, this rule matches all ports (traffic not restricted by port). If this field is present and contains at least one item, then this rule allows traffic only if the traffic matches at least one port in the list. ports[] object NetworkPolicyPort describes a port to allow traffic on to array List of destinations for outgoing traffic of pods selected for this rule. Items in this list are combined using a logical OR operation. If this field is empty or missing, this rule matches all destinations (traffic not restricted by destination). If this field is present and contains at least one item, this rule allows traffic only if the traffic matches at least one item in the to list. to[] object NetworkPolicyPeer describes a peer to allow traffic to/from. Only certain combinations of fields are allowed 13.1.4. .spec.egress[].ports Description List of destination ports for outgoing traffic. Each item in this list is combined using a logical OR. If this field is empty or missing, this rule matches all ports (traffic not restricted by port). If this field is present and contains at least one item, then this rule allows traffic only if the traffic matches at least one port in the list. Type array 13.1.5. .spec.egress[].ports[] Description NetworkPolicyPort describes a port to allow traffic on Type object Property Type Description endPort integer If set, indicates that the range of ports from port to endPort, inclusive, should be allowed by the policy. This field cannot be defined if the port field is not defined or if the port field is defined as a named (string) port. The endPort must be equal or greater than port. port IntOrString The port on the given protocol. This can either be a numerical or named port on a pod. If this field is not provided, this matches all port names and numbers. If present, only traffic on the specified protocol AND port will be matched. 
protocol string The protocol (TCP, UDP, or SCTP) which traffic must match. If not specified, this field defaults to TCP. 13.1.6. .spec.egress[].to Description List of destinations for outgoing traffic of pods selected for this rule. Items in this list are combined using a logical OR operation. If this field is empty or missing, this rule matches all destinations (traffic not restricted by destination). If this field is present and contains at least one item, this rule allows traffic only if the traffic matches at least one item in the to list. Type array 13.1.7. .spec.egress[].to[] Description NetworkPolicyPeer describes a peer to allow traffic to/from. Only certain combinations of fields are allowed Type object Property Type Description ipBlock object IPBlock describes a particular CIDR (Ex. "192.168.1.0/24","2001:db8::/64") that is allowed to the pods matched by a NetworkPolicySpec's podSelector. The except entry describes CIDRs that should not be included within this rule. namespaceSelector LabelSelector Selects Namespaces using cluster-scoped labels. This field follows standard label selector semantics; if present but empty, it selects all namespaces. If PodSelector is also set, then the NetworkPolicyPeer as a whole selects the Pods matching PodSelector in the Namespaces selected by NamespaceSelector. Otherwise it selects all Pods in the Namespaces selected by NamespaceSelector. podSelector LabelSelector This is a label selector which selects Pods. This field follows standard label selector semantics; if present but empty, it selects all pods. If NamespaceSelector is also set, then the NetworkPolicyPeer as a whole selects the Pods matching PodSelector in the Namespaces selected by NamespaceSelector. Otherwise it selects the Pods matching PodSelector in the policy's own Namespace. 13.1.8. .spec.egress[].to[].ipBlock Description IPBlock describes a particular CIDR (Ex. "192.168.1.0/24","2001:db8::/64") that is allowed to the pods matched by a NetworkPolicySpec's podSelector. The except entry describes CIDRs that should not be included within this rule. Type object Required cidr Property Type Description cidr string CIDR is a string representing the IP Block Valid examples are "192.168.1.0/24" or "2001:db8::/64" except array (string) Except is a slice of CIDRs that should not be included within an IP Block Valid examples are "192.168.1.0/24" or "2001:db8::/64" Except values will be rejected if they are outside the CIDR range 13.1.9. .spec.ingress Description List of ingress rules to be applied to the selected pods. Traffic is allowed to a pod if there are no NetworkPolicies selecting the pod (and cluster policy otherwise allows the traffic), OR if the traffic source is the pod's local node, OR if the traffic matches at least one ingress rule across all of the NetworkPolicy objects whose podSelector matches the pod. If this field is empty then this NetworkPolicy does not allow any traffic (and serves solely to ensure that the pods it selects are isolated by default) Type array 13.1.10. .spec.ingress[] Description NetworkPolicyIngressRule describes a particular set of traffic that is allowed to the pods matched by a NetworkPolicySpec's podSelector. The traffic must match both ports and from. Type object Property Type Description from array List of sources which should be able to access the pods selected for this rule. Items in this list are combined using a logical OR operation. If this field is empty or missing, this rule matches all sources (traffic not restricted by source). 
If this field is present and contains at least one item, this rule allows traffic only if the traffic matches at least one item in the from list. from[] object NetworkPolicyPeer describes a peer to allow traffic to/from. Only certain combinations of fields are allowed ports array List of ports which should be made accessible on the pods selected for this rule. Each item in this list is combined using a logical OR. If this field is empty or missing, this rule matches all ports (traffic not restricted by port). If this field is present and contains at least one item, then this rule allows traffic only if the traffic matches at least one port in the list. ports[] object NetworkPolicyPort describes a port to allow traffic on 13.1.11. .spec.ingress[].from Description List of sources which should be able to access the pods selected for this rule. Items in this list are combined using a logical OR operation. If this field is empty or missing, this rule matches all sources (traffic not restricted by source). If this field is present and contains at least one item, this rule allows traffic only if the traffic matches at least one item in the from list. Type array 13.1.12. .spec.ingress[].from[] Description NetworkPolicyPeer describes a peer to allow traffic to/from. Only certain combinations of fields are allowed Type object Property Type Description ipBlock object IPBlock describes a particular CIDR (Ex. "192.168.1.0/24","2001:db8::/64") that is allowed to the pods matched by a NetworkPolicySpec's podSelector. The except entry describes CIDRs that should not be included within this rule. namespaceSelector LabelSelector Selects Namespaces using cluster-scoped labels. This field follows standard label selector semantics; if present but empty, it selects all namespaces. If PodSelector is also set, then the NetworkPolicyPeer as a whole selects the Pods matching PodSelector in the Namespaces selected by NamespaceSelector. Otherwise it selects all Pods in the Namespaces selected by NamespaceSelector. podSelector LabelSelector This is a label selector which selects Pods. This field follows standard label selector semantics; if present but empty, it selects all pods. If NamespaceSelector is also set, then the NetworkPolicyPeer as a whole selects the Pods matching PodSelector in the Namespaces selected by NamespaceSelector. Otherwise it selects the Pods matching PodSelector in the policy's own Namespace. 13.1.13. .spec.ingress[].from[].ipBlock Description IPBlock describes a particular CIDR (Ex. "192.168.1.0/24","2001:db8::/64") that is allowed to the pods matched by a NetworkPolicySpec's podSelector. The except entry describes CIDRs that should not be included within this rule. Type object Required cidr Property Type Description cidr string CIDR is a string representing the IP Block Valid examples are "192.168.1.0/24" or "2001:db8::/64" except array (string) Except is a slice of CIDRs that should not be included within an IP Block Valid examples are "192.168.1.0/24" or "2001:db8::/64" Except values will be rejected if they are outside the CIDR range 13.1.14. .spec.ingress[].ports Description List of ports which should be made accessible on the pods selected for this rule. Each item in this list is combined using a logical OR. If this field is empty or missing, this rule matches all ports (traffic not restricted by port). If this field is present and contains at least one item, then this rule allows traffic only if the traffic matches at least one port in the list. Type array 13.1.15. 
.spec.ingress[].ports[] Description NetworkPolicyPort describes a port to allow traffic on Type object Property Type Description endPort integer If set, indicates that the range of ports from port to endPort, inclusive, should be allowed by the policy. This field cannot be defined if the port field is not defined or if the port field is defined as a named (string) port. The endPort must be equal to or greater than port. port IntOrString The port on the given protocol. This can either be a numerical or named port on a pod. If this field is not provided, this matches all port names and numbers. If present, only traffic on the specified protocol AND port will be matched. protocol string The protocol (TCP, UDP, or SCTP) which traffic must match. If not specified, this field defaults to TCP. 13.1.16. .status Description NetworkPolicyStatus describes the current state of the NetworkPolicy. Type object Property Type Description conditions array (Condition) Conditions holds an array of metav1.Condition that describe the current state of the NetworkPolicy. 13.2. API endpoints The following API endpoints are available: /apis/networking.k8s.io/v1/networkpolicies GET : list or watch objects of kind NetworkPolicy /apis/networking.k8s.io/v1/watch/networkpolicies GET : watch individual changes to a list of NetworkPolicy. deprecated: use the 'watch' parameter with a list operation instead. /apis/networking.k8s.io/v1/namespaces/{namespace}/networkpolicies DELETE : delete collection of NetworkPolicy GET : list or watch objects of kind NetworkPolicy POST : create a NetworkPolicy /apis/networking.k8s.io/v1/watch/namespaces/{namespace}/networkpolicies GET : watch individual changes to a list of NetworkPolicy. deprecated: use the 'watch' parameter with a list operation instead. /apis/networking.k8s.io/v1/namespaces/{namespace}/networkpolicies/{name} DELETE : delete a NetworkPolicy GET : read the specified NetworkPolicy PATCH : partially update the specified NetworkPolicy PUT : replace the specified NetworkPolicy /apis/networking.k8s.io/v1/watch/namespaces/{namespace}/networkpolicies/{name} GET : watch changes to an object of kind NetworkPolicy. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/networking.k8s.io/v1/namespaces/{namespace}/networkpolicies/{name}/status GET : read status of the specified NetworkPolicy PATCH : partially update status of the specified NetworkPolicy PUT : replace status of the specified NetworkPolicy 13.2.1. /apis/networking.k8s.io/v1/networkpolicies Table 13.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind NetworkPolicy Table 13.2. 
HTTP responses HTTP code Reponse body 200 - OK NetworkPolicyList schema 401 - Unauthorized Empty 13.2.2. /apis/networking.k8s.io/v1/watch/networkpolicies Table 13.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. 
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of NetworkPolicy. deprecated: use the 'watch' parameter with a list operation instead. Table 13.4. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 13.2.3. /apis/networking.k8s.io/v1/namespaces/{namespace}/networkpolicies Table 13.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 13.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of NetworkPolicy Table 13.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be a non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. 
labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 13.8. Body parameters Parameter Type Description body DeleteOptions schema Table 13.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind NetworkPolicy Table 13.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 13.11. HTTP responses HTTP code Reponse body 200 - OK NetworkPolicyList schema 401 - Unauthorized Empty HTTP method POST Description create a NetworkPolicy Table 13.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.13. Body parameters Parameter Type Description body NetworkPolicy schema Table 13.14. HTTP responses HTTP code Reponse body 200 - OK NetworkPolicy schema 201 - Created NetworkPolicy schema 202 - Accepted NetworkPolicy schema 401 - Unauthorized Empty 13.2.4. /apis/networking.k8s.io/v1/watch/namespaces/{namespace}/networkpolicies Table 13.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 13.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of NetworkPolicy. deprecated: use the 'watch' parameter with a list operation instead. Table 13.17. 
HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 13.2.5. /apis/networking.k8s.io/v1/namespaces/{namespace}/networkpolicies/{name} Table 13.18. Global path parameters Parameter Type Description name string name of the NetworkPolicy namespace string object name and auth scope, such as for teams and projects Table 13.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a NetworkPolicy Table 13.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 13.21. Body parameters Parameter Type Description body DeleteOptions schema Table 13.22. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified NetworkPolicy Table 13.23. HTTP responses HTTP code Reponse body 200 - OK NetworkPolicy schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified NetworkPolicy Table 13.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 13.25. Body parameters Parameter Type Description body Patch schema Table 13.26. HTTP responses HTTP code Reponse body 200 - OK NetworkPolicy schema 201 - Created NetworkPolicy schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified NetworkPolicy Table 13.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.28. Body parameters Parameter Type Description body NetworkPolicy schema Table 13.29. HTTP responses HTTP code Reponse body 200 - OK NetworkPolicy schema 201 - Created NetworkPolicy schema 401 - Unauthorized Empty 13.2.6. /apis/networking.k8s.io/v1/watch/namespaces/{namespace}/networkpolicies/{name} Table 13.30. 
Global path parameters Parameter Type Description name string name of the NetworkPolicy namespace string object name and auth scope, such as for teams and projects Table 13.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. 
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind NetworkPolicy. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 13.32. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 13.2.7. /apis/networking.k8s.io/v1/namespaces/{namespace}/networkpolicies/{name}/status Table 13.33. Global path parameters Parameter Type Description name string name of the NetworkPolicy namespace string object name and auth scope, such as for teams and projects Table 13.34. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified NetworkPolicy Table 13.35. HTTP responses HTTP code Reponse body 200 - OK NetworkPolicy schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified NetworkPolicy Table 13.36. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. 
- Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 13.37. Body parameters Parameter Type Description body Patch schema Table 13.38. HTTP responses HTTP code Response body 200 - OK NetworkPolicy schema 201 - Created NetworkPolicy schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified NetworkPolicy Table 13.39. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.40. Body parameters Parameter Type Description body NetworkPolicy schema Table 13.41. HTTP responses HTTP code Response body 200 - OK NetworkPolicy schema 201 - Created NetworkPolicy schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/network_apis/networkpolicy-networking-k8s-io-v1 
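To make the schema reference above easier to follow, the manifest below is a minimal illustrative sketch that combines the spec fields documented in section 13.1: a podSelector, explicit policyTypes, an ingress rule using a namespaceSelector with a port, and an egress rule using an ipBlock with an except list and an endPort range. The namespace, labels, CIDRs, and port numbers are invented example values, not values taken from this reference.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-policy          # hypothetical name
  namespace: example-ns         # hypothetical namespace
spec:
  podSelector:                  # selects the pods this policy isolates
    matchLabels:
      app: example-db
  policyTypes:
  - Ingress
  - Egress                      # listed explicitly so the egress rules below take effect
  ingress:
  - from:                       # peers in the from list are combined with a logical OR
    - namespaceSelector:
        matchLabels:
          team: example-frontend
    ports:
    - protocol: TCP
      port: 5432
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/16       # example CIDR
        except:
        - 10.0.5.0/24           # example excluded sub-range
    ports:
    - protocol: TCP
      port: 32000
      endPort: 32100            # endPort must be equal to or greater than port; requires a numeric port

Creating such an object corresponds to a POST to /apis/networking.k8s.io/v1/namespaces/{namespace}/networkpolicies (Tables 13.12 to 13.14), and it can be read back, patched, or replaced through the {name} endpoint described in section 13.2.5.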
6.2. RHEA-2013:0484 - new packages: hypervkvpd | 6.2. RHEA-2013:0484 - new packages: hypervkvpd New hypervkvpd packages are now available for Red Hat Enterprise Linux 6. The hypervkvpd packages contain hypervkvpd, the guest Hyper-V Key-Value Pair (KVP) daemon. Using VMbus, hypervkvpd passes basic information to the host. The information includes guest IP address, fully qualified domain name, operating system name, and operating system release number. An IP injection functionality is also provided which allows you to change the IP address of a guest from the host via the hypervkvpd daemon. This enhancement update adds the hypervkvpd packages to Red Hat Enterprise Linux 6. For more information about inclusion of, and guest installation support for, Microsoft Hyper-V drivers, refer to the Red Hat Enterprise Linux 6.4 Release Notes. (BZ#850674) All users who require hypervkvpd are advised to install these new packages. After installing the hypervkvpd packages, rebooting all guest machines is recommended, otherwise the Microsoft Windows server with Hyper-V might not be able to get information from these guest machines. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/rhea-2013-0484 |
Eclipse Vert.x 4.3 Migration Guide | Eclipse Vert.x 4.3 Migration Guide Red Hat build of Eclipse Vert.x 4.3 For use with Eclipse Vert.x 4.3 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_eclipse_vert.x/4.3/html/eclipse_vert.x_4.3_migration_guide/index |
Chapter 3. Using SAML to secure applications and services | Chapter 3. Using SAML to secure applications and services This section describes how you can secure applications and services with SAML using either Red Hat build of Keycloak client adapters or generic SAML provider libraries. 3.1. Red Hat build of Keycloak Java adapters Red Hat build of Keycloak comes with a range of different adapters for Java applications. Selecting the correct adapter depends on the target platform. 3.1.1. Red Hat JBoss Enterprise Application Platform 3.1.1.1. 8.0 Beta Red Hat build of Keycloak provides a SAML adapter for Red Hat Enterprise Application Platform 8.0 Beta. However, the documentation is not currently available and will be added in the near future. 3.1.1.2. 6.4 and 7.x Existing applications deployed to Red Hat JBoss Enterprise Application Platform 6.4 and 7.x can leverage adapters from Red Hat Single Sign-On 7.6 in combination with the Red Hat build of Keycloak server. For more information, see the Red Hat Single Sign-On documentation. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/securing_applications_and_services_guide/using_saml_to_secure_applications_and_services 