title | content | commands | url |
---|---|---|---|
2.3. Compatible Hardware | 2.3. Compatible Hardware For information about hardware that is compatible with Red Hat Cluster Suite components (for example, supported fence devices, storage devices, and Fibre Channel switches), refer to the hardware configuration guidelines at http://www.redhat.com/cluster_suite/hardware/ . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_suite_overview/s1-hw-compat-cso |
Chapter 9. Batch Processing | Chapter 9. Batch Processing Batch processing allows multiple operation requests to be grouped in a sequence and executed together as a unit. If any of the operation requests in the sequence fails, the entire group of operations is rolled back. Note Batch mode does not support conditional statements. Enter batch mode with the batch management CLI command. Batch mode is indicated by the hash symbol ( # ) in the prompt. Add operation requests to the batch. Once in batch mode, enter operation requests as normal. The operation requests are added to the batch in the order they are entered. You can edit and reorder batch commands. You can also store a batch for processing at a later time. See Batch Mode Commands for a full list of commands available for working with batches. Run the batch. Once the entire sequence of operation requests is entered, run the batch with the run-batch command. The entered sequence of operation requests is executed as a batch, and the result is printed to the terminal: The batch executed successfully. Batch Commands in External Files Frequently-run batch commands can be stored in an external text file and can be loaded either by passing the full path to the file as an argument to the batch command or executed directly by being passed as an argument to the run-batch command. You can create a batch command file by using a text editor and placing each command on its own line (see the sample batch file sketched below). The following command loads the myscript.txt file in batch mode. The commands from this file can then be edited or removed, and new commands can be inserted. Changes made in this batch session do not persist to the myscript.txt file. The following command immediately runs the batch commands stored in the file myscript.txt. The entered sequence of operation requests is executed as a batch. | [
"batch",
"run-batch",
"batch --file=myscript.txt",
"run-batch --file=myscript.txt"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/management_cli_guide/batch_processing |
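The chapter above describes creating a batch command file with one management CLI operation per line. The following is a minimal sketch of what such a myscript.txt might contain; the resource addresses and attribute values are illustrative placeholders, not taken from the chapter itself (ExampleDS is the datasource name present in a stock configuration):

```
/system-property=batch.example:add(value=true)
/subsystem=datasources/data-source=ExampleDS:write-attribute(name=min-pool-size,value=5)
/subsystem=datasources/data-source=ExampleDS:write-attribute(name=max-pool-size,value=20)
```

Loading the file with batch --file=myscript.txt stages all three operations for editing and review, while run-batch --file=myscript.txt executes them immediately as a single unit; if any one operation fails, all three are rolled back.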
Chapter 6. Configuring action log storage for Elasticsearch and Splunk | Chapter 6. Configuring action log storage for Elasticsearch and Splunk By default, usage logs are stored in the Red Hat Quay database and exposed through the web UI on organization and repository levels. Appropriate administrative privileges are required to see log entries. For deployments with a large number of logged operations, you can store the usage logs in Elasticsearch and Splunk instead of the Red Hat Quay database backend. 6.1. Configuring action log storage for Elasticsearch Note To configure action log storage for Elasticsearch, you must provide your own Elasticsearch stack; it is not included with Red Hat Quay as a customizable component. Enabling Elasticsearch logging can be done during Red Hat Quay deployment or post-deployment by updating your config.yaml file. When configured, usage log access continues to be provided through the web UI for repositories and organizations. Use the following procedure to configure action log storage for Elasticsearch: Procedure Obtain an Elasticsearch account. Update your Red Hat Quay config.yaml file to include the following information: # ... LOGS_MODEL: elasticsearch 1 LOGS_MODEL_CONFIG: producer: elasticsearch 2 elasticsearch_config: host: http://<host.elasticsearch.example>:<port> 3 port: 9200 4 access_key: <access_key> 5 secret_key: <secret_key> 6 use_ssl: True 7 index_prefix: <logentry> 8 aws_region: <us-east-1> 9 # ... 1 The method for handling log data. 2 Choose either elasticsearch to send logs directly to Elasticsearch, or kinesis to direct logs to an intermediate Kinesis stream on AWS. If you choose kinesis, you need to set up your own pipeline to send logs from Kinesis to Elasticsearch, for example, Logstash (a sample pipeline is sketched at the end of this chapter). 3 The hostname or IP address of the system providing the Elasticsearch service. 4 The port number providing the Elasticsearch service on the host you just entered. Note that the port must be accessible from all systems running the Red Hat Quay registry. The default is TCP port 9200 . 5 The access key needed to gain access to the Elasticsearch service, if required. 6 The secret key needed to gain access to the Elasticsearch service, if required. 7 Whether to use SSL/TLS for Elasticsearch. Defaults to True . 8 Choose a prefix to attach to log entries. 9 If you are running on AWS, set the AWS region (otherwise, leave it blank). Optional. If you are using Kinesis as your logs producer, you must include the following fields in your config.yaml file: kinesis_stream_config: stream_name: <kinesis_stream_name> 1 access_key: <aws_access_key> 2 secret_key: <aws_secret_key> 3 aws_region: <aws_region> 4 1 The name of the Kinesis stream. 2 The name of the AWS access key needed to gain access to the Kinesis stream, if required. 3 The name of the AWS secret key needed to gain access to the Kinesis stream, if required. 4 The Amazon Web Services (AWS) region. Save your config.yaml file and restart your Red Hat Quay deployment. (A connectivity check for the Elasticsearch endpoint is sketched at the end of this chapter.) 6.2. Configuring action log storage for Splunk Splunk is an alternative to Elasticsearch that can provide log analysis for your Red Hat Quay data. Enabling Splunk logging can be done during Red Hat Quay deployment or post-deployment using the configuration tool. Configuration includes the option to forward action logs either directly to Splunk or to the Splunk HTTP Event Collector (HEC). Use the following procedures to enable Splunk for your Red Hat Quay deployment. 6.2.1. Installing and creating a username for Splunk Use the following procedure to install and create Splunk credentials.
Procedure Create a Splunk account by navigating to Splunk and entering the required credentials. Navigate to the Splunk Enterprise Free Trial page, select your platform and installation package, and then click Download Now . Install the Splunk software on your machine. When prompted, create a username, for example, splunk_admin, and a password. After creating a username and password, a localhost URL will be provided for your Splunk deployment, for example, http://<sample_url>.remote.csb:8000/ . Open the URL in your preferred browser. Log in with the username and password you created during installation. You are directed to the Splunk UI. 6.2.2. Generating a Splunk token Use one of the following procedures to create a bearer token for Splunk. 6.2.2.1. Generating a Splunk token using the Splunk UI Use the following procedure to create a bearer token for Splunk using the Splunk UI. Prerequisites You have installed Splunk and created a username. Procedure On the Splunk UI, navigate to Settings Tokens . Click Enable Token Authentication . Ensure that Token Authentication is enabled by clicking Token Settings and selecting Token Authentication if necessary. Optional: Set the expiration time for your token. This defaults to 30 days. Click Save . Click New Token . Enter information for User and Audience . Optional: Set the Expiration and Not Before information. Click Create . Your token appears in the Token box. Copy the token immediately. Important If you close out of the box before copying the token, you must create a new token. The token in its entirety is not available after closing the New Token window. 6.2.2.2. Generating a Splunk token using the CLI Use the following procedure to create a bearer token for Splunk using the CLI. Prerequisites You have installed Splunk and created a username. Procedure In your CLI, enter the following curl command to enable token authentication, passing in your Splunk username and password: $ curl -k -u <username>:<password> -X POST <scheme>://<host>:<port>/services/admin/token-auth/tokens_auth -d disabled=false Create a token by entering the following curl command, passing in your Splunk username and password. $ curl -k -u <username>:<password> -X POST <scheme>://<host>:<port>/services/authorization/tokens?output_mode=json --data name=<username> --data audience=Users --data-urlencode expires_on=+30d Save the generated bearer token. 6.2.3. Configuring Red Hat Quay to use Splunk Use the following procedure to configure Red Hat Quay to use Splunk or the Splunk HTTP Event Collector (HEC). Prerequisites You have installed Splunk and created a username. You have generated a Splunk bearer token. Procedure Configure Red Hat Quay to use Splunk or the Splunk HTTP Event Collector (HEC). If opting to use Splunk, open your Red Hat Quay config.yaml file and add the following configuration fields: # ... LOGS_MODEL: splunk LOGS_MODEL_CONFIG: producer: splunk splunk_config: host: http://<user_name>.remote.csb 1 port: 8089 2 bearer_token: <bearer_token> 3 url_scheme: <http/https> 4 verify_ssl: False 5 index_prefix: <splunk_log_index_name> 6 ssl_ca_path: <location_to_ssl-ca-cert.pem> 7 # ... 1 String. The Splunk cluster endpoint. 2 Integer. The Splunk management cluster endpoint port. Differs from the Splunk GUI hosted port. Can be found on the Splunk UI under Settings Server Settings General Settings . 3 String. The generated bearer token for Splunk. 4 String. The URL scheme for accessing the Splunk service. If Splunk is configured to use TLS/SSL, this must be https . 5 Boolean.
Whether to enable SSL/TLS verification. Defaults to true . 6 String. The Splunk index prefix. Can be a new or existing index. Can be created from the Splunk UI. 7 String. The relative container path to a single .pem file containing a certificate authority (CA) for TLS/SSL validation. If opting to use Splunk HEC, open your Red Hat Quay config.yaml file and add the following configuration fields: # ... LOGS_MODEL: splunk LOGS_MODEL_CONFIG: producer: splunk_hec 1 splunk_hec_config: 2 host: prd-p-aaaaaq.splunkcloud.com 3 port: 8088 4 hec_token: 12345678-1234-1234-1234-1234567890ab 5 url_scheme: https 6 verify_ssl: False 7 index: quay 8 splunk_host: quay-dev 9 splunk_sourcetype: quay_logs 10 # ... 1 Specify splunk_hec when configuring Splunk HEC. 2 Logs model configuration for the Splunk HTTP Event Collector (HEC). 3 The Splunk cluster endpoint. 4 Splunk management cluster endpoint port. 5 HEC token for Splunk. 6 The URL scheme for accessing the Splunk service. If Splunk is behind SSL/TLS, this must be https . 7 Boolean. Enable (true) or disable (false) SSL/TLS verification for HTTPS connections. 8 The Splunk index to use. 9 The hostname to log this event with. 10 The name of the Splunk sourcetype to use. (A curl-based HEC connectivity test is sketched at the end of this chapter.) If you are configuring ssl_ca_path , you must configure the SSL/TLS certificate so that Red Hat Quay will trust it. If you are using a standalone deployment of Red Hat Quay, SSL/TLS certificates can be provided by placing the certificate file inside of the extra_ca_certs directory, or inside of the relative container path and specified by ssl_ca_path . If you are using the Red Hat Quay Operator, create a config bundle secret, including the certificate authority (CA) of the Splunk server. For example: $ oc create secret generic --from-file config.yaml=./config_390.yaml --from-file extra_ca_cert_splunkserver.crt=./splunkserver.crt config-bundle-secret Specify the conf/stack/extra_ca_certs/splunkserver.crt file in your config.yaml . For example: # ... LOGS_MODEL: splunk LOGS_MODEL_CONFIG: producer: splunk splunk_config: host: ec2-12-345-67-891.us-east-2.compute.amazonaws.com port: 8089 bearer_token: eyJra url_scheme: https verify_ssl: true index_prefix: quay123456 ssl_ca_path: conf/stack/splunkserver.crt # ... 6.2.4. Creating an action log Use the following procedure to create a user account that can forward action logs to Splunk. Important You must use the Splunk UI to view Red Hat Quay action logs. At this time, viewing Splunk action logs on the Red Hat Quay Usage Logs page is unsupported, and returns the following message: Method not implemented. Splunk does not support log lookups . Prerequisites You have installed Splunk and created a username. You have generated a Splunk bearer token. You have configured your Red Hat Quay config.yaml file to enable Splunk. Procedure Log in to your Red Hat Quay deployment. Click on the name of the organization that you will use to create an action log for Splunk. In the navigation pane, click Robot Accounts Create Robot Account . When prompted, enter a name for the robot account, for example, splunkrobotaccount , then click Create robot account . On your browser, open the Splunk UI. Click Search and Reporting . In the search bar, enter the name of your index, for example, <splunk_log_index_name> and press Enter . The search results populate on the Splunk UI. Logs are forwarded in JSON format.
A response might look similar to the following: { "log_data": { "kind": "authentication", 1 "account": "quayuser123", 2 "performer": "John Doe", 3 "repository": "projectQuay", 4 "ip": "192.168.1.100", 5 "metadata_json": {...}, 6 "datetime": "2024-02-06T12:30:45Z" 7 } } 1 Specifies the type of log event. In this example, authentication indicates that the log entry relates to an authentication event. 2 The user account involved in the event. 3 The individual who performed the action. 4 The repository associated with the event. 5 The IP address from which the action was performed. 6 Might contain additional metadata related to the event. 7 The timestamp of when the event occurred. 6.3. Understanding usage logs By default, usage logs are stored in the Red Hat Quay database. They are exposed through the web UI, on the organization and repository levels, and in the Superuser Admin Panel . Database logs capture a wide range of events in Red Hat Quay, such as the changing of account plans, user actions, and general operations. Log entries include information such as the action performed ( kind_id ), the user who performed the action ( account_id or performer_id ), the timestamp ( datetime ), and other relevant data associated with the action ( metadata_json ). 6.3.1. Viewing database logs The following procedure shows you how to view repository logs that are stored in a PostgreSQL database. Prerequisites You have administrative privileges. You have installed the psql CLI tool. Procedure Enter the following command to log in to your Red Hat Quay PostgreSQL database: $ psql -h <quay-server.example.com> -p 5432 -U <user_name> -d <database_name> Example output psql (16.1, server 13.7) Type "help" for help. Optional. Enter the following command to list the tables of your PostgreSQL database: quay=> \dt Example output List of relations Schema | Name | Type | Owner --------+----------------------------+-------+---------- public | logentry | table | quayuser public | logentry2 | table | quayuser public | logentry3 | table | quayuser public | logentrykind | table | quayuser ... Enter the following command to return the list of repository_id values that are needed to look up log information: quay=> SELECT id, name FROM repository; Example output id | name ----+--------------------- 3 | new_repository_name 6 | api-repo 7 | busybox ... Enter the following command to use the logentry3 relation to show log information about one of your repositories: SELECT * FROM logentry3 WHERE repository_id = <repository_id>; Example output id | kind_id | account_id | performer_id | repository_id | datetime | ip | metadata_json 59 | 14 | 2 | 1 | 6 | 2024-05-13 15:51:01.897189 | 192.168.1.130 | {"repo": "api-repo", "namespace": "test-org"} In the above example, the following information is returned: { "log_data": { "id": 59, 1 "kind_id": "14", 2 "account_id": "2", 3 "performer_id": "1", 4 "repository_id": "6", 5 "ip": "192.168.1.130", 6 "metadata_json": {"repo": "api-repo", "namespace": "test-org"}, 7 "datetime": "2024-05-13 15:51:01.897189" 8 } } 1 The unique identifier for the log entry. 2 The action that was performed. In this example, it was 14 . The table in the following section shows that this kind_id relates to the creation of a repository. 3 The account that performed the action. 4 The performer of the action. 5 The repository that the action was done on. In this example, 6 correlates to the api-repo that was discovered in Step 3. 6 The IP address from which the action was performed.
7 Metadata information, including the name of the repository and its namespace. 8 The time when the action was performed. 6.3.2. Log entry kind_ids The following table lists the kind_id values associated with Red Hat Quay actions (a query that resolves these values against the logentrykind table is sketched at the end of this chapter).
kind_id | Action | Description
---|---|---
1 | account_change_cc | Change of credit card information.
2 | account_change_password | Change of account password.
3 | account_change_plan | Change of account plan.
4 | account_convert | Account conversion.
5 | add_repo_accesstoken | Adding an access token to a repository.
6 | add_repo_notification | Adding a notification to a repository.
7 | add_repo_permission | Adding permissions to a repository.
8 | add_repo_webhook | Adding a webhook to a repository.
9 | build_dockerfile | Building a Dockerfile.
10 | change_repo_permission | Changing permissions of a repository.
11 | change_repo_visibility | Changing the visibility of a repository.
12 | create_application | Creating an application.
13 | create_prototype_permission | Creating permissions for a prototype.
14 | create_repo | Creating a repository.
15 | create_robot | Creating a robot (service account or bot).
16 | create_tag | Creating a tag.
17 | delete_application | Deleting an application.
18 | delete_prototype_permission | Deleting permissions for a prototype.
19 | delete_repo | Deleting a repository.
20 | delete_repo_accesstoken | Deleting an access token from a repository.
21 | delete_repo_notification | Deleting a notification from a repository.
22 | delete_repo_permission | Deleting permissions from a repository.
23 | delete_repo_trigger | Deleting a repository trigger.
24 | delete_repo_webhook | Deleting a webhook from a repository.
25 | delete_robot | Deleting a robot.
26 | delete_tag | Deleting a tag.
27 | manifest_label_add | Adding a label to a manifest.
28 | manifest_label_delete | Deleting a label from a manifest.
29 | modify_prototype_permission | Modifying permissions for a prototype.
30 | move_tag | Moving a tag.
31 | org_add_team_member | Adding a member to a team.
32 | org_create_team | Creating a team within an organization.
33 | org_delete_team | Deleting a team within an organization.
34 | org_delete_team_member_invite | Deleting a team member invitation.
35 | org_invite_team_member | Inviting a member to a team in an organization.
36 | org_remove_team_member | Removing a member from a team.
37 | org_set_team_description | Setting the description of a team.
38 | org_set_team_role | Setting the role of a team.
39 | org_team_member_invite_accepted | Acceptance of a team member invitation.
40 | org_team_member_invite_declined | Declining of a team member invitation.
41 | pull_repo | Pull from a repository.
42 | push_repo | Push to a repository.
43 | regenerate_robot_token | Regenerating a robot token.
44 | repo_verb | Generic repository action (specifics might be defined elsewhere).
45 | reset_application_client_secret | Resetting the client secret of an application.
46 | revert_tag | Reverting a tag.
47 | service_key_approve | Approving a service key.
48 | service_key_create | Creating a service key.
49 | service_key_delete | Deleting a service key.
50 | service_key_extend | Extending a service key.
51 | service_key_modify | Modifying a service key.
52 | service_key_rotate | Rotating a service key.
53 | setup_repo_trigger | Setting up a repository trigger.
54 | set_repo_description | Setting the description of a repository.
55 | take_ownership | Taking ownership of a resource.
56 | update_application | Updating an application.
57 | change_repo_trust | Changing the trust level of a repository.
58 | reset_repo_notification | Resetting repository notifications.
59 | change_tag_expiration | Changing the expiration date of a tag.
60 | create_app_specific_token | Creating an application-specific token.
61 | revoke_app_specific_token | Revoking an application-specific token.
62 | toggle_repo_trigger | Toggling a repository trigger on or off.
63 | repo_mirror_enabled | Enabling repository mirroring.
64 | repo_mirror_disabled | Disabling repository mirroring.
65 | repo_mirror_config_changed | Changing the configuration of repository mirroring.
66 | repo_mirror_sync_started | Starting a repository mirror sync.
67 | repo_mirror_sync_failed | Repository mirror sync failed.
68 | repo_mirror_sync_success | Repository mirror sync succeeded.
69 | repo_mirror_sync_now_requested | Immediate repository mirror sync requested.
70 | repo_mirror_sync_tag_success | Repository mirror tag sync succeeded.
71 | repo_mirror_sync_tag_failed | Repository mirror tag sync failed.
72 | repo_mirror_sync_test_success | Repository mirror sync test succeeded.
73 | repo_mirror_sync_test_failed | Repository mirror sync test failed.
74 | repo_mirror_sync_test_started | Repository mirror sync test started.
75 | change_repo_state | Changing the state of a repository.
76 | create_proxy_cache_config | Creating proxy cache configuration.
77 | delete_proxy_cache_config | Deleting proxy cache configuration.
78 | start_build_trigger | Starting a build trigger.
79 | cancel_build | Cancelling a build.
80 | org_create | Creating an organization.
81 | org_delete | Deleting an organization.
82 | org_change_email | Changing organization email.
83 | org_change_invoicing | Changing organization invoicing.
84 | org_change_tag_expiration | Changing organization tag expiration.
85 | org_change_name | Changing organization name.
86 | user_create | Creating a user.
87 | user_delete | Deleting a user.
88 | user_disable | Disabling a user.
89 | user_enable | Enabling a user.
90 | user_change_email | Changing user email.
91 | user_change_password | Changing user password.
92 | user_change_name | Changing user name.
93 | user_change_invoicing | Changing user invoicing.
94 | user_change_tag_expiration | Changing user tag expiration.
95 | user_change_metadata | Changing user metadata.
96 | user_generate_client_key | Generating a client key for a user.
97 | login_success | Successful login.
98 | logout_success | Successful logout.
99 | permanently_delete_tag | Permanently deleting a tag.
100 | autoprune_tag_delete | Auto-pruning tag deletion.
101 | create_namespace_autoprune_policy | Creating namespace auto-prune policy.
102 | update_namespace_autoprune_policy | Updating namespace auto-prune policy.
103 | delete_namespace_autoprune_policy | Deleting namespace auto-prune policy.
104 | login_failure | Failed login attempt.
| [
"LOGS_MODEL: elasticsearch 1 LOGS_MODEL_CONFIG: producer: elasticsearch 2 elasticsearch_config: host: http://<host.elasticsearch.example>:<port> 3 port: 9200 4 access_key: <access_key> 5 secret_key: <secret_key> 6 use_ssl: True 7 index_prefix: <logentry> 8 aws_region: <us-east-1> 9",
"kinesis_stream_config: stream_name: <kinesis_stream_name> 1 access_key: <aws_access_key> 2 secret_key: <aws_secret_key> 3 aws_region: <aws_region> 4",
"curl -k -u <username>:<password> -X POST <scheme>://<host>:<port>/services/admin/token-auth/tokens_auth -d disabled=false",
"curl -k -u <username>:<password> -X POST <scheme>://<host>:<port>/services/authorization/tokens?output_mode=json --data name=<username> --data audience=Users --data-urlencode expires_on=+30d",
"LOGS_MODEL: splunk LOGS_MODEL_CONFIG: producer: splunk splunk_config: host: http://<user_name>.remote.csb 1 port: 8089 2 bearer_token: <bearer_token> 3 url_scheme: <http/https> 4 verify_ssl: False 5 index_prefix: <splunk_log_index_name> 6 ssl_ca_path: <location_to_ssl-ca-cert.pem> 7",
"LOGS_MODEL: splunk LOGS_MODEL_CONFIG: producer: splunk_hec 1 splunk_hec_config: 2 host: prd-p-aaaaaq.splunkcloud.com 3 port: 8088 4 hec_token: 12345678-1234-1234-1234-1234567890ab 5 url_scheme: https 6 verify_ssl: False 7 index: quay 8 splunk_host: quay-dev 9 splunk_sourcetype: quay_logs 10",
"oc create secret generic --from-file config.yaml=./config_390.yaml --from-file extra_ca_cert_splunkserver.crt=./splunkserver.crt config-bundle-secret",
"LOGS_MODEL: splunk LOGS_MODEL_CONFIG: producer: splunk splunk_config: host: ec2-12-345-67-891.us-east-2.compute.amazonaws.com port: 8089 bearer_token: eyJra url_scheme: https verify_ssl: true index_prefix: quay123456 ssl_ca_path: conf/stack/splunkserver.crt",
"{ \"log_data\": { \"kind\": \"authentication\", 1 \"account\": \"quayuser123\", 2 \"performer\": \"John Doe\", 3 \"repository\": \"projectQuay\", 4 \"ip\": \"192.168.1.100\", 5 \"metadata_json\": {...}, 6 \"datetime\": \"2024-02-06T12:30:45Z\" 7 } }",
"psql -h <quay-server.example.com> -p 5432 -U <user_name> -d <database_name>",
"psql (16.1, server 13.7) Type \"help\" for help.",
"quay=> \\dt",
"List of relations Schema | Name | Type | Owner --------+----------------------------+-------+---------- public | logentry | table | quayuser public | logentry2 | table | quayuser public | logentry3 | table | quayuser public | logentrykind | table | quayuser",
"quay=> SELECT id, name FROM repository;",
"id | name ----+--------------------- 3 | new_repository_name 6 | api-repo 7 | busybox",
"SELECT * FROM logentry3 WHERE repository_id = <repository_id>;",
"id | kind_id | account_id | performer_id | repository_id | datetime | ip | metadata_json 59 | 14 | 2 | 1 | 6 | 2024-05-13 15:51:01.897189 | 192.168.1.130 | {\"repo\": \"api-repo\", \"namespace\": \"test-org\"}",
"{ \"log_data\": { \"id\": 59 1 \"kind_id\": \"14\", 2 \"account_id\": \"2\", 3 \"performer_id\": \"1\", 4 \"repository_id\": \"6\", 5 \"ip\": \"192.168.1.100\", 6 \"metadata_json\": {\"repo\": \"api-repo\", \"namespace\": \"test-org\"} 7 \"datetime\": \"2024-05-13 15:51:01.897189\" 8 } }"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/manage_red_hat_quay/proc_manage-log-storage |
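As a quick sanity check for the Elasticsearch configuration in Section 6.1, you can query the cluster health endpoint from a host running Red Hat Quay before restarting the deployment. This is only a sketch: the host, port, and credentials below are the placeholders from the example config.yaml, your Elasticsearch stack might not require authentication, and the scheme should be https if use_ssl is True:

```
$ curl -u <access_key>:<secret_key> "http://<host.elasticsearch.example>:9200/_cluster/health?pretty"
```

A green or yellow status in the JSON response indicates that the endpoint referenced by LOGS_MODEL_CONFIG is reachable.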
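Callout 2 of the Elasticsearch configuration notes that a Kinesis producer requires your own pipeline, for example Logstash, to move records from Kinesis into Elasticsearch. A minimal pipeline might look like the following sketch; the stream name, region, host, and index pattern are placeholders, and the kinesis input assumes the separately installed logstash-input-kinesis plugin (an assumption about your Logstash installation, not something Red Hat Quay provides):

```
input {
  kinesis {
    kinesis_stream_name => "<kinesis_stream_name>"
    region              => "<aws_region>"
    codec               => json
  }
}
output {
  elasticsearch {
    hosts => ["http://<host.elasticsearch.example>:9200"]
    index => "<logentry>-%{+YYYY-MM-dd}"
  }
}
```

The dated index suffix is one common convention; match it to whatever index_prefix scheme your deployment uses.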
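The Splunk HEC configuration in Section 6.2.3 can likewise be exercised with curl before enabling it in Red Hat Quay. The /services/collector/event endpoint and the Authorization: Splunk <token> header are the standard HEC API; the host, port, token, index, and sourcetype values below are the placeholders from the example above:

```
$ curl -k "https://prd-p-aaaaaq.splunkcloud.com:8088/services/collector/event" \
    -H "Authorization: Splunk 12345678-1234-1234-1234-1234567890ab" \
    -d '{"event": "Red Hat Quay HEC connectivity test", "index": "quay", "sourcetype": "quay_logs"}'
```

A {"text":"Success","code":0} response confirms that the token is valid and the index accepts events.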
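The kind_id values in Section 6.3.2 correspond to rows in the logentrykind table that appears in the \dt listing of Section 6.3.1. Assuming that table maps an id column to a name column (an assumption about the schema that the listing suggests but does not show), a join resolves the numeric kinds into readable action names:

```
quay=> SELECT l.id, k.name AS action, l.datetime, l.ip
       FROM logentry3 l
       JOIN logentrykind k ON k.id = l.kind_id
       WHERE l.repository_id = 6
       ORDER BY l.datetime DESC;
```

For the example entry shown in Section 6.3.1, this would return create_repo in the action column instead of the bare kind_id of 14.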
5.135. kernel | 5.135. kernel 5.135.1. RHSA-2013:1783 - Important: kernel security and bug fix update Updated kernel packages that fix three security issues and several bugs are now available for Red Hat Enterprise Linux 6.3 Extended Update Support. The Red Hat Security Response Team has rated this update as having important security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links associated with each description below. The kernel packages contain the Linux kernel, the core of any Linux operating system. Security Fixes CVE-2012-4508 , Important A race condition was found in the way asynchronous I/O and fallocate() interacted when using the ext4 file system. A local, unprivileged user could use this flaw to expose random data from an extent whose data blocks have not yet been written, and thus contain data from a deleted file. CVE-2013-4299 , Moderate An information leak flaw was found in the way the Linux kernel's device mapper subsystem, under certain conditions, interpreted data written to snapshot block devices. An attacker could use this flaw to read data from disk blocks in free space, which are normally inaccessible. CVE-2013-2851 , Low A format string flaw was found in the Linux kernel's block layer. A privileged, local user could potentially use this flaw to escalate their privileges to kernel level (ring0). Red Hat would like to thank Theodore Ts'o for reporting CVE-2012-4508, Fujitsu for reporting CVE-2013-4299, and Kees Cook for reporting CVE-2013-2851. Upstream acknowledges Dmitry Monakhov as the original reporter of CVE-2012-4508. Bug Fixes BZ#1016105 The crypto_larval_lookup() function could return a larval, an in-between state when a cryptographic algorithm is being registered, even if it did not create one. This could cause a larval to be terminated twice, and result in a kernel panic. This occurred, for example, when the NFS service was running in FIPS mode and attempted to use the MD5 hashing algorithm even though FIPS mode has this algorithm blacklisted. A condition has been added to the crypto_larval_lookup() function to check whether a larval was created before returning it. BZ#1017505, BZ#1017506 A change in the port auto-selection code allowed sharing of ports with no conflicts, extending its usage. Consequently, when binding a socket with the SO_REUSEADDR socket option enabled, the bind(2) function could allocate an ephemeral port that was already used. A subsequent connection attempt failed in such a case with the EADDRNOTAVAIL error code. This update applies a patch that modifies the port auto-selection code so that bind(2) now selects a non-conflicting port even with the SO_REUSEADDR option enabled. BZ#1017903 When the Audit subsystem was under heavy load, it could loop infinitely in the audit_log_start() function instead of failing over to the error recovery code. This could cause soft lockups in the kernel. With this update, the timeout condition in the audit_log_start() function has been modified to properly fail over when necessary. BZ#1020527 Previously, power-limit notification interrupts were enabled by default on the system. This could lead to degradation of system performance or even render the system unusable on certain platforms, such as Dell PowerEdge servers.
A patch has been applied to disable power-limit notification interrupts by default, and a new kernel command line parameter "int_pln_enable" has been added to allow users to observe these events using the existing system counters (a sketch of setting such a boot parameter appears at the end of this section). Power-limit notification messages are also no longer displayed on the console. The affected platforms no longer suffer from degraded system performance due to this problem. BZ#1023349 Previously, when the user added an IPv6 route for local delivery, the route did not work and packets could not be sent. A patch has been applied to limit neighbor entry creation to input flows only, thus fixing this bug. As a result, IPv6 routes for local delivery now work as expected. BZ#1028592 A bug in the kernel's file system code allowed the d_splice_alias() function to create a new dentry for a directory with an already-existing non-DISCONNECTED dentry. As a consequence, a thread accessing the directory could attempt to take the i_mutex on that directory twice, resulting in a deadlock situation. To resolve this problem, d_splice_alias() has been modified so that in the problematic cases, it reuses an existing dentry instead of creating a new one. BZ#1029423 The kernel's thread helper previously used larvals of the request threads without holding a reference count. This could result in a NULL pointer dereference and subsequent kernel panic if the helper thread completed after the larval had been destroyed upon the request thread exiting. With this update, the helper thread holds a reference count on the request threads' larvals so that a NULL pointer dereference is avoided. BZ#1029901 Due to a bug in the SELinux Makefile, a kernel compilation could fail when the "-j" option was specified to perform the compilation with multiple parallel jobs. This happened because SELinux expected the existence of an automatically generated file, "flask.h", prior to the compiling of some dependent files. The Makefile has been corrected and the "flask.h" dependency now applies to all objects from the "selinux-y" list. The parallel compilation of the kernel now succeeds as expected. All kernel users are advised to upgrade to these updated packages, which contain backported patches to correct these issues. The system must be rebooted for this update to take effect. 5.135.2. RHBA-2013:1104 - kernel bug fix update Updated kernel packages that fix several bugs are now available for Red Hat Enterprise Linux 6 Extended Update Support. The kernel packages contain the Linux kernel, the core of any Linux operating system. Bug Fixes BZ#969341 When adding a virtual PCI device, such as virtio disk, virtio net, e1000 or rtl8139, to a KVM guest, the kacpid thread reprograms the hot plug parameters of all devices on the PCI bus to which the new device is being added. When reprogramming the hot plug parameters of a VGA or QXL graphics device, the graphics device emulation requests flushing of the guest's shadow page tables. Previously, if the guest had a huge and complex set of shadow page tables, the flushing operation took a significant amount of time and the guest could appear to be unresponsive for several minutes. This resulted in exceeding the threshold of the "soft lockup" watchdog, and the "BUG: soft lockup" events were logged by both the guest and the host kernel. This update applies a series of patches that deal with this problem. The KVM's Memory Management Unit (MMU) now avoids creating multiple page table roots in connection with processors that support Extended Page Tables (EPT).
This prevents the guest's shadow page tables from becoming too complex on machines with EPT support. MMU now also flushes only large memory mappings, which alleviates the situation on machines where the processor does not support EPT. Additionally, a free memory accounting race that could prevent KVM MMU from freeing memory pages has been fixed. BZ#972599 When the Active Item List (AIL) becomes empty, the xfsaild daemon is moved to a task sleep state that depends on the timeout value returned by the xfsaild_push() function. The latest changes modified xfsaild_push() to return a 10-ms value when the AIL is empty, which set xfsaild into the uninterruptible sleep state (D state) and artificially increased the system load average. This update applies a patch that fixes this problem by setting the timeout value to the allowed maximum, 50 ms. This moves xfsaild to the interruptible sleep state (S state), avoiding the impact on load average. BZ#975577 A previously-applied patch introduced a bug in the ipoib_cm_destroy_tx() function, which allowed a CM object to be moved between lists without any proper locking. Under a heavy system load, this could cause the system to crash. With this update, proper locking of the CM objects has been re-introduced to fix the race condition, and the system no longer crashes under a heavy load. BZ#976695 The schedule_ipi() function is called in the hardware interrupt context and it raises the SCHED_SOFTIRQ software interrupts to perform system load balancing. Software interrupts in Linux are either performed on return from a hardware interrupt or are handled by the ksoftirqd daemon if the interrupts cannot be processed normally. Previously, the context of the schedule_ipi() function was not marked as a hardware interrupt, so while performing schedule_ipi(), the ksoftirqd daemon could have been triggered. When triggered, the daemon attempted to balance the system load. However, at that time, the load balancing had already been performed by the SCHED_SOFTIRQ software interrupt, so the ksoftirqd daemon attempted to balance the already-balanced system, which led to excessive consumption of CPU time. The problem has been resolved by adding irq_enter() and irq_exit() function calls to schedule IPI handlers, which ensures that the context of schedule_ipi() is correctly marked as a hardware interrupt and the ksoftirqd daemon is no longer triggered when the SCHED_SOFTIRQ interrupt has been raised. BZ#977667 A race condition between the read_swap_cache_async() and get_swap_page() functions in the Memory management (mm) code could lead to a deadlock situation. The deadlock could occur only on systems that deployed swap partitions on devices supporting block DISCARD and TRIM operations if kernel preemption was disabled (the !CONFIG_PREEMPT parameter). If the read_swap_cache_async() function was given a SWAP_HAS_CACHE entry that did not have a page in the swap cache yet, a DISCARD operation was performed in the scan_swap_map() function. Consequently, completion of an I/O operation was scheduled on the same CPU's working queue the read_swap_cache_async() was running on. This caused the thread in read_swap_cache_async() to loop indefinitely around its "-EEXIST" case, rendering the system unresponsive. The problem has been fixed by adding an explicit cond_resched() call to read_swap_cache_async(), which allows other tasks to run on the affected CPU, thus avoiding the deadlock. Users should upgrade to these updated packages, which contain backported patches to correct these bugs.
The system must be rebooted for this update to take effect. 5.135.3. RHSA-2013:0928 - Important: kernel security and bug fix update Updated kernel packages that fix several security issues and bugs are now available for Red Hat Enterprise Linux 6 Extended Update Support. The Red Hat Security Response Team has rated this update as having important security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links associated with each description below. The kernel packages contain the Linux kernel, the core of any Linux operating system. Security Fixes CVE-2013-0311 , Important A flaw was found in the way the vhost kernel module handled descriptors that spanned multiple regions. A privileged guest user in a KVM (Kernel-based Virtual Machine) guest could use this flaw to crash the host or, potentially, escalate their privileges on the host. CVE-2013-1773 , Important A buffer overflow flaw was found in the way UTF-8 characters were converted to UTF-16 in the utf8s_to_utf16s() function of the Linux kernel's FAT file system implementation. A local user able to mount a FAT file system with the "utf8=1" option could use this flaw to crash the system or, potentially, to escalate their privileges. CVE-2013-1796 , Important A flaw was found in the way KVM handled guest time updates when the buffer the guest registered by writing to the MSR_KVM_SYSTEM_TIME machine state register (MSR) crossed a page boundary. A privileged guest user could use this flaw to crash the host or, potentially, escalate their privileges, allowing them to execute arbitrary code at the host kernel level. CVE-2013-1797 , Important A potential use-after-free flaw was found in the way KVM handled guest time updates when the GPA (guest physical address) the guest registered by writing to the MSR_KVM_SYSTEM_TIME machine state register (MSR) fell into a movable or removable memory region of the hosting user-space process (by default, QEMU-KVM) on the host. If that memory region is deregistered from KVM using KVM_SET_USER_MEMORY_REGION and the allocated virtual memory reused, a privileged guest user could potentially use this flaw to escalate their privileges on the host. CVE-2013-1798 , Important A flaw was found in the way KVM emulated IOAPIC (I/O Advanced Programmable Interrupt Controller). A missing validation check in the ioapic_read_indirect() function could allow a privileged guest user to crash the host, or read a substantial portion of host kernel memory. CVE-2012-4542 , Moderate It was found that the default SCSI command filter does not accommodate commands that overlap across device classes. A privileged guest user could potentially use this flaw to write arbitrary data to a LUN that is passed-through as read-only. CVE-2013-1767 , Low A use-after-free flaw was found in the tmpfs implementation. A local user able to mount and unmount a tmpfs file system could use this flaw to cause a denial of service or, potentially, escalate their privileges. CVE-2013-1848 , Low A format string flaw was found in the ext3_msg() function in the Linux kernel's ext3 file system implementation. A local user who is able to mount an ext3 file system could use this flaw to cause a denial of service or, potentially, escalate their privileges. Red Hat would like to thank Andrew Honig of Google for reporting the CVE-2013-1796, CVE-2013-1797, and CVE-2013-1798 issues. The CVE-2012-4542 issue was discovered by Paolo Bonzini of Red Hat. 
Bug Fixes BZ# 952612 When pNFS (parallel NFS) code was in use, a file locking process could enter a deadlock while trying to recover from a server reboot. This update introduces a new locking mechanism that avoids the deadlock situation in this scenario. BZ# 955503 The be2iscsi driver previously leaked memory in the driver's control path when mapping tasks. This update fixes the memory leak by freeing all resources related to a task when the task was completed. Also, the driver did not release a task after responding to the received NOP-IN acknowledgment with a valid Target Transfer Tag (TTT). Consequently, the driver ran out of tasks available for the session and no more iSCSI commands could be issued. A patch has been applied to fix this problem by releasing the task. BZ# 956295 The virtual file system (VFS) code had a race condition between the unlink and link system calls that allowed creating hard links to deleted (unlinked) files. This could, under certain circumstances, cause inode corruption that eventually resulted in a file system shutdown. The problem was observed in Red Hat Storage during rsync operations on replicated Gluster volumes that resulted in an XFS shutdown. A testing condition has been added to the VFS code, preventing hard links to deleted files from being created. BZ# 956933 A bug in the lpfc driver allowed re-enabling of an interrupt from the interrupt context so the interrupt handler was able to re-enter the interrupt context. The interrupt context re-entrance problem led to kernel stack corruption which consequently resulted in a kernel panic. This update provides a patch addressing the re-entrance problem so the kernel stack corruption and the subsequent kernel panic can no longer occur under these circumstances. BZ# 960410 Previously, when open(2) system calls were processed, the GETATTR routine did not check to see if valid attributes were also returned. As a result, the open() call succeeded with invalid attributes instead of failing in such a case. This update adds the missing check, and the open() call succeeds only when valid attributes are returned. BZ# 960416 Previously, an NFS RPC task could enter a deadlock and become unresponsive if it was waiting for an NFSv4 state serialization lock to become available and the session slot was held by the NFSv4 server. This update fixes this problem along with the possible race condition in the pNFS return-on-close code. The NFSv4 client has also been modified to not accept delegated OPEN operations if a delegation recall is in effect. The client now also reports NFSv4 servers that try to return a delegation when the client is using the CLAIM_DELEGATE_CUR open mode. BZ# 960419 Previously, the fsync(2) system call incorrectly returned the EIO (Input/Output) error instead of the ENOSPC (No space left on device) error. This was caused by incorrect error handling in the page cache. This problem has been fixed and the correct error value is now returned. BZ# 960424 In the RPC code, when a network socket backed up due to high network traffic, a timer was set causing a retransmission, which in turn could cause an even larger amount of network traffic to be generated. To prevent this problem, the RPC code now waits for the socket to empty instead of setting the timer. BZ# 962367 A rare race condition between the "devloss" timeout and the discovery state machine could trigger a bug in the lpfc driver that nested two levels of spin locks in reverse order.
The reverse order of spin locks led to a deadlock situation and the system became unresponsive. With this update, a patch addressing the deadlock problem has been applied and the system no longer hangs in this situation. BZ# 964960 When attempting to deploy a virtual machine on a hypervisor with multiple NICs and macvtap devices, a kernel panic could occur. This happened because the macvtap driver did not gracefully handle a situation when the macvlan_port.vlans list was empty and returned a NULL pointer. This update applies a series of patches which fix this problem using a read-copy-update (RCU) mechanism and by preventing the driver from returning a NULL pointer if the list is empty. The kernel no longer panics in this scenario. Users should upgrade to these updated packages, which contain backported patches to correct these issues. The system must be rebooted for this update to take effect. 5.135.4. RHBA-2013:0768 - kernel bug fix update Updated kernel packages that fix several bugs are now available for Red Hat Enterprise Linux 6 Extended Update Support. The kernel packages contain the Linux kernel, the core of any Linux operating system. Bug Fixes BZ# 911266 The Intel 5520 and 5500 chipsets do not properly handle remapping of MSI and MSI-X interrupts. If the interrupt remapping feature is enabled on the system with such a chipset, various problems and service disruption could occur (for example, a NIC could stop receiving frames), and the "kernel: do_IRQ: 7.71 No irq handler for vector (irq -1)" error message appears in the system logs. As a workaround to this problem, it has been recommended to disable the interrupt remapping feature in the BIOS on such systems, and many vendors have updated their BIOS to disable interrupt remapping by default. However, the problem is still being reported by users whose BIOS level does not have this feature properly turned off. Therefore, this update modifies the kernel to check if the interrupt remapping feature is enabled on these systems and to provide users with a warning message advising them to turn off the feature and update the BIOS. BZ# 920264 The NFS code implements the "silly rename" operation to handle an open file that is held by a process while another process attempts to remove it. The "silly rename" operation works according to the "delete on last close" semantics so the inode of the file is not released until the last process that opens the file closes it. An update of the NFS code broke the mechanism that prevented an NFS client from deleting a silly-renamed entry. This affected the "delete on last close" semantics and silly-renamed files could be deleted by any process while the files were open for I/O by another process. As a consequence, the process reading the file failed with the "ESTALE" error code. This update modifies the way the NFS code handles dentries of silly-renamed files, and silly-renamed files cannot be deleted until the last process that has the file open for I/O closes it. BZ# 920267 The NFSv4 code uses byte range locks to simulate the flock() function, which is used to apply or remove an exclusive advisory lock on an open file. However, using the NFSv4 byte range locks precludes the possibility of opening a file with read-only permissions and subsequently applying an exclusive advisory lock on the file. A patch broke a mechanism used to verify the mode of the open file. As a consequence, the system became unresponsive and the system logs filled with a "kernel: nfs4_reclaim_open_state: Lock reclaim failed!"
error message if the file was open with read-only permissions and an attempt to apply an exclusive advisory lock was made. This update modifies the NFSv4 code to check the mode of the open file before attempting to apply the exclusive advisory lock. The "-EBADF" error code is returned if the type of the lock does not match the file mode. BZ# 921960 When running a high thread workload of small-sized files on an XFS file system, the system could become unresponsive or a kernel panic could occur. This occurred because the xfsaild daemon had a subtle code path that led to lock recursion on the xfsaild lock when a buffer in the AIL was already locked and an attempt was made to force the log to unlock it. This patch removes the dangerous code path and queues the log force to be invoked from a safe locking context with respect to xfsaild. This patch also fixes the race condition between buffer locking and buffer pinned state that exposed the original problem by rechecking the state of the buffer after a lock failure. The system no longer hangs and the kernel no longer panics in this scenario. BZ# 923850 Previously, the NFS Lock Manager (NLM) did not resend blocking lock requests after NFSv3 server reboot recovery. As a consequence, when an application was running on an NFSv3 mount and requested a blocking lock, the application received an "-ENOLCK" error. This patch ensures that NLM always resends blocking lock requests after the grace period has expired. BZ# 924838 A bug in the anon_vma lock in the mprotect() function could cause virtual memory area (vma) corruption. The bug has been fixed so that virtual memory area corruption no longer occurs in this scenario. All users are advised to upgrade to these updated packages, which fix these bugs. The system must be rebooted for this update to take effect. 5.135.5. RHSA-2012:1366 - Important: kernel security and bug fix update Updated kernel packages that fix one security issue and several bugs are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having important security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links associated with each description below. The kernel packages contain the Linux kernel, the core of any Linux operating system. Security Fixes CVE-2012-3412 , Important A flaw was found in the way socket buffers (skb) requiring TSO (TCP segment offloading) were handled by the sfc driver. If the skb did not fit within the minimum size of the transmission queue, the network card could repeatedly reset itself. A remote attacker could use this flaw to cause a denial of service. Red Hat would like to thank Ben Hutchings of Solarflare for reporting this issue. Bug Fixes BZ# 856316 In Fibre Channel fabrics with large zones, the automatic port rescan on incoming Extended Link Service (ELS) frames and any adapter recovery could cause high traffic, in particular if many Linux instances shared a host bus adapter (HBA), which is common on IBM System z architecture. This could lead to various failures; for example, name server requests, or port or adapter recovery, could fail. With this update, ports are re-scanned only when setting an adapter online or on manual user-triggered writes to the sysfs attribute "port_rescan". BZ# 856686 Under certain circumstances, a system crash could result in data loss on XFS file systems.
If files were created immediately before the file system was left to idle for a long period of time and then the system crashed, those files could appear as zero-length once the file system was remounted. This occurred even if a sync or fsync was run on the files. This was because XFS was not correctly idling the journal, and therefore it incorrectly replayed the inode allocation transactions upon mounting after the system crash, which zeroed the file size. This problem has been fixed by re-instating the periodic journal idling logic to ensure that all metadata is flushed within 30 seconds of modification, and the journal is updated to prevent incorrect recovery operations from occurring. BZ# 856703 On architectures with the 64-bit cputime_t type, it was possible to trigger the "divide by zero" error, notably on long-lived processes. A patch has been applied to address this problem, and the "divide by zero" error no longer occurs under these circumstances. BZ# 857012 The kernel provided by the Red Hat Enterprise Linux 6.3 release included an unintentional kernel ABI (kABI) breakage with regard to the "contig_page_data" symbol. Unfortunately, this breakage did not cause the checksums to change. As a result, drivers using this symbol could silently corrupt memory on the kernel. This update reverts the behavior. Note Any driver compiled with the "contig_page_data" symbol during the early release of Red Hat Enterprise Linux 6.3 needs to be recompiled for this kernel. BZ# 857334 A race condition could occur between page table sharing and virtual memory area (VMA) teardown. As a consequence, multiple "bad pmd" warning messages were displayed and "kernel BUG at mm/filemap.c:129" was reported while shutting down applications that share memory segments backed by huge pages. With this update, the VM_MAYSHARE macro is explicitly cleared during the unmap_hugepage_range() call under the i_mmap_lock. This makes the VMA ineligible for sharing and avoids the race condition. After using shared segments backed by huge pages, applications like databases and caches shut down correctly, with no crash. BZ# 857854 A kernel panic could occur when using the be2net driver. This was because the Bottom Half (BH) was enabled even if the Interrupt ReQuest (IRQ) was already disabled. With this update, the BH is disabled in callers of the be_process_mcc() function and the kernel no longer crashes in this scenario. Note Note that, in certain cases, it is possible to experience the network card being unresponsive after installing this update. A future update will correct this problem. BZ# 858284 The Stream Control Transmission Protocol (SCTP) IPv6 source address selection logic did not take the preferred source address into consideration. With this update, the source address is chosen from the routing table by taking this aspect into consideration. This brings the SCTP source address selection on par with IPv4. BZ# 858285 Prior to this update, it was not possible to set IPv6 source addresses in routes as it was possible with IPv4. With this update, users can select the preferred source address for a specific IPv6 route with the "src" option of the "ip -6 route" command. All users of kernel should upgrade to these updated packages, which contain backported patches to correct these issues. The system must be rebooted for this update to take effect. 5.135.6.
RHSA-2012:1304 - Moderate: kernel security and bug fix update Updated kernel packages that fix multiple security issues and several bugs are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links associated with each description below. The kernel packages contain the Linux kernel, the core of any Linux operating system. Security Fixes CVE-2012-2313 , Low A flaw was found in the way the Linux kernel's dl2k driver, used by certain D-Link Gigabit Ethernet adapters, restricted IOCTLs. A local, unprivileged user could use this flaw to issue potentially harmful IOCTLs, which could cause Ethernet adapters using the dl2k driver to malfunction (for example, losing network connectivity). CVE-2012-2384 , Moderate An integer overflow flaw was found in the i915_gem_do_execbuffer() function in the Intel i915 driver in the Linux kernel. A local, unprivileged user could use this flaw to cause a denial of service. This issue only affected 32-bit systems. CVE-2012-2390 , Moderate A memory leak flaw was found in the way the Linux kernel's memory subsystem handled resource clean up in the mmap() failure path when the MAP_HUGETLB flag was set. A local, unprivileged user could use this flaw to cause a denial of service. CVE-2012-3430 , Low A flaw was found in the way the msg_namelen variable in the rds_recvmsg() function of the Linux kernel's Reliable Datagram Sockets (RDS) protocol implementation was initialized. A local, unprivileged user could use this flaw to leak kernel stack memory to user-space. CVE-2012-3552 , Moderate A race condition was found in the way access to inet->opt ip_options was synchronized in the Linux kernel's TCP/IP protocol suite implementation. Depending on the network facing applications running on the system, a remote attacker could possibly trigger this flaw to cause a denial of service. A local, unprivileged user could use this flaw to cause a denial of service regardless of the applications the system runs. Red Hat would like to thank Hafid Lin for reporting CVE-2012-3552, and Stephan Mueller for reporting CVE-2012-2313. The CVE-2012-3430 issue was discovered by the Red Hat InfiniBand team. Bug Fixes BZ# 812962 Previously, after a crash, preparing to switch to the kdump kernel could in rare cases race with IRQ migration, causing a deadlock of the ioapic_lock variable. As a consequence, kdump became unresponsive. The race condition has been fixed, and switching to kdump no longer causes hangs in this scenario. BZ# 842757 The xmit packet size was previously 64K, exceeding the hardware capability of the be2net card because the size did not account for the Ethernet header. The adapter was therefore unable to process xmit requests exceeding this size, produced error messages and could become unresponsive. To prevent these problems, GSO (Generic Segmentation Offload) maximum size has been reduced to account for the Ethernet header. BZ# 842982 When the netconsole module was configured over bridge and the "service network restart" command was executed, a deadlock could occur, resulting in a kernel panic. This was caused by recursive rtnl locking by both bridge and netconsole code during network interface unregistration. With this update, the rtnl lock usage is fixed, and the kernel no longer crashes in this scenario. 
BZ# 842984
When using virtualization with the netconsole module configured over the main system bridge, guests could not be added to the bridge, because TAP interfaces did not support netpoll. This update adds netpoll support to the TUN/TAP interfaces so that bridge devices in virtualization setups can use netconsole.

BZ# 843102
A comparison between signed and unsigned values could, under certain circumstances, cause the resched_task() routine to be called superfluously, wasting several cycles in the scheduler. This problem has been fixed, preventing the unnecessary cycles in the scheduler.

BZ# 845464
If RAID1 or RAID10 was used under LVM or some other stacking block device, it was possible to enter a deadlock during a resync or recovery operation. Consequently, md RAID devices could become unresponsive on certain workloads. This update avoids the deadlock so that md RAID devices work as expected under these circumstances.

BZ# 846216
Previously, the bond_alb_xmit() function locked soft interrupt requests (IRQs) even in contexts where soft IRQs were already disabled. This could cause the system to become unresponsive or terminate unexpectedly. With this update, such IRQs are no longer disabled, and the system no longer hangs or crashes in this scenario.

BZ# 846832
Previously, the TCP socket bound to the NFS server contained a stale skb_hints socket buffer. Consequently, the kernel could terminate unexpectedly. A patch has been provided to address this issue and skb_hints is now properly cleared from the socket, thus preventing this bug.

BZ# 846836
A race condition could occur due to an incorrect locking scheme in the code for software RAID. Consequently, this could cause the mkfs utility to become unresponsive when creating an ext4 file system on software RAID5. This update introduces a locking scheme in the handle_stripe() function, which ensures that the race condition no longer occurs.

BZ# 846838
When a device is added to the system at runtime, the AMD IOMMU driver initializes the necessary data structures to handle translation for it. Previously, however, the per-device dma_ops structure types were not changed to point to the AMD IOMMU driver, so mapping was not performed and direct memory access (DMA) ended with the IO_PAGE_FAULT message. This consequently led to networking problems. With this update, the structure types point correctly to the AMD IOMMU driver, and networking works as expected when the AMD IOMMU driver is used.

BZ# 846839
Due to an error in the dm-mirror driver, when using LVM mirrors on disks with discard support (typically SSD disks), repairing such disks caused the system to terminate unexpectedly. The error in the driver has been fixed and repairing disks with discard support is now successful.

BZ# 847042
On Intel systems with Pause Loop Exiting (PLE), or AMD systems with Pause Filtering (PF), it was possible for larger multi-CPU KVM guests to experience slowdowns and soft lock-ups. Due to a boundary condition in kvm_vcpu_on_spin, all the VCPUs could try to yield to VCPU0, causing contention on the run queue lock of the physical CPU where the guest's VCPU0 is running. This update eliminates the boundary condition in kvm_vcpu_on_spin.

BZ# 847045
Previously, using the e1000e driver could lead to a kernel panic. This was caused by a NULL pointer dereference that occurred if the adapter was being closed and reset simultaneously. The source code of the driver has been modified to address this problem, and the kernel no longer crashes in this scenario.
BZ# 847727
On PowerPC architecture, the "top" utility displayed incorrect values for the CPU idle time, delays and workload. This was caused by a previous update that used jiffies for the I/O wait and idle time, but the change did not take into account that jiffies and CPU time are represented by different units. These differences are now taken into account, and the "top" utility displays correct values on PowerPC architecture.

BZ# 847945
Due to a missing return statement, the nfs_attr_use_mounted_on_file() function returned a wrong value. As a consequence, redundant ESTALE errors could potentially be returned. This update adds the proper return statement to nfs_attr_use_mounted_on_file(), thus preventing this bug.

Note
This bug only affected NFS version 4 file systems.

BZ# 849051
A deadlock sometimes occurred between the dlm_controld daemon closing a lowcomms connection through the configfs file system and the dlm_send process looking up the address for a new connection in configfs. With this update, the node addresses are saved within the lowcomms code so that the lowcomms work queue does not need to use configfs to get a node address.

BZ# 849551
Performance of O_DSYNC on the GFS2 file system was affected when only data (not metadata such as file size) was dirtied as a result of a write system call. This was because O_DSYNC writes always behaved in the same way as O_SYNC writes. With this update, O_DSYNC writes only write back data if the inode's metadata is not dirty. This leads to a considerable performance improvement in this case. Note that this problem does not affect data integrity. The same issue also applies to the pairing of write and fdatasync calls.

BZ# 851444
If a mirror or redirection action is configured to cause packets to go to another device, the classifier holds a reference count. However, the code previously assumed that the administrator cleaned up all redirections before removing the device. Packets were therefore dropped if the mirrored device was not present, and connectivity to the host could be lost. To prevent such problems, a notifier and cleanup are now run during the unregister action. Packets are no longer dropped if a mirrored device is not present.

BZ# 851445
The kernel contains a rule to blacklist direct memory access (DMA) modes for "2GB ATA Flash Disk" devices. However, this device ID string did not contain a space at the beginning of the name. Due to this, the rule failed to match the device and failed to disable DMA modes. With this update, the string correctly reads " 2GB ATA Flash Disk", and the rule can be matched as expected.

All users of kernel should upgrade to these updated packages, which contain backported patches to correct these issues. The system must be rebooted for this update to take effect.

5.135.7. RHBA-2012:1104 - kernel bug fix update

Updated kernel packages that fix four bugs are now available for Red Hat Enterprise Linux 6.
The kernel packages contain the Linux kernel, the core of any Linux operating system.

Bug Fixes

BZ# 836904
Previously, futex operations on read-only (RO) memory maps did not work correctly. This broke workloads that had one or more reader processes performing the FUTEX_WAIT operation on a futex within a read-only shared file mapping and a writer process that had a writable mapping performing the FUTEX_WAKE operation. With this update, the FUTEX_WAIT operation can be performed with a RO MAP_PRIVATE mapping, and the waiting process is successfully woken if another process updates the region of the underlying mapped file.
BZ# 837218
When removing a bonding module, the bonding driver uses code separate from the net device operations to clean up the VLAN code. Recent changes to the kernel introduced a bug which caused a kernel panic if the vlan module was removed after the bonding module had been removed. To fix this problem, the VLAN group removal operations found in the bonding kill_vid path are now duplicated in alternate paths which are used when removing a bonding module.

BZ# 837227
The bonding method for adding VLAN Identifiers (VIDs) did not always add the VID to a slave VLAN group. When the NETIF_F_HW_VLAN_FILTER flag was not set on a slave, the bonding module could not add new VIDs to it. This could cause networking problems and make the system unreachable even if NIC messages did not indicate any problems. This update changes the bond VID add path to always add a new VID to the slaves (if the VID does not exist). This ensures that networking problems no longer occur in this scenario.

BZ# 837843
Previously, reference counting was imbalanced in the slave add and remove paths for bonding. If a network interface controller (NIC) did not support the NETIF_F_HW_VLAN_FILTER flag, the bond_add_vlans_on_slave() and bond_del_vlans_on_slave() functions did not work properly, which could lead to a kernel panic if the VLAN module was removed while running. The underlying source code for adding and removing a slave and a VLAN has been revised and now also contains a common path, so that kernel crashes no longer occur in the described scenario.

All users of kernel are advised to upgrade to these updated packages, which fix these bugs. The system must be rebooted for this update to take effect.

5.135.8. RHSA-2013:0223 - Moderate: kernel security and bug fix update

Updated kernel packages that fix three security issues and multiple bugs are now available for Red Hat Enterprise Linux 6.
The Red Hat Security Response Team has rated this update as having moderate security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links associated with each description below.
The kernel packages contain the Linux kernel, the core of any Linux operating system.

Security Fixes

CVE-2012-4398, Moderate
It was found that a deadlock could occur in the Out of Memory (OOM) killer. A process could trigger this deadlock by consuming a large amount of memory, and then causing request_module() to be called. A local, unprivileged user could use this flaw to cause a denial of service (excessive memory consumption).

CVE-2012-4461, Moderate
A flaw was found in the way the KVM (Kernel-based Virtual Machine) subsystem handled guests attempting to run with the X86_CR4_OSXSAVE CPU feature flag set. On hosts without the XSAVE CPU feature, a local, unprivileged user could use this flaw to crash the host system. (The "grep --color xsave /proc/cpuinfo" command can be used to verify whether your system has the XSAVE CPU feature.)

CVE-2012-4530, Low
A memory disclosure flaw was found in the way the load_script() function in the binfmt_script binary format handler handled excessive recursions. A local, unprivileged user could use this flaw to leak kernel stack memory to user-space by executing specially-crafted scripts.

Red Hat would like to thank Tetsuo Handa for reporting CVE-2012-4398, and Jon Howell for reporting CVE-2012-4461.
Bug Fixes

BZ# 846840
When an NFSv4 client received a read delegation, a race between the OPEN and DELEGRETURN operations could occur. If the DELEGRETURN operation was processed first, the NFSv4 client treated the delegation returned by the following OPEN as a new delegation. Also, the NFSv4 client did not correctly handle errors caused by requests that used a bad or revoked delegation state ID. As a result, applications running on the client could receive spurious EIO errors. This update applies a series of patches that fix the NFSv4 code so an NFSv4 client recovers correctly in the described situations instead of returning errors to applications.

BZ# 865305
Filesystem in Userspace (FUSE) did not implement scatter-gather direct I/O optimally. Consequently, the kernel had to process an extensive number of FUSE requests, which had a negative impact on system performance. This update applies a set of patches that improve internal request management for features such as readahead. FUSE direct I/O overhead has been significantly reduced to minimize negative effects on system performance.

BZ# 876090
In case of a regular CPU hot plug event, the kernel does not keep the original cpuset configuration and can reallocate running tasks to active CPUs. Previously, the kernel treated switching between suspend and resume modes as a regular CPU hot plug event, which could have a significant negative impact on system performance in certain environments such as SMP KVM virtualization. When resuming an SMP KVM guest from suspend mode, the libvirtd daemon and all its child processes were pinned to a single CPU (the boot CPU) so that all VMs used only the single CPU. This update applies a set of patches which ensure that the kernel does not modify cpusets during suspend and resume operations. The system is now resumed in the exact state it was in before suspending, without any performance decrease.

BZ# 878774
Previously, the kernel had no way to distinguish between a device I/O failure due to a transport problem and a failure as a result of command timeout expiration. I/O errors always resulted in a device being set offline, and the device had to be brought online manually even though the I/O failure occurred due to a transport problem. With this update, the SCSI driver has been modified and a new SDEV_TRANSPORT_OFFLINE state has been added to help distinguish transport problems from other I/O failure causes. Transport errors are now handled differently and storage devices can now recover from these failures without user intervention.

BZ# 880085
Previously, the IP over Infiniband (IPoIB) driver maintained state information about neighbors on the network by attaching it to the core network's neighbor structure. However, due to a race condition between the freeing of the core network neighbor struct and the freeing of the IPoIB network struct, a use-after-free condition could occur, resulting in either a kernel oops or 4 or 8 bytes of kernel memory being zeroed when it was not supposed to be. These patches decouple the IPoIB neighbor struct from the core networking stack's neighbor struct so that there is no race between the freeing of one and the freeing of the other.

BZ# 880928
When a new rpc_task is created, the code takes a reference to rpc_cred and sets the task->tk_cred pointer to it. After the call completes, the resources held by the rpc_task are freed. Previously, however, after the rpc_cred was released, the pointer to it was not zeroed out.
This led to an rpc_cred reference count underflow, and consequently to a kernel panic. With this update, the pointer to rpc_cred is correctly zeroed out, which prevents a kernel panic from occurring in this scenario.

BZ# 884422
Previously, the HP Smart Array driver (hpsa) used the target reset functionality. However, HP Smart Array logical drives do not support the target reset functionality. Therefore, if the target reset failed, the logical drive was taken offline with a file system error. The hpsa driver has been updated to use the LUN reset functionality instead of target reset, which is supported by these drives.

BZ# 886618
The bonding driver previously did not honor the maximum Generic Segmentation Offload (GSO) length of packets and segments requested by the underlying network interface. This caused the firmware of the underlying NIC, such as be2net, to become unresponsive. This update modifies the bonding driver to set up the lowest gso_max_size and gso_max_segs values of network devices while attaching and detaching the devices as slaves. The network driver no longer hangs, and network traffic now proceeds as expected in setups using a bonding interface.

BZ# 886760
Previously, the interrupt handlers of the qla2xxx driver could clear pending interrupts right after the IRQ lines were attached during system start-up. Consequently, the kernel could miss the interrupt that reported completion of the link initialization, and the qla2xxx driver then failed to detect all attached LUNs. With this update, the qla2xxx driver has been modified to no longer clear interrupt bits after attaching the IRQ lines. The driver now correctly detects all attached LUNs as expected.

BZ# 888215
When TCP segment offloading (TSO) or jumbo packets are used on the Broadcom BCM5719 network interface controller (NIC) with multiple TX rings, small packets can be starved for resources by the simple round-robin hardware scheduling of these TX rings, thus causing lower network performance. To ensure reasonable network performance for all NICs, multiple TX rings are now disabled by default.

BZ# 888818
Due to insufficient handling of a dead Input/Output Controller (IOC), the mpt2sas driver could fail Enhanced I/O Error Handling (EEH) recovery for certain PCI bus failures on 64-bit IBM PowerPC machines. With this update, when a dead IOC is detected, the EEH recovery routine has more time to resolve the failure, and a controller in a non-operational state is removed.

BZ# 891580
A possible race between the n_tty_read() and reset_buffer_flags() functions could result in a NULL pointer dereference in the n_tty_read() function under certain circumstances. As a consequence, a kernel panic could be triggered when interrupting a current task on a serial console. This update modifies the tty driver to use a spin lock to prevent parallel access to shared variables. A NULL pointer dereference causing a kernel panic can no longer occur in this scenario.

All users should upgrade to these updated packages, which contain backported patches to correct these issues and bugs. The system must be rebooted for this update to take effect.

5.135.9. RHSA-2012:1064 - Important: kernel security and bug fix update

Updated kernel packages that fix two security issues and several bugs are now available for Red Hat Enterprise Linux 6.
The Red Hat Security Response Team has rated this update as having important security impact.
Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links associated with each description below.
The kernel packages contain the Linux kernel, the core of any Linux operating system.

Security Fixes

CVE-2012-2744, Important
A NULL pointer dereference flaw was found in the nf_ct_frag6_reasm() function in the Linux kernel's netfilter IPv6 connection tracking implementation. A remote attacker could use this flaw to send specially-crafted packets to a target system that is using IPv6 and also has the nf_conntrack_ipv6 kernel module loaded, causing it to crash.

CVE-2012-2745, Moderate
A flaw was found in the way the Linux kernel's key management facility handled replacement session keyrings on process forks. A local, unprivileged user could use this flaw to cause a denial of service.

Red Hat would like to thank an anonymous contributor working with the Beyond Security SecuriTeam Secure Disclosure program for reporting CVE-2012-2744.

Bug Fixes

BZ# 832359
Previously introduced firmware files required for new Realtek chipsets contained an invalid prefix ("rtl_nic_") in the file names, for example "/lib/firmware/rtl_nic/rtl_nic_rtl8168d-1.fw". This update corrects these file names. For example, the aforementioned file is now correctly named "/lib/firmware/rtl_nic/rtl8168d-1.fw".

BZ# 832363
This update blacklists the ADMA428M revision of the 2GB ATA Flash Disk device. This is due to data corruption occurring on said device when the Ultra-DMA 66 transfer mode is used. When the "libata.force=5:pio0,6:pio0" kernel parameter is set, the aforementioned device works as expected.

BZ# 832365
On Red Hat Enterprise Linux 6, mounting an NFS export from a Windows 2012 server failed because the Windows server supports only minor version 1 (v4.1) of the NFS version 4 protocol, along with versions 2 and 3. The lack of minor version 0 (v4.0) support caused Red Hat Enterprise Linux 6 clients to fail instead of falling back to version 3 as expected. This update fixes this bug and mounting an NFS export works as expected.

BZ# 833034
On ext4 file systems, when fallocate() failed to allocate blocks due to the ENOSPC condition (no space left on device) for a file larger than 4 GB, the size of the file became corrupted and, consequently, caused file system corruption. This was due to a missing cast operator in the "ext4_fallocate()" function. With this update, the underlying source code has been modified to address this issue, and file system corruption no longer occurs.

Users should upgrade to these updated packages, which contain backported patches to correct these issues. The system must be rebooted for this update to take effect.

5.135.10. RHBA-2012:1199 - kernel bug fix update

Updated kernel packages that fix two bugs are now available for Red Hat Enterprise Linux 6.
The kernel packages contain the Linux kernel, the core of any Linux operating system.
When an NTP server asserts the STA_INS flag (Leap Second Insert), the kernel starts an hrtimer (high-resolution timer) with a countdown clock. This hrtimer expires at the end of the current month, at midnight UTC, and inserts a second into the kernel timekeeping structures. A scheduled leap second occurred on June 30, 2012, at midnight UTC.

Bug Fixes

BZ# 840950
Previously in the kernel, when the leap second hrtimer was started, it was possible that the kernel livelocked on the xtime_lock variable.
This update fixes the problem by using a mixture of separate subsystem locks (timekeeping and ntp) and removing the xtime_lock variable, thus avoiding the livelock scenarios that could occur in the kernel.

BZ# 847366
After the leap second was inserted, applications calling system calls that used futexes consumed almost 100% of available CPU time. This occurred because the kernel's timekeeping structure update did not properly update these futexes. The futexes repeatedly expired, re-armed, and then expired immediately again. This update fixes the problem by properly updating the futex expiration times by calling the clock_was_set_delayed() function, an interrupt-safe variant of the clock_was_set() function.

All users are advised to upgrade to these updated packages, which fix these bugs. The system must be rebooted for this update to take effect.

5.135.11. RHSA-2012:1156 - Moderate: kernel security and bug fix update

Updated kernel packages that fix two security issues and several bugs are now available for Red Hat Enterprise Linux 6.
The Red Hat Security Response Team has rated this update as having moderate security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links associated with each description below.
The kernel packages contain the Linux kernel, the core of any Linux operating system.

Security Fixes

CVE-2012-2383, Moderate
An integer overflow flaw was found in the i915_gem_execbuffer2() function in the Intel i915 driver in the Linux kernel. A local, unprivileged user could use this flaw to cause a denial of service. This issue only affected 32-bit systems.

CVE-2011-1078, Low
A missing initialization flaw was found in the sco_sock_getsockopt_old() function in the Linux kernel's Bluetooth implementation. A local, unprivileged user could use this flaw to cause an information leak.

Red Hat would like to thank Vasiliy Kulikov of Openwall for reporting the CVE-2011-1078 issue.

Bug Fixes

BZ# 832360
A bug in the writeback livelock avoidance scheme could result in some dirty data not being written to disk during a sync operation. In particular, this could occasionally occur at unmount time, when previously written file data was not synced, and was unavailable after the file system was remounted. Patches have been applied to address this issue, and all dirty file data is now synced to disk at unmount time.

BZ# 838821
During the update of the be2net driver between Red Hat Enterprise Linux 6.1 and 6.2, the NETIF_F_GRO flag was incorrectly removed, and the GRO (Generic Receive Offload) feature was therefore disabled by default. In OpenVZ kernels based on Red Hat Enterprise Linux 6.2, this led to a significant traffic decrease. To prevent this problem, the NETIF_F_GRO flag has been included in the underlying source code.

BZ# 840023
Previously, the size of the multicast IGMP (Internet Group Management Protocol) snooping hash table for a bridge was limited to 256 entries even though the maximum is 512. This was due to the hash table size being incorrectly compared to the maximum hash table size, hash_max, and the kernel could produce an error message. With this update, the hash table size is correctly compared to the hash_max value, and the error message no longer occurs under these circumstances.
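For reference, the hash_max limit mentioned above is exposed through sysfs on a per-bridge basis (a minimal sketch; the bridge name br0 is an illustrative assumption):

cat /sys/class/net/br0/bridge/hash_max
echo 512 > /sys/class/net/br0/bridge/hash_max

The first command prints the current maximum size of the multicast snooping hash table; the second raises it to the 512-entry maximum that this fix makes reachable.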
BZ# 840052
In the ext4 file system, splitting an unwritten extent while using Direct I/O could fail to mark the modified extent as dirty, resulting in multiple extents claiming to map the same block. This could lead to the kernel or fsck reporting errors due to multiply claimed blocks being detected in certain inodes. In the ext4_split_unwritten_extents() function used for Direct I/O, the buffer which contains the modified extent is now properly marked as dirty in all cases. Errors due to multiply claimed blocks in inodes should no longer occur for applications using Direct I/O.

BZ# 840156
With certain switch peers and firmware, excessive link flaps could occur due to the way DCBX (Data Center Bridging Exchange) was handled. To prevent link flaps, changes were made to examine the capabilities in more detail and only initialize hardware if the capabilities had changed.

BZ# 841406
The CONFIG_CFG80211_WEXT configuration option previously defined in the KConfig of the ipw2200 driver was removed with a recent update. This led to a build failure of the driver. With this update, the driver no longer depends on the CONFIG_CFG80211_WEXT option, so it can build successfully.

BZ# 841411
Migrating virtual machines from Intel hosts that supported the VMX "Unrestricted Guest" feature to older hosts without this feature could result in KVM returning the "unhandled exit 80000021" error for guests in real mode. The underlying source code has been modified so that migration completes successfully on hosts where "Unrestricted Guest" is disabled or not supported.

BZ# 841579
A previous update changed the /proc/stat code to use the get_cpu_idle_time_us() and get_cpu_iowait_time_us() macros if dynamic ticks are enabled in the kernel. This could lead to problems on the IBM System z architecture, which defines the arch_idle_time() macro. For example, executing the "vmstat" command could fail with "Floating point exception" followed by a core dump. The underlying source code has been modified so that the arch_idle_time() macro is used for idle and iowait times, which prevents the mentioned problem.

BZ# 842429
Bond masters and slaves now have separate VLAN groups. As such, if a slave device incurred a network event that resulted in a failover, the VLAN device could process this event erroneously. With this update, when a VLAN is attached to a master device, it ignores events generated by slave devices so that the VLANs do not go down until the bond master does.

Users should upgrade to these updated packages, which contain backported patches to correct these issues. The system must be rebooted for this update to take effect.

5.135.12. RHSA-2012:1580 - Moderate: kernel security, bug fix and enhancement update

Updated kernel packages that fix multiple security issues, numerous bugs, and add one enhancement are now available for Red Hat Enterprise Linux 6.
The Red Hat Security Response Team has rated this update as having moderate security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links associated with each description below.
The kernel packages contain the Linux kernel, the core of any Linux operating system.

Security Fixes

CVE-2012-2375, Moderate
It was found that the RHSA-2012:0862 update did not correctly fix the CVE-2011-4131 issue. A malicious Network File System version 4 (NFSv4) server could return a crafted reply to a GETACL request, causing a denial of service on the client.
CVE-2012-4565, Moderate
A divide-by-zero flaw was found in the TCP Illinois congestion control algorithm implementation in the Linux kernel. If the TCP Illinois congestion control algorithm were in use (the sysctl net.ipv4.tcp_congestion_control variable set to "illinois"), a local, unprivileged user could trigger this flaw and cause a denial of service. A command for checking the active algorithm is shown near the end of this advisory.

CVE-2012-5517, Moderate
A NULL pointer dereference flaw was found in the way a new node's hot added memory was propagated to other nodes' zonelists. By utilizing this newly added memory from one of the remaining nodes, a local, unprivileged user could use this flaw to cause a denial of service.

CVE-2012-2100, Low
It was found that the initial release of Red Hat Enterprise Linux 6 did not correctly fix the CVE-2009-4307 issue, a divide-by-zero flaw in the ext4 file system code. A local, unprivileged user with the ability to mount an ext4 file system could use this flaw to cause a denial of service.

CVE-2012-4444, Low
A flaw was found in the way the Linux kernel's IPv6 implementation handled overlapping, fragmented IPv6 packets. A remote attacker could potentially use this flaw to bypass protection mechanisms (such as a firewall or intrusion detection system (IDS)) when sending network packets to a target system.

Red Hat would like to thank Antonios Atlasis working with Beyond Security's SecuriTeam Secure Disclosure program and Loganaden Velvindron of AFRINIC for reporting CVE-2012-4444. The CVE-2012-2375 issue was discovered by Jian Li of Red Hat, and CVE-2012-4565 was discovered by Rodrigo Freire of Red Hat.

Bug Fixes

BZ# 853950
The kernel allows high priority real time tasks, such as tasks scheduled with the SCHED_FIFO policy, to be throttled. Previously, the CPU stop tasks were scheduled as high priority real time tasks and could thus be throttled accordingly. However, the replenishment timer, which is responsible for clearing a throttle flag on tasks, could be pending on the just disabled CPU. This could lead to a situation where the throttled tasks were never scheduled to run. Consequently, if any such task was needed to complete the CPU disabling, the system became unresponsive. This update introduces a new scheduler class, which gives a task the highest possible system priority, and such a task cannot be throttled. The stop-task scheduling class is now used for the CPU stop tasks, and the system shutdown completes as expected in the scenario described.

BZ# 864826
A kernel panic occurred when the size of a block device was changed and an I/O operation was issued at the same time. This was because the direct and non-direct I/O code was written with the assumption that the block size would not change. This update introduces a new read-write lock, bd_block_size_semaphore. The lock is taken for read during I/O operations and for write when changing the block size of a device. As a result, block size cannot be changed while I/O is being submitted. This prevents the kernel from crashing in the described scenario.

BZ# 866470
A kernel update introduced a bug that caused RAID0 and linear arrays larger than 4 TB to be truncated to 4 TB when using 0.90 metadata. The underlying source code has been modified so that 0.90 RAID0 and linear arrays larger than 4 TB are no longer truncated in the md RAID layer.

BZ# 866795
The mlx4 driver must program the mlx4 card so that it is able to resolve which MAC addresses to listen to, including multicast addresses. Therefore, the mlx4 card keeps a list of trusted MAC addresses.
The driver used to perform updates to this list on the card by emptying the entire list and then programming in all of the addresses. Thus, whenever a user added or removed a multicast address or put the card into or out of promiscuous mode, the card's entire address list was rewritten. This introduced a race condition, which resulted in packet loss if a packet came in on an address the card should be listening to, but had not yet been reprogrammed to listen to. With this update, the driver no longer rewrites the entire list of trusted MAC addresses on the card but maintains a list of addresses that are currently programmed into the card. On address addition, only the new address is added to the end of the list, and on removal, only the to-be-removed address is removed from the list. The mlx4 card no longer experiences the described race condition and packets are no longer dropped in this scenario.

BZ# 871854
If there are no active threads using a semaphore, blocked threads should be unblocked. Previously, the R/W semaphore code waited for the semaphore counter as a whole to reach zero, which is incorrect because at least one thread is usually queued on the semaphore and the counter is marked to reflect this. As a consequence, the system could become unresponsive when an application used direct I/O on the XFS file system. With this update, only the count of active users of the semaphore is checked, thus preventing the hang in this scenario.

BZ# 874022
Due to an off-by-one error in a test condition in the bnx2x_start_xmit and bnx2x_tx_int functions, the TX queue of a NIC could, under some circumstances, be prevented from being resumed. Consequently, NICs using the bnx2x driver, such as Broadcom NetXtreme II 10G network devices, went offline. To bring the NIC back online, the bnx2x module had to be reloaded. This update corrects the test condition in the mentioned functions and the NICs using the bnx2x driver work as expected in the described scenario.

BZ# 876088
If an abort request times out to the virtual Fibre Channel adapter, the ibmvfc driver initiates a reset of the adapter. Previously, however, the ibmvfc driver incorrectly returned success to the eh_abort handler and then sent a response to the same command, which led to a kernel oops on IBM System p machines. This update ensures that both the abort request and the request being aborted are completed prior to exiting the eh_abort handler, and the kernel oops no longer occurs in this scenario.

BZ# 876101
The hugetlbfs file system implementation was missing a proper lock protection of enqueued huge pages at the gather_surplus_pages() function. Consequently, the hstate.hugepages_freelist list became corrupted, which caused a kernel panic. This update adjusts the code so that the spinlock protection used now assures atomicity and safety of enqueued huge pages when handling hstate.hugepages_freelist. The kernel no longer panics in this scenario.

BZ# 876487
A larger command descriptor block (CDB) is allocated for devices using Data Integrity Field (DIF) type 2 protection. The CDB was being freed in the sd_done() function, which resulted in a kernel panic if the command had to be retried in certain error recovery cases. With this update, the larger CDB is now freed in the sd_unprep_fn() function instead. This prevents the kernel panic from occurring.

BZ# 876491
The implementation of socket buffer (SKB) allocation for a NIC was node-aware, that is, memory was allocated on the node closest to the NIC.
This increased performance of the system because all DMA transfer was handled locally. This was a good solution for networks with a lower frame transmission rate, where the CPUs of the local node handled all the traffic of the single NIC well. However, when using 10Gb Ethernet devices, CPUs of one node usually do not handle all the traffic of a single NIC efficiently enough. Therefore, system performance was poor even though the DMA transfer was handled by the node local to the NIC. This update modifies the kernel to allow SKBs to be allocated on a node that runs applications receiving the traffic. This ensures that the NIC's traffic is handled by as many CPUs as needed, and since SKBs are accessed very frequently after allocation, the kernel can now operate much more efficiently even though the DMA can be transferred cross-node.

BZ# 876493
When performing PCI device assignment on AMD systems, a virtual machine using the assigned device could fail to boot because the device assignment had failed, leaving the device in an unusable state. This was due to an improper range check that omitted the last PCI device in a PCI subsystem or tree. The check has been fixed to include the full range of PCI devices in a PCI subsystem or tree. This bug fix avoids boot failures of a virtual machine when the last device in a PCI subsystem is assigned to a virtual machine on an AMD host system.

BZ# 876496
The mmap_rnd() function is expected to return a value in the [0x00000000 .. 0x000FF000] range on 32-bit x86 systems. This behavior is relied on to randomize the base load address of shared libraries by a bug fix resolving the CVE-2012-1568 issue. However, due to a signedness bug, the mmap_rnd() function could return values outside of the intended range. Consequently, the shared library base address could be less than one megabyte. This could cause binaries that use the MAP_FIXED mappings in the first megabyte of the process address space (typically, programs using vm86 functionality) to work incorrectly. This update modifies the mmap_rnd() function to no longer cast values returned by the get_random_int() function to the long data type. The aforementioned binaries now work correctly in this scenario.

BZ# 876499
Previously, XFS could, under certain circumstances, incorrectly read metadata from the journal during XFS log recovery. As a consequence, XFS log recovery terminated with an error message and prevented the file system from being mounted. This problem could result in a loss of data if the user forcibly "zeroed" the log to allow the file system to be mounted. This update ensures that metadata is read correctly from the log so that journal recovery completes successfully and the file system mounts as expected.

BZ# 876549
Some BIOS firmware versions could leave the "Frame Start Delay" bits of the PIPECONF register in test mode on selected Intel chipsets. Consequently, video output on certain Lenovo laptop series, such as T41x or T42x, became corrupted (for example, the screen appeared to be split and shifted to the right) after upgrading VBIOS from version 2130 to 2132. This update corrects the problem by resetting the "Frame Start Delay" bits for normal operation in the DRM driver. Video output of the previously affected Lenovo models is now correct.

Enhancement

BZ# 877950
The INET socket interface has been modified to send a warning message when the ip_options structure is allocated directly by a third-party module using the kmalloc() function.
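As referenced in the CVE-2012-4565 entry above, the active TCP congestion control algorithm can be verified with the sysctl utility (a minimal check; output of "illinois" would indicate the configuration affected by that flaw):

sysctl net.ipv4.tcp_congestion_control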
Users should upgrade to these updated kernel packages, which contain backported patches to correct these issues, fix these bugs and add this enhancement. The system must be rebooted for this update to take effect.

5.135.13. RHSA-2012:1426 - Moderate: kernel security and bug fix update

Updated kernel packages that fix multiple security issues and several bugs are now available for Red Hat Enterprise Linux 6.
The Red Hat Security Response Team has rated this update as having moderate security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links associated with each description below.
The kernel packages contain the Linux kernel, the core of any Linux operating system.

Security Fixes

CVE-2012-2133, Moderate
A use-after-free flaw was found in the Linux kernel's memory management subsystem in the way quota handling for huge pages was performed. A local, unprivileged user could use this flaw to cause a denial of service or, potentially, escalate their privileges.

CVE-2012-3511, Moderate
A use-after-free flaw was found in the madvise() system call implementation in the Linux kernel. A local, unprivileged user could use this flaw to cause a denial of service or, potentially, escalate their privileges.

CVE-2012-1568, Low
It was found that when running a 32-bit binary that uses a large number of shared libraries, one of the libraries would always be loaded at a predictable address in memory. An attacker could use this flaw to bypass the Address Space Layout Randomization (ASLR) security feature.

CVE-2012-3400, Low
Buffer overflow flaws were found in the udf_load_logicalvol() function in the Universal Disk Format (UDF) file system implementation in the Linux kernel. An attacker with physical access to a system could use these flaws to cause a denial of service or escalate their privileges.

Red Hat would like to thank Shachar Raindel for reporting CVE-2012-2133.

Bug Fixes

BZ# 865713
Previously, the I/O watchdog feature was disabled when Intel Enhanced Host Controller Interface (EHCI) devices were detected. This could cause incorrect detection of USB devices upon addition or removal. Also, in some cases, even though such devices were detected properly, they were non-functional. The I/O watchdog feature can now be enabled on the kernel command line, which improves hardware detection on underlying systems.

BZ# 864821
The usb_device_read() routine used the bus->root_hub pointer to determine whether or not the root hub was registered. However, this test was invalid because the pointer was set before the root hub was registered and remained set even after the root hub was unregistered and deallocated. As a result, the usb_device_read() routine accessed freed memory, causing a kernel panic; for example, on USB device removal. With this update, the hcd->rh_registered flag - which is set and cleared at the appropriate times - is used in the test, and the kernel panic no longer occurs in this scenario.

BZ# 853257
Previously, when a server attempted to shut down a socket, the svc_tcp_sendto() function set the XPT_CLOSE variable if the entire reply failed to be transmitted. However, before XPT_CLOSE could be acted upon, other threads could send further replies before the socket was really shut down. Consequently, data corruption could occur in the RPC record marker. With this update, send operations on a closed socket are stopped immediately, thus preventing this bug.
BZ# 853943
Previously, a race condition existed whereby device open could race with device removal (for example, when hot-removing a storage device), potentially leading to a kernel panic. This was due to a use-after-free error in the block device open path, which has been corrected by not referencing the "disk" pointer after it has been passed to the module_put() function.

BZ# 854476
Sometimes, the crypto allocation code could become unresponsive for 60 seconds or multiples thereof due to an incorrect notification mechanism. This could cause applications, like openswan, to become unresponsive. The notification mechanism has been improved to avoid such hangs.

BZ# 856106
Traffic to the NFS server could trigger a kernel oops in the svc_tcp_clear_pages() function. The source code has been modified, and the kernel oops no longer occurs in this scenario.

BZ# 860784
When a device was registered to a bus, a race condition could occur between the device being added to the list of devices of the bus and binding the device to a driver. As a result, the device could already be bound to a driver, which led to a warning and incorrect reference counting, and consequently to a kernel panic on device removal. To avoid the race condition, this update adds a check to identify an already bound device.

BZ# 865308
When I/O is issued through blk_execute_rq(), the blk_execute_rq_nowait() routine is called to perform various tasks. At first, the routine checks for a dead queue. Previously, however, if a dead queue was detected, the blk_execute_rq_nowait() function did not invoke the done() callback function. This resulted in blk_execute_rq() waiting indefinitely for a completion that was never issued. To avoid such hangs, the rq->end_io pointer is initialized to the done() callback before the queue state is verified.

BZ# 860942
The Out of Memory (OOM) killer killed processes outside of a memory cgroup when one or more processes inside that memory cgroup exceeded the "memory.limit_in_bytes" value. This was because when a copy-on-write fault happened on a Transparent Huge Page (THP), the 2 MB THP caused the cgroup to exceed the memory.limit_in_bytes value even though an individual 4 KB page would not have. With this update, the 2 MB THP is correctly split into 4 KB pages when the memory.limit_in_bytes value is exceeded. The OOM kill is delivered within the memory cgroup; tasks outside the memory cgroups are no longer killed by the OOM killer.

BZ# 857055
An unnecessary check for the RXCW.CW bit could cause the Intel e1000e NIC (Network Interface Controller) to not work properly. The check has been removed so that the Intel e1000e NIC now works as expected.

BZ# 860640
A kernel oops could occur due to a NULL pointer dereference upon USB device removal. The NULL pointer dereference has been fixed and the kernel no longer crashes in this scenario.

BZ# 864827
Previously, a use-after-free bug in the usbhid code caused a NULL pointer dereference. Consequent kernel memory corruption resulted in a kernel panic and could cause data loss. This update adds a NULL check to avoid these problems.

BZ# 841667
USB Request Blocks (URBs) coming from user space were not allowed to have transfer buffers larger than an arbitrary maximum. This could lead to various problems; for example, attempting to redirect certain USB mass-storage devices could fail. To avoid such problems, programs are now allowed to submit URBs of any size; if there is not sufficient contiguous memory available, the submission fails with an ENOMEM error.
In addition, to prevent programs from submitting a lot of small URBs and so using all the DMA-able kernel memory, this update also replaces the old limits on individual transfer buffers with a single global limit of 16 MB on the total amount of memory in use by the USB file system (usbfs).

BZ# 841824
A USB Human Interface Device (HID) can be disconnected at any time. If this happened right before or while the hiddev_ioctl() call was in progress, hiddev_ioctl() attempted to access the invalid hiddev->hid pointer. When the HID device was disconnected, the hiddev_disconnect() function called the hid_device_release() function, which frees the hid_device structure, but did not set the hiddev->hid pointer to NULL. If the deallocated memory region was re-used by the kernel, a kernel panic or memory corruption could occur. The hiddev->exist flag is now checked while holding the existancelock and hid_device is used only if such a device exists. As a result, the kernel no longer crashes in this scenario.

BZ# 863147
The MAC address stored in the driver's private structure is of the unsigned character data type but parameters of the strlcpy() function are of the signed character data type. This conversion of data types led to a change in the value. This changed value was passed to the upper layer and garbage characters were displayed when running the "iscsiadm -m iface" command. Consequently, the garbage characters in the MAC address led to boot failures of iSCSI devices. MAC addresses are now formatted using the sysfs_format_mac() function rather than strlcpy(), which prevents the described problems.

BZ# 861953
It is possible to receive data on multiple transports. Previously, however, data could be selectively acknowledged (SACKed) on a transport that had never received any data. This was against the SHOULD requirement in section 6.4 of the RFC 2960 standard. To comply with this standard, bundling of SACK operations is restricted to only those transports which have moved the ctsn of the association forward since the last SACK. As a result, only outbound SACKs on a transport that has received a chunk since the last SACK are bundled.

BZ# 861390
Bugs in the lpfc driver caused disruptive logical unit resets during fabric fault testing. The underlying source code has been modified so that the problem no longer occurs.

BZ# 852450
Previously, bnx2x devices did not disable links with a large number of RX errors and overruns, and such links could still be detected as active. This prevented the bonding driver from failing over to a working link. This update restores remote-fault detection, which periodically checks for remote faults on the MAC layer. In case the physical link appears to be up but an error occurs, the link is disabled. Once the error is cleared, the link is brought up again.

BZ# 860787
Various race conditions that led to indefinite log reservation hangs due to the xfsaild "idle" mode occurred in the XFS file system. This could lead to certain tasks being unresponsive; for example, the cp utility could become unresponsive under heavy workload. This update improves the Active Item List (AIL) pushing logic in xfsaild. Also, the log reservation algorithm and interactions with xfsaild have been improved. As a result, the aforementioned problems no longer occur in this scenario.

BZ# 858955
On dual port Mellanox hardware, the mlx4 driver was adding promiscuous mode to the correct port, but when attempting to remove promiscuous mode from a port, it always tried to remove it from port one.
It was therefore impossible to remove promiscuous mode from the second port, and promiscuous mode was incorrectly removed from port one even when that was not intended. With this update, the driver now properly attempts to remove promiscuous mode from port two when needed.

BZ# 858956
Mellanox hardware keeps a separate list of Ethernet hardware addresses it listens to depending on whether the Ethernet hardware address is unicast or multicast. Previously, the mlx4 driver was incorrectly adding multicast addresses to the unicast list. This caused unstable behavior in terms of whether or not the hardware actually listened to the requested addresses. This update fixes the problem by always putting multicast addresses on the multicast list and vice versa.

BZ# 859326
If a dirty GFS2 inode was being deleted but was in use by another node, its metadata was not written out before GFS2 checked for dirty buffers in the gfs2_ail_flush() function. GFS2 was relying on the inode_go_sync() function to write out the metadata when the other node tried to free the file. However, this never happened because GFS2 failed the error check. With this update, the inode is written out before calling the gfs2_ail_flush() function. If a process has the PF_MEMALLOC flag set, it does not start a new transaction to update the access time when it writes out the inode. The inode is marked as dirty to make sure that the access time is updated later unless the inode is being freed.

BZ# 859436
In a previous release of Red Hat Enterprise Linux, the new Mellanox packet steering architecture had been intentionally left out of the Red Hat kernel. With Red Hat Enterprise Linux 6.3, the new Mellanox packet steering architecture was merged into the Red Hat Mellanox driver. One merge detail was missing, and as a result, the multicast promiscuous flag on an interface was not checked during an interface reset to see if the flag was on prior to the reset and should be re-enabled after the reset. This update fixes the problem, so if an adapter is reset and the multicast promiscuous flag was set prior to the reset, the flag is now still set after the reset.

BZ# 860165
Previously, the default minimum entitled capacity of a virtual processor was 10%. This update changes the PowerPC architecture vector to support a lower minimum virtual processor capacity of 1%.

BZ# 858954
Previously, a cgroup or its hierarchy could only be modified under the cgroup_mutex master lock. This introduced a locking dependency on cred_guard_mutex from cgroup_mutex and completed a circular dependency, which involved cgroup_mutex, namespace_sem and workqueue, and led to a deadlock. As a consequence, many processes were unresponsive, and the system could eventually become unusable. This update introduces a new mutex, cgroup_root_mutex, which protects cgroup root modifications and is now used by mount options readers instead of the master lock. This breaks the circular dependency and avoids the deadlock.

All users of kernel should upgrade to these updated packages, which contain backported patches to correct these issues. The system must be rebooted for this update to take effect.

5.135.14. RHSA-2012:0862 - Moderate: Red Hat Enterprise Linux 6.3 kernel security, bug fix, and enhancement update

Updated kernel packages that fix two security issues, address several hundred bugs, and add numerous enhancements are now available as part of the ongoing support and maintenance of Red Hat Enterprise Linux version 6. This is the third regular update.
The Red Hat Security Response Team has rated this update as having moderate security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links associated with each description below.
The kernel packages contain the Linux kernel, the core of any Linux operating system.

Security Fixes

CVE-2011-1083, Moderate
A flaw was found in the way the Linux kernel's Event Poll (epoll) subsystem handled large, nested epoll structures. A local, unprivileged user could use this flaw to cause a denial of service.

CVE-2011-4131, Moderate
A malicious Network File System version 4 (NFSv4) server could return a crafted reply to a GETACL request, causing a denial of service on the client.

Red Hat would like to thank Nelson Elhage for reporting CVE-2011-1083, and Andy Adamson for reporting CVE-2011-4131.

Bug Fixes

BZ# 824025
Hotplugging SATA disks did not work properly and the system experienced various issues when hotplugging such devices. This update fixes several hotplugging issues in the kernel, and SAS hotplugging now works as expected.

BZ# 782374
Due to a bug in the hid_reset() function, a deadlock could occur when a Dell iDRAC controller was reset. Consequently, its USB keyboard or mouse device became unresponsive. A patch that fixes the underlying code has been provided to address this bug and the hangs no longer occur in the described scenario.

BZ# 781531
The AMD IOMMU driver used the wrong shift direction in the alloc_new_range() function. Consequently, the system could terminate unexpectedly or become unresponsive. This update fixes the code, and crashes and hangs no longer occur in the described scenario.

BZ# 781524
Previously, the AMD IOMMU (input/output memory management unit) driver could use the MSI address range for DMA (direct memory access) addresses. As a consequence, DMA could fail and spurious interrupts would occur if this address range was used. With this update, the MSI address range is reserved to prevent the driver from allocating wrong addresses, and DMA is now assured to work as expected in the described scenario.

BZ# 773705
Windows clients never send write requests larger than 64 KB, but the default size for write requests in Common Internet File System (CIFS) was set to a much larger value. Consequently, write requests larger than 64 KB caused various problems on certain third-party servers. This update lowers the default size for write requests to prevent this bug. The user can override this value to a larger one to get better performance.

BZ# 773522
Due to a race condition between the notify_on_release() function and task movement between cpuset or memory cgroup directories, a system deadlock could occur. With this update, the cgroup_wq workqueue has been created, and both the async_rebuild_domains() and check_for_release() functions used for task movements use it, thus fixing this bug.

BZ# 773517
Due to invalid calculations of the vruntime variable during task movement between cgroups, moving tasks between cgroups could cause very long scheduling delays. This update fixes this problem by setting the cfs_rq and curr parameters after holding the rq->lock lock.

BZ# 784671
The kernel code checks for conflicts when an application requests a specific port. If there is no conflict, the request is granted. However, the port auto-selection done by the kernel failed when all ports were bound, even if there was an available port with no conflicts.
With this update, the port auto-selection code has been fixed to properly use ports with no conflicts.

BZ# 784758
A bug in the try_to_wake_up() function could cause a task status change from TASK_DEAD to TASK_RUNNING in a race condition with an SMI (system management interrupt) or in a guest environment of a virtual machine. As a consequence, the exited task was scheduled again and a kernel panic occurred. This update fixes the race condition in the do_exit() function and the panic no longer occurs in the described scenario.

BZ# 785891
Previously, if more than a certain number of qdiscs (Classless Queuing Disciplines) using the autohandle mechanism were allocated, a soft lock-up error occurred. This update fixes the maximum loop count and adds the cond_resched() call in the loop, thus fixing this bug.

BZ# 785959
Prior to this update, the find_busiest_group() function used sched_group->cpu_power in the denominator of a fraction when its value was 0. Consequently, a kernel panic occurred. This update prevents the divide by zero in the kernel and the panic no longer occurs.

BZ# 772874
In the Common Internet File System (CIFS), the oplock break jobs and async callback handlers both use the SLOW-WORK workqueue, which has a finite pool of threads. Previously, these oplock break jobs could end up occupying all the running queues while waiting for a page lock, which prevented the callback required to free this page lock from being completed. This update separates the oplock break jobs into a separate workqueue, VERY-SLOW-WORK, allowing the callbacks to be completed successfully and preventing the deadlock.

BZ# 772317
Previously, network drivers that had Large Receive Offload (LRO) enabled by default caused the system to run slowly, lose frames, and eventually prevent communication when using software bridging. With this update, LRO is automatically disabled by the kernel on systems with a bridged configuration, thus preventing this bug (see the example below).

BZ# 772237
When transmitting a fragmented socket buffer (SKB), the qlge driver fills a descriptor with fragment addresses, after DMA-mapping them. On systems with pages larger than 8 KB and less than eight fragments per SKB, a macro defined the size of the OAL (Outbound Address List) as 0. For SKBs with more than eight fragments, this would start overwriting the list of addresses already mapped and would make the driver fail to properly unmap the right addresses on architectures with pages larger than 8 KB. With this update, the size of the external list for TX address descriptors has been fixed and qlge no longer fails in the described scenario.

BZ# 772136
Prior to this update, the wrong size was being calculated for the vfinfo structure. Consequently, networking drivers that created a large number of virtual functions caused warning messages to appear when loading and unloading modules. Backported patches from upstream have been provided to resolve this issue.

BZ# 771251
The fcoe_transport_destroy path uses a work queue to destroy the specified FCoE interface. Previously, the destroy_work work queue item blocked another single-threaded work queue. Consequently, a deadlock between queues occurred and the system became unresponsive. With this update, fcoe_transport_destroy has been modified and is now a synchronous operation, allowing the deadlock dependency to be broken. As a result, destroy operations are now able to complete properly, thus fixing this bug.
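As referenced in the BZ# 772317 entry above, the LRO state of an interface can be inspected and, if necessary, toggled manually with the ethtool utility (a minimal sketch; the interface name eth0 is an illustrative assumption):

ethtool -k eth0 | grep large-receive-offload
ethtool -K eth0 lro off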
BZ# 786518
On a system that created and deleted lots of dynamic devices, the 31-bit Linux ifindex object failed to fit in the 16-bit macvtap minor range, resulting in unusable macvtap devices. The problem primarily occurred in a libvirt-controlled environment when many virtual machines were started or restarted, and caused libvirt to report the following message:
Error starting domain: cannot open macvtap tap device /dev/tap222364: No such device or address
With this update, the macvtap's minor device number allocation has been modified so that virtual machines can now be started and restarted as expected in the described scenario.
BZ# 770023
A bug in the splice code caused the file position on the write side of the sendfile() system call to be incorrectly set to the read side file position. This could result in the data being written to an incorrect offset. Now, sendfile() has been modified to correctly use the current file position for the write side file descriptor, thus fixing this bug. Note that in the following common sendfile() scenarios, this bug does not occur: when both read and write file positions are identical and when the file position is not important (e.g. if the write side is a socket).
BZ# 769626
Prior to this update, Active State Power Management (ASPM) was not properly disabled, and this interfered with the correct operation of the hpsa driver. Certain HP BIOS versions do not report a proper disable bit, and when the kernel fails to read this bit, the kernel defaults to enabling ASPM. Consequently, certain servers equipped with a HP Smart Array controller were unable to boot unless the pcie_aspm=off option was specified on the kernel command line. A backported patch has been provided to address this problem, ASPM is now properly disabled, and the system now boots up properly in the described scenario.
BZ# 769590
Due to a race condition, running the "ifenslave -d bond0 eth0" command to remove the slave interface from the bonding device could cause the system to crash when a networking packet was being received at the same time. With this update, the race condition has been fixed and the system no longer crashes under these circumstances.
BZ# 769007
In certain circumstances, the qla2xxx driver was unable to discover fibre channel (FC) tape devices because the ADISC ELS request failed. This update adds the new module parameter, ql2xasynclogin, to address this issue. When this parameter is set to "0", FC tape devices are discovered properly.
BZ# 786960
When running AF_IUCV socket programs with IUCV transport, an IUCV SEVER call was missing in the callback of a receiving IUCV SEVER interrupt. Under certain circumstances, this could prevent z/VM from removing the corresponding IUCV-path completely. This update adds the IUCV SEVER call to the callback, thus fixing this bug. In addition, internal socket states have been merged, thus simplifying the AF_IUCV code.
BZ# 767753
When the nohz=off kernel parameter was set, the kernel could not enter any CPU C-state. With this update, the underlying code has been fixed and transitions to CPU idle states now work as expected.
BZ# 766861
Under heavy memory and file system load, the "mapping->nrpages == 0" assertion could occur in the end_writeback() function. As a consequence, a kernel panic could occur. This update provides a reliable check for mapping->nrpages that prevents the described assertion, thus fixing this bug.
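A quick way to confirm the C-state behavior restored by the BZ# 767753 fix above is to read the cpuidle state names and usage counters from sysfs. A hedged sketch; the paths assume the cpuidle framework is active on the host:

    # Show which C-states the first CPU can enter and how often each was used.
    cat /sys/devices/system/cpu/cpu0/cpuidle/state*/name
    cat /sys/devices/system/cpu/cpu0/cpuidle/state*/usage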
BZ# 765720
An insufficiently designed calculation in the CPU accelerator in the kernel caused an arithmetic overflow in the sched_clock() function when system uptime exceeded 208.5 days. This overflow led to a kernel panic on the systems using the Time Stamp Counter (TSC) or Virtual Machine Interface (VMI) clock source. This update corrects the aforementioned calculation so that this arithmetic overflow and kernel panic can no longer occur under these circumstances.
BZ# 765673
Previously, the cfq_cic_link() function had a race condition. When some processes that shared an ioc issued I/O to the same block device simultaneously, cfq_cic_link() sometimes returned the -EEXIST error code. Consequently, one of the processes started to wait indefinitely. A patch has been provided to address this issue and the cfq_cic_lookup() call is now retried in the described scenario, thus fixing this bug.
BZ# 667925
Previously, the SFQ qdisc packet scheduler class had no bind_tcf() method. Consequently, if a filter was added with the classid parameter to SFQ, a kernel panic occurred due to a null pointer dereference. With this update, the dummy ".unbind_tcf" and ".put" qdisc class options have been added to conform with the behavior of other schedulers, thus fixing this bug.
BZ# 787762
Previously, an incorrect portion of memory was freed when unmapping a DMA (Direct Memory Access) area used by the mlx4 driver. Consequently, a DMA leak occurred after removing a network device that used the driver. This update ensures that the mlx4 driver unmaps the correct portion of memory. As a result, the memory is freed correctly and no DMA leak occurs.
BZ# 787771
Previously, when a memory allocation failure occurred, the mlx4 driver did not free the previously allocated memory correctly. Consequently, hotplug removal of devices using the mlx4 driver could not be performed. With this update, a memory allocation failure still occurs when the device MTU (Maximum Transmission Unit) is set to 9000, but hotplug removal of the device is possible after the failure.
BZ# 759613
Due to a regression, the updated vmxnet3 driver used the ndo_set_features() method instead of various methods of the ethtool utility. Consequently, it was not possible to make changes to vmxnet3-based network adapters in Red Hat Enterprise Linux 6.2. This update restores the ability of the driver to properly set features, such as csum or TSO (TCP Segmentation Offload), via ethtool.
BZ# 759318
Previously, when a MegaRAID 9265/9285 or 9360/9380 controller got a timeout in the megaraid_sas driver, the invalid SCp.ptr pointer could be called from the megasas_reset_timer() function. As a consequence, a kernel panic could occur. An upstream patch has been provided to address this issue and the pointer is now always set correctly.
BZ# 790673
The vmxnet3 driver in Red Hat Enterprise Linux 6.2 introduced a regression. Due to an optimization, in which at least 54 bytes of a frame were copied to a contiguous buffer, shorter frames were dropped as the frame did not have 54 bytes available to copy. With this update, the transfer size for a buffer is limited to 54 bytes or the frame size, whichever is smaller, and short frames are no longer dropped in the described scenario.
BZ# 755885
Previously, when isolating pages for migration, the migration started at the start of a zone while the free scanner started at the end of the zone. Migration avoids entering a new zone by never going beyond what the free scanner scanned.
In very rare cases, nodes overlapped and the migration isolated pages without the LRU lock held, which triggered errors in reclaim or during page freeing. With this update, the isolate_migratepages() function makes a check to ensure that it never isolates pages from a zone it does not hold the LRU lock for, thus fixing this bug.
BZ# 755380
Due to a regression, an attempt to open a directory that did not have a cached dentry failed and the EISDIR error code was returned. The same operation succeeded if a cached dentry existed. This update modifies the nfs_atomic_lookup() function to allow fallbacks to normal look-up in the described scenario.
BZ# 754356
Due to a race condition, the mac80211 framework could deauthenticate with an access point (AP) while still scheduling authentication retries with the same AP. If such an authentication attempt timed out, a warning message was returned to kernel log files. With this update, when deauthenticating, pending authentication retry attempts are checked and cancelled if found, thus fixing this bug.
BZ# 692767
Index allocation in the virtio-blk module was based on a monotonically increasing variable "index". Consequently, released indexes were not reused and after a period of time, no new ones were available. Now, virtio-blk uses the ida API to allocate indexes, thus preventing this bug.
BZ# 795441
When expired user credentials were used in the RENEW() calls, the calls failed. Consequently, all access to the NFS share on the client became unresponsive. With this update, the machine credentials are used with these calls instead, thus preventing this bug most of the time. If no machine credentials are available, user credentials are used as before.
BZ# 753301
Previously, an unnecessary assertion could trigger depending on the value of the xpt_pool field. As a consequence, a node could terminate unexpectedly. The xpt_pool field was in fact unnecessary and this update removes it from the sunrpc code, thus preventing this bug.
BZ# 753237
Prior to this update, the align_va_addr kernel parameter was ignored if secondary CPUs were initialized. This happened because the parameter settings were overridden during the initialization of secondary CPUs. Also, the align_va_addr parameter documentation contained incorrect parameter arguments. With this update, the underlying code has been modified to prevent the overriding and the documentation has been updated. This update also removes the unused code introduced by the patch for BZ# 739456.
BZ# 796277
Concurrent look-up operations of the same inode that was not in the per-AG (Allocation Group) inode cache caused a race condition, triggering warning messages to be returned in the unlock_new_inode() function. Although this bug could only be exposed by NFS or the xfsdump utility, it could lead to inode corruption, inode list corruption, or other related problems. With this update, the XFS_INEW flag is set before inserting the inode into the radix tree. Now, any concurrent look-up operation finds the new inode with XFS_INEW set and the operation is then forced to wait until XFS_INEW is removed, thus fixing this bug.
BZ# 753030
Socket callbacks use the svc_xprt_enqueue() function to add sockets to the pool->sp_sockets list. In normal operation, a server thread will later take the socket off that list. Previously, on the nfsd daemon shutdown, still-running svc_xprt_enqueue() could re-add a socket to the sp_sockets list just before it was deleted.
Consequently, the system could terminate unexpectedly due to memory corruption in the sunrpc module. With this update, the XPT_BUSY flag is put on every socket and svc_xprt_enqueue() now checks this flag, thus preventing this bug.
BZ# 816034
Red Hat Enterprise Virtualization Hypervisor became unresponsive and failed to shut down or restart with the following message:
This happened after configuring the NetConsole functionality with no bridge on top of a bond due to a mistake in the linking to the device structure. With this update, the linking has been fixed and the device binding is processed correctly in this scenario.
BZ# 752528
When the md_raid1_unplug_device() function was called while holding a spinlock, under certain device failure conditions, it was possible for the lock to be requested again, deeper in the call chain, and causing a deadlock. With this update, md_raid1_unplug_device() is no longer called while holding a spinlock, thus fixing this bug.
BZ# 797731
Previously, a bonding device always had the UFO (UDP Fragmentation Offload) feature enabled even when no slave interfaces supported UFO. Consequently, the tracepath command could not return the correct path MTU. With this update, UFO is no longer configured for bonding interfaces by default if the underlying hardware does not support it, thus fixing this bug.
BZ# 703555
When trying to send a kdump file to a remote system via the tg3 driver, the tg3 NIC (network interface controller) could not establish the connection and the file could not be sent. The kdump kernel leaves the MSI-X interrupts enabled as set by the crashed kernel, however, the kdump kernel only enables one CPU and this could cause the interrupt delivery to the tg3 driver to fail. With this update, tg3 enables only a single MSI-X interrupt in the kdump kernel to match the overall environment, thus preventing this bug.
BZ# 751087
On a system with an idle network interface card (NIC) controlled by the e1000e driver, when the card transmitted up to four descriptors, which delayed the write-back and nothing else, the run of the watchdog driver about two seconds later forced a check for a transmit hang in the hardware, which found the old entry in the TX ring. Consequently, a false "Detected Hardware Unit Hang" message was issued to the log. With this update, when the hang is detected, the descriptor is flushed and the hang check is run again, which fixes this bug.
BZ# 750237
Previously, the idle_balance() function dropped or retook the rq->lock parameter, leaving the task vulnerable to the set_tsk_need_resched() function. Now, the parameter is cleared in setup_thread_stack() after a return from balancing and no successfully descheduled or never scheduled task has it set, thus fixing this bug.
BZ# 750166
Previously, the doorbell register was being unconditionally swapped. If the Blue Frame option was enabled, the register was incorrectly written to the descriptor in the little endian format. Consequently, certain adapters could not communicate over a configured IP address. With this update, the doorbell register is not swapped unconditionally, rather, it is always converted to big endian before it is written to the descriptor, thus fixing this bug.
BZ# 705698
The CFQ (Completely Fair Queuing) scheduler does idling on sequential processes. With changes to the IOeventFD feature, the traffic pattern seen by CFQ changed and CFQ considered everything a thread was doing to be sequential I/O operations. Consequently, CFQ did not allow preemption across threads in Qemu.
This update increases the preemption threshold and the idling is now limited in the described scenario without the loss of throughput.
BZ# 798984
When short audio periods were configured, the ALSA PCM midlevel code, shared by all sound cards, could cause audio glitches and other problems. This update adds a time check for double acknowledged interrupts and improves stability of the snd-aloop kernel module, thus fixing this bug.
BZ# 748559
Previously, the utime and stime values in the /proc/<pid>/stat file of a multi-threaded process could wrongly decrease when one of its threads exited. A backported patch has been provided to maintain monotonicity of utime and stime in the described scenario, thus fixing this bug.
BZ# 800555
During tests with active I/O on 256 LUNs (logical unit numbers) over FCoE, a large number of SCSI mid layer error messages were returned. As a consequence, the system became unresponsive. This bug has been fixed by limiting the source of the error messages and the hangs no longer occur in the described scenario.
BZ# 714902
Previously, the compaction code assumed that memory on all cluster nodes is aligned to the same page-block size when isolating a cluster node for migration. However, when running a cluster on IBM System x3850 X5 machines with two MAX 5 memory expansion drawers, memory is not properly aligned. Therefore, the isolate_migratepages() function could pass an invalid Page Frame Number (PFN) to the pfn_to_page() function, which resulted in a kernel panic. With this update, the compaction code has been modified so that the isolate_migratepages() function now calls the pfn_valid() function to validate PFN when necessary, and the kernel no longer panics in the scenario described.
BZ# 801730
The ctx->vif identifier is dereferenced in different parts of the iwlwifi code. When it was set to null before requesting hardware reset, the kernel could terminate unexpectedly. An upstream patch has been provided to address this issue and the crashes no longer occur in the described scenario.
BZ# 717179
Previously, a CPU could service the idle load balancer kick request from another CPU, even without receiving the IPI. Consequently, multiple __smp_call_function_single() calls were done on the same call_single_data structure, leading to a deadlock. To kick a CPU, the scheduler already has the reschedule vector reserved. Now, the kick_process mechanism is used instead of using the generic smp_call_function mechanism to kick off the nohz idle load balancing and avoid the deadlock.
BZ# 746484
A software bug related to Context Caching existed in the Intel IOMMU support module. On some newer Intel systems, the Context Cache mode has changed from previous hardware versions, potentially exposing a Context coherency race. The bug was exposed when performing a series of hot plug and unplug operations of a Virtual Function network device which was immediately configured into the network stack, i.e., successfully performed dynamic host configuration protocol (DHCP). When the coherency race occurred, the assigned device would not work properly in the guest virtual machine. With this update, the Context coherency is corrected and the race and potentially resulting device assignment failure no longer occurs.
BZ# 746169
Due to a running cursor blink timer, when attempting to hibernate certain types of laptops, the i915 kernel driver could corrupt memory. Consequently, the kernel could crash unexpectedly.
An upstream patch has been provided to make the i915 kernel driver use the correct console suspend API and the hibernate function now works as expected.
BZ# 720611
Previously, the eth_type_trans() function was called with the VLAN device type set. If a VLAN device contained a MAC address different from the original device, an incorrect packet type was assigned to the host. Consequently, if the VLAN devices were set up on a bonding interface in Adaptive Load Balancing (ALB) mode, the TCP connection could not be established. With this update, the eth_type_trans() function is called with the original device, ensuring that the connection is established as expected.
BZ# 806081
The slave member of "struct aggregator" does not necessarily point to a slave which is part of the aggregator. It points to the slave structure containing the aggregator structure, while completely different slaves (or no slaves at all) may be part of the aggregator. Due to a regression, the agg_device_up() function wrongly used agg->slave to find the state of the aggregator. Consequently, the wrong active aggregator was reported in the /proc/net/bonding/bond0 file. With this update, agg->lag_ports->slave is used in the described scenario instead, thus fixing this bug.
BZ# 806119
Due to the netdevice handler for FCoE (Fibre Channel over Ethernet) and the exit path blocking the keventd work queue, the destroy operation on an NPIV (N_Port ID Virtualization) FCoE port led to a deadlock interdependency and caused the system to become unresponsive. With this update, the destroy_work item has been moved to its own work queue and is now executed in the context of the user space process requesting the destroy, thus preventing this bug.
BZ# 739811
Previously, when pages were being migrated via NFS with an active requests on them, if a particular inode ended up being deleted, then the VFS called the truncate_inode_pages() function. That function tried to take the page lock, but it was already locked when migrate_page() was called. As a consequence, a deadlock occurred in the code. This bug has been fixed and the migration request is now refused if the PagePrivate parameter is already set, indicating that the page is already associated with an active read or write request.
BZ# 808487
Previously, requests for large data blocks with the ZSECSENDCPRB ioctl() system call failed due to an invalid parameter. A misleading error code was returned, concealing the real problem. With this update, the parameter for the ZSECSENDCPRB request code constant is validated with the correct maximum value. Now, if the parameter length is not valid, the EINVAL error code is returned, thus fixing this bug.
BZ# 809928
Due to incorrect use of the list_for_each_entry_safe() macro, the enumeration of remote procedure calls (RPCs) priority wait queue tasks stored in the tk_wait.links list failed. As a consequence, the rpc_wake_up() and rpc_wake_up_status() functions failed to wake up all tasks. This caused the system to become unresponsive and could significantly decrease system performance. Now, the list_for_each_entry_safe() macro is no longer used in rpc_wake_up(), ensuring reasonable system performance.
BZ# 812259
Various problems happening in the 5 GHz band were discovered in the iwlwifi driver. Consequently, roaming between access points (AP) on 2.4 GHz and 5 GHz did not work properly. This update adds a new option to the driver that disables the 5 GHz band support.
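The BZ# 812259 note above does not name the new iwlwifi option, so the parameter below is a hypothetical placeholder. Module options of this kind are normally set through a file in /etc/modprobe.d/; a hedged sketch:

    # /etc/modprobe.d/iwlwifi.conf
    # <disable_5ghz_option> stands in for the real parameter name, which
    # can be listed with: modinfo -p iwlwifi
    options iwlwifi <disable_5ghz_option>=1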
BZ# 810299
Previously, secondary, tertiary, and other IP addresses added to bond interfaces could overwrite the bond->master_ip and vlan_ip values. Consequently, a wrong IP address could be occasionally used, the MII (Media Independent Interface) status of the backup slave interface went down, and the bonding master interfaces were switching. This update removes the master_ip and vlan_ip elements from the bonding and vlan_entry structures, respectively. Instead, devices are directly queried for the optimal source IP address for ARP requests, thus fixing this bug.
BZ# 727700
An anomaly in the memory map created by the mbind() function caused a segmentation fault in Hotspot Java Virtual Machines with the NUMA-aware Parallel Scavenge garbage collector. A backported upstream patch that fixes mbind() has been provided and the crashes no longer occur in the described scenario.
BZ# 812108
Previously, with a transparent proxy configured and under high load, the kernel could start to drop packets, return error messages such as "ip_rt_bug: addr1 -> addr2, ?", and, under rare circumstances, terminate unexpectedly. This update provides patches addressing these issues and the described problems no longer occur.
BZ# 811815
The kdump utility does not support Xen para-virtualized (PV) drivers on Hardware Virtualized Machine (HVM) guests in Red Hat Enterprise Linux 6. Therefore, kdump failed to start if the guest had loaded PV drivers. This update modifies underlying code to allow kdump to start without PV drivers on HVM guests configured with PV drivers.
BZ# 735105
When running a userspace program, such as the Ceph client, on the ext4 file system, a race condition between the sync/flush thread and the xattr-set thread could occur. This was caused by an incorrectly-set state flag on an inode. As a consequence, memory for the file system was incorrectly allocated, which resulted in file system corruption. With this update, the ext4 code has been modified to prevent this race condition from occurring and file systems are no longer corrupted under these circumstances.
BZ# 728852
An unwanted interrupt was generated when a PCI driver switched the interrupt mechanism from the Message Signaled Interrupt (MSI or MSI-X) to the INTx emulation while shutting down a device. Due to this, an interrupt handler was called repeatedly, and the system became unresponsive. On certain systems, the interrupt handler of Intelligent Platform Management Interface (IPMI) was called while shutting down a device on the way to reboot the system after running kdump. In such a case, soft lockups occurred repeatedly and the shutdown process never finished. With this update, the user can choose not to use MSI or MSI-X for the PCI Express Native Hotplug driver. The switching between the interrupt mechanisms is no longer performed so that the unwanted interrupt is not generated.
BZ# 731917
The time-out period in the qla2x00_fw_ready() function was hard-coded to 20 seconds. This period was too short for new QLogic host bus adapters (HBAs) for Fibre Channel over Ethernet (FCoE). Consequently, some logical unit numbers (LUNs) were missing after a reboot. With this update, the time-out period has been set to 60 seconds so that the modprobe utility is able to recheck the driver module, thus fixing this bug.
BZ# 730045
Previously, the idmapper utility pre-allocated space for all user and group names on an NFS client. Consequently, page allocation failure could occur, preventing a proper mount of a directory.
With this update, the allocation of the names is done dynamically when needed, the size of the allocation table is now greatly reduced, and the allocation failures no longer occur.
BZ# 811703
As part of mapping the application's memory, a buffer to hold page pointers is allocated and the count of mapped pages is stored in the do_dio field. A non-zero do_dio marks that direct I/O is in use. However, do_dio is only one byte in size. Previously, mapping 256 pages overflowed do_dio and caused it to be set to 0. As a consequence, when a large enough number of read or write requests were sent using the st driver's direct I/O path, a memory leak could occur in the driver. This update increases the size of do_dio, thus preventing this bug.
BZ# 728315
In the hpet_next_event() function, an interrupt could have occurred between the read and write of the HPET (High Performance Event Timer) and the value of HPET_COUNTER was then beyond that being written to the comparator (HPET_Tn_CMP). Consequently, the timers were overdue for up to several minutes. Now, a comparison is performed between the value of the counter and the comparator in the HPET code. If the counter is beyond the comparator, the "-ETIME" error code is returned, which fixes this bug.
BZ# 722297
In a Boot-from-SAN (BFS) installation via certain iSCSI adapters, the driver exported sendtarget entries in the sysfs file system, but the iscsistart utility failed to perform discovery. Consequently, a kernel panic occurred during the first boot sequence. With this update, the driver performs the discovery instead, thus preventing this bug.
BZ# 805519
The SCSI layer was not using a large enough buffer to properly read the entire 'BLOCK LIMITS VPD' page that is advertised by a storage array. Consequently, the 'WRITE SAME MAX LEN' parameter was read incorrectly and this could result in the block layer issuing discard requests that were too large for the storage array to handle. This update increases the size of the buffer that the 'BLOCK LIMITS VPD' page is read into and the discard requests are now issued with proper size, thus fixing this bug.
BZ# 803378
The Intelligent Platform Management Interface (IPMI) specification requires a minimum communication timeout of five seconds. Previously, the kernel incorrectly used a timeout of 1 second. This could result in failures to communicate with Baseboard Management Controllers (BMC) under certain circumstances. With this update, the timeout has been increased to five seconds to prevent such problems.
BZ# 758404
The dm_mirror module can send discard requests. However, the dm_io interface did not support discard requests, and running an LVM mirror over a discard-enabled device led to a kernel panic. This update adds support for the discard requests to the dm_io interface, so that kernel panics no longer occur in the described scenario.
BZ# 766051
Previously, when the schedule() function was run shortly after a boot, the following warning message was sometimes returned once per boot on the console:
5915: WARN_ON_ONCE(test_tsk_need_resched(next));
An upstream patch has been provided to address this issue and the WARN_ON_ONCE() call is no longer present in schedule(), thus fixing this bug.
BZ# 786996
Prior to this update, bugs in the close() and send() functions caused delays, and operations of these two functions took too long to complete. This update adds the IUCV_CLOSED state change and improves locking for close(). Also, the net_device handling has been improved in send(). As a result, the delays no longer occur.
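The 'BLOCK LIMITS VPD' page discussed in BZ# 805519 above can also be examined from user space with the sg_vpd tool from the sg3_utils package. A hedged example; the device name is illustrative:

    # Print the Block Limits VPD page (0xb0), including the maximum
    # write same length advertised by the array.
    sg_vpd --page=bl /dev/sdb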
BZ# 770250
On NFS, when repeatedly reading a directory, the content of which kept changing, the client issued the same readdir request twice. Consequently, the following warning messages were returned to the dmesg output:
NFS: directory A/B/C contains a readdir loop.
This update fixes the bug by turning off the loop detection and letting the NFS client try to recover in the described scenario and the messages are no longer returned.
BZ# 635817
A number of patches have been applied to the kernel in Red Hat Enterprise Linux 6.3 to improve overall performance and reduce boot time on extremely large UV systems (patches were tested on a system with 2048 cores and 16 TB of memory). Additionally, boot messages for the SGI UV2 platform were updated.
BZ# 822697
Previously, if creation of an MFN (Machine Frame Number) was lazily deferred, the MFN could appear invalid when it was not. If read_pmd_atomic() was called at this point, it called the paravirtualized __pmd() function, which returned zero, and the kernel could terminate unexpectedly. With this update, the __pmd() call is avoided in the described scenario and the open-coded compound literal is returned instead, thus fixing this bug.
BZ# 781566
Previously, on a system where intermediate P-states were disabled, the powernow-k8 driver could cause a kernel panic in the cpufreq subsystem. Additionally, not all available P-states were recognized by the driver. This update modifies the driver code so that it now properly recognizes all P-states and does not cause the panics in the described scenario.
BZ# 783497
Due to an off-by-one bug in max_blocks checks, on the 64-bit PowerPC architecture, the tmpfs file system did not respect the size= parameter and consequently reported an incorrect number of available blocks. A backported upstream patch has been provided to address this issue and tmpfs now respects the size= parameter as expected.
BZ# 681906
This update introduces a performance enhancement which dramatically improves the time taken to read large directories from disk when accessing them sequentially. Large in this case means several hundred thousand entries or more. It does not affect the speed of looking up individual files (which is already fast), nor does it make any noticeable difference for smaller directories. Once a directory is cached, then again no difference can be noticed in performance. The initial read, however, should be faster due to the readahead which this update introduces.
BZ# 729586
Red Hat Enterprise Linux 6.1 introduced naming scheme adjustments for emulated SCSI disks used with paravirtual drivers to prevent namespace clashes between emulated IDE and emulated SCSI disks. Both emulated disk types use the paravirt block device xvd . Consider the example below:
Table 5.1. The naming scheme example
              | Red Hat Enterprise Linux 6.0 | Red Hat Enterprise Linux 6.1 or later
emulated IDE  | hda -> xvda                  | unchanged
emulated SCSI | sda -> xvda                  | sda -> xvde, sdb -> xvdf, ...
This update introduces a new module parameter, xen_blkfront.sda_is_xvda , that provides a seamless upgrade path from 6.0 to 6.3 kernel release. The default value of xen_blkfront.sda_is_xvda is 0 and it keeps the naming scheme consistent with 6.1 and later releases. When xen_blkfront.sda_is_xvda is set to 1 , the naming scheme reverts to the 6.0-compatible mode.
Note
Note that when upgrading from 6.0 to 6.3 release, if a virtual machine specifies emulated SCSI devices and utilizes paravirtual drivers and uses explicit disk names such as xvd[a-d] , it is advised to add the xen_blkfront.sda_is_xvda=1 parameter to the kernel command line before performing the upgrade.
BZ# 756307
In previous Red Hat Enterprise Linux 6 releases, the kernel option xen_emul_unplug=never did not disable the Xen platform PCI device, which led to using para-virtual devices instead of emulated ones. This fix, in addition to fixing the irq allocation issue for emulated network devices, makes it possible to disable para-virtual drivers using the xen_emul_unplug=never kernel option as described in "Virtualization Guide: Edition 5.8" chapter "12.3.5. Xen Para-virtualized Drivers on Red Hat Enterprise Linux 6".
BZ# 749251
When a process isolation mechanism such as LXC (Linux Containers) was used and the user space was running without the CAP_SYS_ADMIN identifier set, a jailed root user could bypass the dmesg_restrict protection, creating an inconsistency. Now, writing to dmesg_restrict is only allowed when the root user has CAP_SYS_ADMIN set, thus preventing this bug.
BZ# 788591
Previously, the code for loading multipath tables attempted to load the scsi_dh module even when it was already loaded, which caused the system to become unresponsive. With this update, the code does not attempt to load the scsi_dh module when it is already loaded and multipath tables are loaded successfully.
BZ# 801877
Due to an error in the code for ASPM (Active State Power Management) tracking, the system terminated unexpectedly after attempts to remove a PCI bus with both PCIe and PCI devices connected to it when PCIe ASPM was disabled using the "pcie_aspm=off" kernel parameter. This update ensures that the ASPM handling code is not executed when ASPM is disabled and the server no longer crashes in the aforementioned scenario.
BZ# 804608
Due to an error in the underlying source code, the perf performance counter subsystem calculated event frequencies incorrectly. This update fixes the code and calculation of event frequencies now returns correct results.
BZ# 812415
The Intel SCU driver did not properly interact with the system BIOS to honor the Spread Spectrum Clock (SSC) settings and state controlled by the BIOS: even though the SSC mode was enabled in the preboot BIOS environment, it became disabled after boot due to incorrect parameter parsing from the ROM option. With this update, the kernel driver has been modified to correctly parse OEM parameters from the ROM option and the problem no longer occurs.
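On Red Hat Enterprise Linux 6, kernel command-line options such as the xen_blkfront.sda_is_xvda=1 and xen_emul_unplug=never parameters noted above are appended to the kernel line in /boot/grub/grub.conf. A hedged sketch; the kernel version and root device below are illustrative:

    title Red Hat Enterprise Linux (2.6.32-279.el6.x86_64)
            root (hd0,0)
            # Append the option at the end of the kernel line:
            kernel /vmlinuz-2.6.32-279.el6.x86_64 ro root=/dev/VolGroup/lv_root rhgb quiet xen_blkfront.sda_is_xvda=1
            initrd /initramfs-2.6.32-279.el6.x86_64.img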
BZ# 811023
The iw_cxgb4 driver has been updated so as to fix a race that occurred when an ingress abort failed to wake up the thread blocked in rdma_init() causing the application to become unresponsive. Also, the driver has been modified to return and not to call the wake_up() function if no endpoint is found as this is not necessary.
BZ# 818371
When creating a snapshot of a mounted RAID volume, a kernel panic could occur. This happened because a timer designed to wake up an I/O processing thread was not deactivated when the RAID device was replaced by a snapshot origin. The timer then woke a thread that attempted to access memory that had already been freed, resulting in a kernel panic. With this update, this bug has been fixed and the kernel panic no longer occurs in this scenario.
BZ# 821329
Previously, attempts to add a write-intent bitmap to an MD array using v1.0 metadata and then using the array without rebooting caused a kernel OOPS. This occurred because the kernel did not reload the bitmap information correctly after creating the bitmap. With this update, the kernel loads the information correctly on bitmap creation, as expected, and the kernel OOPS no longer occurs.
BZ# 817090
On IBM System z, a kernel panic could occur if there was high traffic workload on HiperSockets devices. This happened due to a conflict in the qeth driver between asynchronous delivery of storage blocks for HiperSockets devices and outdated SIGA (System Information GAthering) retry code. With this update, the SIGA retry code has been removed from the qeth driver and the problem no longer occurs.
BZ# 736931
Previously, certain internal functions in the real-time scheduler only iterated over runnable real-time tasks instead of iterating over all existing tasks. Consequently, when processing multiple real-time threads on multiple logical CPUs and one CPU was disabled, the kernel could panic with the following error message:
kernel BUG at kernel/sched_rt.c:460!
This update modifies the real-time scheduler so that all real-time tasks are processed as expected and the kernel no longer crashes in this scenario.
BZ# 756301
Due to a bug in the qla2xxx driver and the HBA firmware, storage I/O traffic could become unresponsive during storage fault testing. With this update, these bugs have been fixed and storage traffic no longer hangs in the described scenario.
BZ# 767505
When resetting a virtual block device and a config interrupt was received, the config_work handler could attempt to access the device configuration after the device had already been removed from the system but before the device was reset. This resulted in a kernel panic. With this update, the underlying code has been modified to use a mutex lock and disable the device configuration during the reset. Config interrupts can no longer be processed during the reset of the virtual block device and the kernel no longer panics in this scenario.
BZ# 784430
After some recent changes in USB driver code, previous versions of the kernel did not handle, under some circumstances, standard and warm resets of USB3.0 ports correctly. Consequently, the system was not able to detect and automatically mount a USB3.0 device when the device was re-attached to a USB3.0 port after it was unmounted. This update applies several upstream patches related to handling USB3.0 ports, and USB3.0 devices are now automatically re-attached as expected in the scenario described.
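The operation fixed in BZ# 821329 above — adding a write-intent bitmap to a running MD array — is normally performed with the mdadm utility. A hedged example; the array name is illustrative:

    # Add an internal write-intent bitmap to an active MD array,
    # then confirm that the bitmap is in place.
    mdadm --grow --bitmap=internal /dev/md0
    mdadm --detail /dev/md0 | grep -i bitmap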
BZ# 738491
Previously, the mlx4 driver expected Remote Direct Memory Access (RDMA) communication to be performed over an InfiniBand link layer and the driver thus used the InfiniBand link layer part of the code to record transfer statistics. However, Mellanox RDMA over Converged Ethernet (RoCE) devices use an Ethernet link layer for RDMA communication, which meant that RDMA communication was not accounted for under these circumstances, and the displayed statistics were incorrect. With this update, the underlying code has been modified so that the driver now uses a "global" counter for RDMA traffic accounting on Ethernet ports, and users can see correct RDMA transfer statistics.
BZ# 749059
Due to a missing validation check, the mlx4 driver could attempt to access an already freed data element in the core network device structure of the network layer. As a consequence, if a Mellanox ConnectX HCA InfiniBand adapter was unexpectedly removed from the system while the adapter processed ongoing Remote Direct Memory Access (RDMA) communication, the kernel panicked. With this update, the mlx4 driver has been modified to verify that the core network device structure is valid before attempting to use it for outgoing communication. The kernel now no longer panics when an adapter port is unexpectedly disabled.
Enhancements
Note
For more information on the most important of the RHEL 6.3 kernel enhancements, refer to the Kernel and Device Drivers chapters in the Red Hat Enterprise Linux 6.3 Release Notes . For a summary of added or updated procfs entries, sysfs default values, boot parameters, kernel configuration options, or any noticeable behavior changes, refer to Chapter 1, Important Changes to External Kernel Parameters .
BZ# 808315
LED support has been added to the sysfs interfaces.
BZ# 805658
The WinFast VP200 H (Teradici) snd-hda-intel audio device has been added, and is recognized by the alsa driver.
BZ# 744301
The Brocade BFA Fibre Channel and FCoE driver is no longer a Technology Preview. In Red Hat Enterprise Linux 6.3 the BFA driver is fully supported.
BZ# 744302
The Brocade BNA driver for Brocade 10Gb PCIe Ethernet Controllers is no longer a Technology Preview. In Red Hat Enterprise Linux 6.3 the BNA driver is fully supported.
BZ# 696383
Persistent storage (pstore), a file system interface for platform dependent persistent storage, now supports UEFI.
BZ# 661765
This release adds support for a new kernel auditing feature that allows for inter-field comparisons. For each audit event, the kernel collects information about what is causing the event. Now, you can use the "-C" option to tell the kernel to compare: auid, uid, euid, suid, fsuid, or obj_uid; and gid, egid, sgid, fsgid, or obj_gid. The two groups cannot be mixed. Comparisons can use either of the equal or not equal operators.
BZ# 821561
This update adds the rh_check_unsupported() function and blacklists unsupported future Intel processors.
BZ# 786997
When AF_IUCV sockets were using the HiperSockets transport, maximum message size for such transports depended on the MTU (maximum transmission unit) size of the HiperSockets device bound to a AF_IUCV socket. However, a socket program could not determine the maximum size of a message. This update adds the MSGSIZE option for the getsockopt() function. Through this option, the maximum message size can be read and properly handled by AF_IUCV.
BZ# 596419
The cred argument has been included in the security_capable() function so that it can be used in a wider range of call sites.
BZ# 773052
Red Hat Enterprise Linux 6.3 adds support for the Wacom Cintiq 24HD (a 24-inch Drawing Tablet).
BZ# 738720
This update adds additional fixed tracepoints to trace signal events.
BZ# 704003
This update adds the missing raid6test.ko module.
BZ# 788634
The keyrings kernel facility has been upgraded to the upstream version, which provides a number of bug fixes and enhancements over the previous version. In particular, the garbage collection mechanism has been re-worked.
BZ# 788156
The perf tool has been upgraded to upstream version 3.3-rc1, which provides a number of bug fixes and enhancements over the previous version.
BZ# 766952
The wireless LAN subsystem has been updated. It introduces the dma_unmap state API and adds a new kernel header file: include/linux/pci-dma.h.
BZ# 723018
The dm-thinp targets, thin and thin-pool, provide a device mapper device with thin-provisioning and scalable snapshot capabilities. This feature is available as a Technology Preview.
BZ# 768460
In Red Hat Enterprise Linux 6.3, SHA384 and SHA512 HMAC authentication algorithms have been added to XFRM.
Users should upgrade to these updated packages, which contain backported patches to correct these issues, fix these bugs, and add these enhancements. The system must be rebooted for this update to take effect.
5.135.15. RHSA-2013:0662 - Important: kernel security and bug fix update
Updated kernel packages that fix one security issue and several bugs are now available for Red Hat Enterprise Linux 6.3 Extended Update Support.
The Red Hat Security Response Team has rated this update as having important security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available from the CVE link associated with the description below. The kernel packages contain the Linux kernel, the core of any Linux operating system.
Security Fix
CVE-2013-0871 , Important
This update fixes the following security issue:
* A race condition was found in the way the Linux kernel's ptrace implementation handled PTRACE_SETREGS requests when the debuggee was woken due to a SIGKILL signal instead of being stopped. A local, unprivileged user could use this flaw to escalate their privileges.
Bug Fixes
BZ# 908735
Previously, init scripts were unable to set the MAC address of the master interface properly because it was overwritten by the first slave MAC address. To avoid this problem, this update re-introduces the check for an unassigned MAC address before setting the MAC address of the first slave interface as the MAC address of the master interface.
BZ# 909158
When using transparent proxy (TProxy) over IPv6, the kernel previously created neighbor entries for local interfaces and peers that were not reachable directly. This update corrects this problem and the kernel no longer creates invalid neighbor entries.
BZ# 915582
Due to the incorrect validation of a pointer dereference in the d_validate() function, running a command such as ls or find on the MultiVersion File System (MVFS), used by IBM Rational ClearCase, for example, could trigger a kernel panic. This update modifies d_validate() to verify the parent-child dentry relationship by searching through the parent's d_child list. The kernel no longer panics in this situation.
BZ# 916956
A previously backported patch introduced usage of the page_descs length field but did not set the page data length for the FUSE page descriptor. This code path can be exercised by a loopback device (pagecache_write_end) if used over FUSE.
As a result, fuse_copy_page does not copy page data from the page descriptor to the user-space request buffer and the user space can see uninitialized data. This could previously lead to file system data corruption. This problem has been fixed by setting the page_descs length prior to submitting the requests, and FUSE therefore provides correctly initialized data.
Users should upgrade to these updated packages, which contain backported patches to resolve these issues. The system must be rebooted for this update to take effect.
5.135.16. RHSA-2013:0832 - Important: kernel security update
Updated kernel packages that fix one security issue are now available for Red Hat Enterprise Linux 6.3 Extended Update Support.
The Red Hat Security Response Team has rated this update as having important security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available from the CVE link associated with the description below. The kernel packages contain the Linux kernel, the core of any Linux operating system.
Security Fix
CVE-2013-2094 , Important
This update fixes the following security issue:
* It was found that the Red Hat Enterprise Linux 6.1 kernel update (RHSA-2011:0542) introduced an integer conversion issue in the Linux kernel's Performance Events implementation. This led to a user-supplied index into the perf_swevent_enabled array not being validated properly, resulting in out-of-bounds kernel memory access. A local, unprivileged user could use this flaw to escalate their privileges.
A public exploit that affects Red Hat Enterprise Linux 6 is available. Refer to Red Hat Knowledge Solution 373743, linked to in the References, for further information and mitigation instructions for users who are unable to immediately apply this update.
Users should upgrade to these updated packages, which contain a backported patch to correct this issue. The system must be rebooted for this update to take effect.
5.135.17. RHSA-2013:1450 - Important: kernel security and bug fix update
Updated kernel packages that fix three security issues and several bugs are now available for Red Hat Enterprise Linux 6.3 Extended Update Support.
The Red Hat Security Response Team has rated this update as having important security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links associated with each description below. The kernel packages contain the Linux kernel, the core of any Linux operating system.
Security Fixes
CVE-2013-2224 , Important
It was found that the fix for CVE-2012-3552 released via RHSA-2012:1540 introduced an invalid free flaw in the Linux kernel's TCP/IP protocol suite implementation. A local, unprivileged user could use this flaw to corrupt kernel memory via crafted sendmsg() calls, allowing them to cause a denial of service or, potentially, escalate their privileges on the system.
CVE-2013-4299 , Moderate
An information leak flaw was found in the way Linux kernel's device mapper subsystem, under certain conditions, interpreted data written to snapshot block devices. An attacker could use this flaw to read data from disk blocks in free space, which are normally inaccessible.
CVE-2013-2852 , Low
A format string flaw was found in the b43_do_request_fw() function in the Linux kernel's b43 driver implementation.
A local user who is able to specify the "fwpostfix" b43 module parameter could use this flaw to cause a denial of service or, potentially, escalate their privileges.
Red Hat would like to thank Fujitsu for reporting CVE-2013-4299, and Kees Cook for reporting CVE-2013-2852.
Bug Fixes
BZ# 1004185
An insufficiently designed calculation in the CPU accelerator could cause an arithmetic overflow in the set_cyc2ns_scale() function if the system uptime exceeded 208 days prior to using kexec to boot into a new kernel. This overflow led to a kernel panic on the systems using the Time Stamp Counter (TSC) clock source, primarily the systems using Intel Xeon E5 processors that do not reset TSC on soft power cycles. A patch has been applied to modify the calculation so that this arithmetic overflow and kernel panic can no longer occur under these circumstances.
BZ# 1007467
A race condition in the abort task and SSP device task management path of the isci driver could, under certain circumstances, cause the driver to fail cleaning up timed-out I/O requests that were pending on an SAS disk device. As a consequence, the kernel removed such a device from the system. A patch applied to the isci driver fixes this problem by sending the task management function request to the SAS drive anytime the abort function is entered and the task has not completed. The driver now cleans up timed-out I/O requests as expected in this situation.
BZ# 1008507
A kernel panic could occur during path failover on systems using multiple iSCSI, FC or SRP paths to connect an iSCSI initiator and an iSCSI target. This happened because a race condition in the SCSI driver allowed removing a SCSI device from the system before processing its run queue, which led to a NULL pointer dereference. The SCSI driver has been modified and the race is now avoided by holding a reference to a SCSI device run queue while it is active.
All kernel users are advised to upgrade to these updated packages, which contain backported patches to correct these issues. The system must be rebooted for this update to take effect.
5.135.18. RHBA-2013:1190 - kernel bug fix update
Updated kernel packages that fix several bugs are now available for Red Hat Enterprise Linux 6 Extended Update Support.
The kernel packages contain the Linux kernel, the core of any Linux operating system.
Bug Fixes
BZ# 979291
Cyclic adding and removing of the st kernel module could previously cause a system to become unresponsive. This was caused by a disk queue reference count bug in the SCSI tape driver. An upstream patch addressing this bug has been backported to the SCSI tape driver and the system now responds as expected in this situation.
BZ# 982114
The bnx2x driver could have previously reported an occasional MDC/MDIO timeout error along with the loss of the link connection. This could happen in environments using an older boot code because the MDIO clock was set in the beginning of each boot code sequence instead of per CL45 command. To avoid this problem, the bnx2x driver now sets the MDIO clock per CL45 command. Additionally, the MDIO clock is now implemented per EMAC register instead of per port number, which prevents ports from using different EMAC addresses for different PHY accesses. Also, a boot code or Management Firmware (MFW) upgrade is required to prevent the boot code (firmware) from taking over link ownership if the driver's pulse is delayed. The BCM57711 card requires boot code version 6.2.24 or later, and the BCM57712/578xx cards require MFW version 7.4.22 or later.
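To check whether a bnx2x adapter meets the boot code and MFW versions required by the BZ# 982114 fix above, the driver and firmware revisions reported by the adapter can be read with ethtool. A hedged example; the interface name is illustrative:

    # The firmware-version line reports the boot code / MFW revision
    # that the driver detected on the adapter.
    ethtool -i eth0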
BZ# 982469
If the audit queue is too long, the kernel schedules the kauditd daemon to alleviate the load on the audit queue. Previously, if the current audit process had any pending signals in such a situation, it entered a busy-wait loop for the duration of an audit backlog timeout because the wait_for_auditd() function was called as an interruptible task. This could lead to a system lockup in non-preemptive uniprocessor systems. This update fixes the problem by setting wait_for_auditd() as uninterruptible.
BZ# 988226
The kernel could rarely terminate instead of creating a dump file when a multi-threaded process using the FPU aborted. This happened because the kernel did not wait until all threads became inactive and attempted to dump the FPU state of active threads into memory which triggered a BUG_ON() routine. A patch addressing this problem has been applied and the kernel now waits for the threads to become inactive before dumping their FPU state into memory.
BZ# 990087
BE family hardware could falsely indicate an unrecoverable error (UE) on certain platforms and stop further access to be2net-based network interface cards (NICs). A patch has been applied to disable the code that stops further access to hardware for BE family network interface cards (NICs). For a real UE, it is not necessary as the corresponding hardware block is not accessible in this situation.
BZ# 991344
The fnic driver previously allowed I/O requests with the number of SGL descriptors greater than is supported by Cisco UCS Palo adapters. Consequently, the adapter returned any I/O request with more than 256 SGL descriptors with an error indicating invalid SGLs. A patch has been applied to limit the maximum number of supported SGLs in the fnic driver to 256 and the problem no longer occurs.
Users should upgrade to these updated packages, which contain backported patches to correct these bugs. The system must be rebooted for this update to take effect.
"Multicast hash table maximum reached, disabling snooping: vnet1, 512",
"Shutting down interface breth0",
"5915: WARN_ON_ONCE(test_tsk_need_resched(next));",
"NFS: directory A/B/C contains a readdir loop.",
"kernel BUG at kernel/sched_rt.c:460!"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/kernel |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/replacing_nodes/providing-feedback-on-red-hat-documentation_rhodf |
Chapter 2. Requirements | Chapter 2. Requirements
2.1. Red Hat Virtualization Manager Requirements
2.1.1. Hardware Requirements
The minimum and recommended hardware requirements outlined here are based on a typical small to medium-sized installation. The exact requirements vary between deployments based on sizing and load. Hardware certification for Red Hat Virtualization is covered by the hardware certification for Red Hat Enterprise Linux. For more information, see Does Red Hat Virtualization also have hardware certification? . To confirm whether specific hardware items are certified for use with Red Hat Enterprise Linux, see Red Hat certified hardware .
Table 2.1. Red Hat Virtualization Manager Hardware Requirements
Resource | Minimum | Recommended
CPU | A dual core x86_64 CPU. | A quad core x86_64 CPU or multiple dual core x86_64 CPUs.
Memory | 4 GB of available system RAM if Data Warehouse is not installed and if memory is not being consumed by existing processes. | 16 GB of system RAM.
Hard Disk | 25 GB of locally accessible, writable disk space. | 50 GB of locally accessible, writable disk space. You can use the RHV Manager History Database Size Calculator to calculate the appropriate disk space for the Manager history database size.
Network Interface | 1 Network Interface Card (NIC) with bandwidth of at least 1 Gbps. | 1 Network Interface Card (NIC) with bandwidth of at least 1 Gbps.
2.1.2. Browser Requirements
The following browser versions and operating systems can be used to access the Administration Portal and the VM Portal. Browser support is divided into tiers:
Tier 1: Browser and operating system combinations that are fully tested and fully supported. Red Hat Engineering is committed to fixing issues with browsers on this tier.
Tier 2: Browser and operating system combinations that are partially tested, and are likely to work. Limited support is provided for this tier. Red Hat Engineering will attempt to fix issues with browsers on this tier.
Tier 3: Browser and operating system combinations that are not tested, but may work. Minimal support is provided for this tier. Red Hat Engineering will attempt to fix only minor issues with browsers on this tier.
Table 2.2. Browser Requirements
Support Tier | Operating System Family | Browser
Tier 1 | Red Hat Enterprise Linux | Mozilla Firefox Extended Support Release (ESR) version
Tier 1 | Any | Most recent version of Google Chrome, Mozilla Firefox, or Microsoft Edge
Tier 2 | |
Tier 3 | Any | Earlier versions of Google Chrome or Mozilla Firefox
Tier 3 | Any | Other browsers
2.1.3. Client Requirements
Virtual machine consoles can only be accessed using supported Remote Viewer ( virt-viewer ) clients on Red Hat Enterprise Linux and Windows. To install virt-viewer , see Installing Supporting Components on Client Machines in the Virtual Machine Management Guide . Installing virt-viewer requires Administrator privileges. You can access virtual machine consoles using the SPICE, VNC, or RDP (Windows only) protocols. You can install the QXLDOD graphical driver in the guest operating system to improve the functionality of SPICE. SPICE currently supports a maximum resolution of 2560x1600 pixels.
Client Operating System | SPICE Support
Supported QXLDOD drivers are available on Red Hat Enterprise Linux 7.2 and later, and Windows 10.
Note
SPICE may work with Windows 8 or 8.1 using QXLDOD drivers, but it is neither certified nor tested.
2.1.4. Operating System Requirements
The Red Hat Virtualization Manager must be installed on a base installation of Red Hat Enterprise Linux 8.6.
Do not install any additional packages after the base installation, as they may cause dependency issues when attempting to install the packages required by the Manager. Do not enable additional repositories other than those required for the Manager installation. 2.2. Host Requirements Hardware certification for Red Hat Virtualization is covered by the hardware certification for Red Hat Enterprise Linux. For more information, see Does Red Hat Virtualization also have hardware certification? . To confirm whether specific hardware items are certified for use with Red Hat Enterprise Linux, see Find a certified solution . For more information on the requirements and limitations that apply to guests, see Red Hat Enterprise Linux Technology Capabilities and Limits and Supported Limits for Red Hat Virtualization . 2.2.1. CPU Requirements All CPUs must have support for the Intel 64 or AMD64 CPU extensions, and the AMD-V or Intel VT hardware virtualization extensions enabled. Support for the No eXecute flag (NX) is also required. The following CPU models are supported: AMD Opteron G4 Opteron G5 EPYC Intel Nehalem Westmere SandyBridge IvyBridge Haswell Broadwell Skylake Client Skylake Server Cascadelake Server IBM POWER8 POWER9 For each CPU model with security updates, the CPU Type lists a basic type and a secure type. For example: Intel Cascadelake Server Family Secure Intel Cascadelake Server Family The Secure CPU type contains the latest updates. For details, see BZ# 1731395 . 2.2.1.1. Checking if a Processor Supports the Required Flags You must enable virtualization in the BIOS. Power off and reboot the host after this change to ensure that the change is applied. Procedure At the Red Hat Enterprise Linux or Red Hat Virtualization Host boot screen, press any key and select the Boot or Boot with serial console entry from the list. Press Tab to edit the kernel parameters for the selected option. Ensure there is a space after the last kernel parameter listed, and append the parameter rescue . Press Enter to boot into rescue mode. At the prompt, determine that your processor has the required extensions and that they are enabled by running this command: If any output is shown, the processor is hardware virtualization capable. If no output is shown, your processor may still support hardware virtualization; in some circumstances manufacturers disable the virtualization extensions in the BIOS. If you believe this to be the case, consult the system's BIOS and the motherboard manual provided by the manufacturer. 2.2.2. Memory Requirements The minimum required RAM is 2 GB. For cluster levels 4.2 to 4.5, the maximum supported RAM per VM in Red Hat Virtualization Host is 6 TB. For cluster levels 4.6 to 4.7, the maximum supported RAM per VM in Red Hat Virtualization Host is 16 TB. However, the amount of RAM required varies depending on guest operating system requirements, guest application requirements, and guest memory activity and usage. KVM can also overcommit physical RAM for virtualized guests, allowing you to provision guests with RAM requirements greater than what is physically present, on the assumption that the guests are not all working concurrently at peak load. KVM does this by only allocating RAM for guests as required and shifting underutilized guests into swap. 2.2.3. Storage Requirements Hosts require storage to store configuration, logs, kernel dumps, and for use as swap space. Storage can be local or network-based; a quick way to review a host's current layout is sketched below.
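A sketch of reviewing the layout before deciding between local and network-based storage, run on the host itself:
# List block devices and where they are mounted
lsblk
# Check free space on the paths that the default allocations cover
df -h / /var /var/log /var/tmp
# Review any configured swap
swapon --show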
Red Hat Virtualization Host (RHVH) can boot with one, some, or all of its default allocations in network storage. Booting from network storage can result in a freeze if there is a network disconnect. Adding a drop-in multipath configuration file can help address losses in network connectivity. If RHVH boots from SAN storage and loses connectivity, the files become read-only until network connectivity is restored. Using network storage might degrade performance. The minimum storage requirements of RHVH are documented in this section. The storage requirements for Red Hat Enterprise Linux hosts vary based on the amount of disk space used by their existing configuration but are expected to be greater than those of RHVH. The minimum storage requirements for host installation are listed below. However, use the default allocations, which use more storage space. / (root) - 6 GB /home - 1 GB /tmp - 1 GB /boot - 1 GB /var - 5 GB /var/crash - 10 GB /var/log - 8 GB /var/log/audit - 2 GB /var/tmp - 10 GB swap - 1 GB. See What is the recommended swap size for Red Hat platforms? for details. Anaconda reserves 20% of the thin pool size within the volume group for future metadata expansion. This is to prevent an out-of-the-box configuration from running out of space under normal usage conditions. Overprovisioning of thin pools during installation is also not supported. Minimum Total - 64 GiB If you are also installing the RHV-M Appliance for self-hosted engine installation, /var/tmp must be at least 10 GB. If you plan to use memory overcommitment, add enough swap space to provide virtual memory for all of the virtual machines. See Memory Optimization . 2.2.4. PCI Device Requirements Hosts must have at least one network interface with a minimum bandwidth of 1 Gbps. Each host should have two network interfaces, with one dedicated to supporting network-intensive activities, such as virtual machine migration. The performance of such operations is limited by the bandwidth available. For information about how to use PCI Express and conventional PCI devices with Intel Q35-based virtual machines, see Using PCI Express and Conventional PCI Devices with the Q35 Virtual Machine . 2.2.5. Device Assignment Requirements If you plan to implement device assignment and PCI passthrough so that a virtual machine can use a specific PCIe device from a host, ensure the following requirements are met: CPU must support IOMMU (for example, VT-d or AMD-Vi). IBM POWER8 supports IOMMU by default. Firmware must support IOMMU. CPU root ports used must support ACS or ACS-equivalent capability. PCIe devices must support ACS or ACS-equivalent capability. All PCIe switches and bridges between the PCIe device and the root port should support ACS. For example, if a switch does not support ACS, all devices behind that switch share the same IOMMU group, and can only be assigned to the same virtual machine. For GPU support, Red Hat Enterprise Linux 8 supports PCI device assignment of PCIe-based NVIDIA K-Series Quadro (model 2000 series or higher), GRID, and Tesla as non-VGA graphics devices. Currently up to two GPUs may be attached to a virtual machine in addition to one of the standard, emulated VGA interfaces. The emulated VGA is used for pre-boot and installation and the NVIDIA GPU takes over when the NVIDIA graphics drivers are loaded. Note that the NVIDIA Quadro 2000 is not supported, nor is the Quadro K420 card. Check vendor specification and datasheets to confirm that your hardware meets these requirements.
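On a candidate host, you can also sanity-check that the kernel has enabled the IOMMU — a sketch; the exact messages vary by vendor, firmware, and kernel version:
# Look for IOMMU/DMAR initialization messages in the kernel log
dmesg | grep -i -e DMAR -e IOMMU
# A populated directory here indicates that IOMMU groups are active
ls /sys/kernel/iommu_groups/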
The lspci -v command can be used to print information for PCI devices already installed on a system. 2.2.6. vGPU Requirements A host must meet the following requirements in order for virtual machines on that host to use a vGPU: vGPU-compatible GPU GPU-enabled host kernel Installed GPU with correct drivers Select a vGPU type and the number of instances that you would like to use with this virtual machine using the Manage vGPU dialog in the Administration Portal Host Devices tab of the virtual machine. vGPU-capable drivers installed on each host in the cluster vGPU-supported virtual machine operating system with vGPU drivers installed 2.3. Networking requirements 2.3.1. General requirements Red Hat Virtualization requires IPv6 to remain enabled on the physical or virtual machine running the Manager. Do not disable IPv6 on the Manager machine, even if your systems do not use it. 2.3.2. Firewall Requirements for DNS, NTP, and IPMI Fencing The firewall requirements for all of the following topics are special cases that require individual consideration. DNS and NTP Red Hat Virtualization does not create a DNS or NTP server, so the firewall does not need to have open ports for incoming traffic. By default, Red Hat Enterprise Linux allows outbound traffic to DNS and NTP on any destination address. If you disable outgoing traffic, define exceptions for requests that are sent to DNS and NTP servers. Important The Red Hat Virtualization Manager and all hosts (Red Hat Virtualization Host and Red Hat Enterprise Linux host) must have a fully qualified domain name and full, perfectly-aligned forward and reverse name resolution. Running a DNS service as a virtual machine in the Red Hat Virtualization environment is not supported. All DNS services the Red Hat Virtualization environment uses must be hosted outside of the environment. Use DNS instead of the /etc/hosts file for name resolution. Using a hosts file typically requires more work and has a greater chance for errors. IPMI and Other Fencing Mechanisms (optional) For IPMI (Intelligent Platform Management Interface) and other fencing mechanisms, the firewall does not need to have open ports for incoming traffic. By default, Red Hat Enterprise Linux allows outbound IPMI traffic to ports on any destination address. If you disable outgoing traffic, make exceptions for requests being sent to your IPMI or fencing servers. Each Red Hat Virtualization Host and Red Hat Enterprise Linux host in the cluster must be able to connect to the fencing devices of all other hosts in the cluster. If the cluster hosts are experiencing an error (network error, storage error... ) and cannot function as hosts, they must be able to connect to other hosts in the data center. The specific port number depends on the type of the fence agent you are using and how it is configured. The firewall requirement tables in the following sections do not represent this option. 2.3.3. Red Hat Virtualization Manager Firewall Requirements The Red Hat Virtualization Manager requires that a number of ports be opened to allow network traffic through the system's firewall. The engine-setup script can configure the firewall automatically. The firewall configuration documented here assumes a default configuration. Note A diagram of these firewall requirements is available at https://access.redhat.com/articles/3932211 . You can use the IDs in the table to look up connections in the diagram. Table 2.3. 
Red Hat Virtualization Manager Firewall Requirements ID Port(s) Protocol Source Destination Purpose Encrypted by default M1 - ICMP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Manager Optional. May help in diagnosis. No M2 22 TCP System(s) used for maintenance of the Manager, including backend configuration and software upgrades. Red Hat Virtualization Manager Secure Shell (SSH) access. Optional. Yes M3 2222 TCP Clients accessing virtual machine serial consoles. Red Hat Virtualization Manager Secure Shell (SSH) access to enable connection to virtual machine serial consoles. Yes M4 80, 443 TCP Administration Portal clients VM Portal clients Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts REST API clients Red Hat Virtualization Manager Provides HTTP (port 80, not encrypted) and HTTPS (port 443, encrypted) access to the Manager. HTTP redirects connections to HTTPS. Yes M5 6100 TCP Administration Portal clients VM Portal clients Red Hat Virtualization Manager Provides websocket proxy access for a web-based console client, noVNC , when the websocket proxy is running on the Manager. No M6 7410 UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Manager If Kdump is enabled on the hosts, open this port for the fence_kdump listener on the Manager. See fence_kdump Advanced Configuration . fence_kdump does not provide a way to encrypt the connection. However, you can manually configure this port to block access from hosts that are not eligible. No M7 54323 TCP Administration Portal clients Red Hat Virtualization Manager ( ovirt-imageio service) Required for communication with the ovirt-imageio service. Yes M8 6642 TCP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Open Virtual Network (OVN) southbound database Connect to the Open Virtual Network (OVN) database. Yes M9 9696 TCP Clients of external network provider for OVN External network provider for OVN OpenStack Networking API Yes, with configuration generated by engine-setup. M10 35357 TCP Clients of external network provider for OVN External network provider for OVN OpenStack Identity API Yes, with configuration generated by engine-setup. M11 53 TCP, UDP Red Hat Virtualization Manager DNS Server DNS lookup requests from ports above 1023 to port 53, and responses. Open by default. No M12 123 UDP Red Hat Virtualization Manager NTP Server NTP requests from ports above 1023 to port 123, and responses. Open by default. No Note A port for the OVN northbound database (6641) is not listed because, in the default configuration, the only client for the OVN northbound database (6641) is ovirt-provider-ovn . Because they both run on the same host, their communication is not visible to the network. By default, Red Hat Enterprise Linux allows outbound traffic to DNS and NTP on any destination address. If you disable outgoing traffic, make exceptions for the Manager to send requests to DNS and NTP servers. Other nodes may also require DNS and NTP. In that case, consult the requirements for those nodes and configure the firewall accordingly. 2.3.4. Host Firewall Requirements Red Hat Enterprise Linux hosts and Red Hat Virtualization Hosts (RHVH) require a number of ports to be opened to allow network traffic through the system's firewall. The firewall rules are automatically configured by default when adding a new host to the Manager, overwriting any pre-existing firewall configuration.
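After a host is added, the applied rules can be inspected on the host with firewalld, which RHVH and Red Hat Enterprise Linux hosts use by default — a sketch:
# Show the active zone configuration, including opened ports and services
firewall-cmd --list-all
# Check a specific port from the tables, for example the VDSM port
firewall-cmd --query-port=54321/tcp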
To disable automatic firewall configuration when adding a new host, clear the Automatically configure host firewall check box under Advanced Parameters . To customize the host firewall rules, see RHV: How to customize the Host's firewall rules? . Note A diagram of these firewall requirements is available at Red Hat Virtualization: Firewall Requirements Diagram . You can use the IDs in the table to look up connections in the diagram. Table 2.4. Virtualization Host Firewall Requirements ID Port(s) Protocol Source Destination Purpose Encrypted by default H1 22 TCP Red Hat Virtualization Manager Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Secure Shell (SSH) access. Optional. Yes H2 2223 TCP Red Hat Virtualization Manager Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Secure Shell (SSH) access to enable connection to virtual machine serial consoles. Yes H3 161 UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Manager Simple Network Management Protocol (SNMP). Only required if you want Simple Network Management Protocol traps sent from the host to one or more external SNMP managers. Optional. No H4 111 TCP NFS storage server Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts NFS connections. Optional. No H5 5900 - 6923 TCP Administration Portal clients VM Portal clients Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Remote guest console access via VNC and SPICE. These ports must be open to facilitate client access to virtual machines. Yes (optional) H6 5989 TCP, UDP Common Information Model Object Manager (CIMOM) Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Used by Common Information Model Object Managers (CIMOM) to monitor virtual machines running on the host. Only required if you want to use a CIMOM to monitor the virtual machines in your virtualization environment. Optional. No H7 9090 TCP Red Hat Virtualization Manager Client machines Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Required to access the Cockpit web interface, if installed. Yes H8 16514 TCP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Virtual machine migration using libvirt . Yes H9 49152 - 49215 TCP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Virtual machine migration and fencing using VDSM. These ports must be open to facilitate both automated and manual migration of virtual machines. Yes. Depending on agent for fencing, migration is done through libvirt. H10 54321 TCP Red Hat Virtualization Manager Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts VDSM communications with the Manager and other virtualization hosts. Yes H11 54322 TCP Red Hat Virtualization Manager ovirt-imageio service Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Required for communication with the ovirt-imageio service. Yes H12 6081 UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Required, when Open Virtual Network (OVN) is used as a network provider, to allow OVN to create tunnels between hosts. No H13 53 TCP, UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts DNS Server DNS lookup requests from ports above 1023 to port 53, and responses. This port is required and open by default.
No H14 123 UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts NTP Server NTP requests from ports above 1023 to port 123, and responses. This port is required and open by default. H15 4500 TCP, UDP Red Hat Virtualization Hosts Red Hat Virtualization Hosts Internet Security Protocol (IPSec) Yes H16 500 UDP Red Hat Virtualization Hosts Red Hat Virtualization Hosts Internet Security Protocol (IPSec) Yes H17 - AH, ESP Red Hat Virtualization Hosts Red Hat Virtualization Hosts Internet Security Protocol (IPSec) Yes Note By default, Red Hat Enterprise Linux allows outbound traffic to DNS and NTP on any destination address. If you disable outgoing traffic, make exceptions for the Red Hat Virtualization Hosts and Red Hat Enterprise Linux hosts to send requests to DNS and NTP servers. Other nodes may also require DNS and NTP. In that case, consult the requirements for those nodes and configure the firewall accordingly. 2.3.5. Database Server Firewall Requirements Red Hat Virtualization supports the use of a remote database server for the Manager database ( engine ) and the Data Warehouse database ( ovirt-engine-history ). If you plan to use a remote database server, it must allow connections from the Manager and the Data Warehouse service (which can be separate from the Manager). Similarly, if you plan to access a local or remote Data Warehouse database from an external system, the database must allow connections from that system. Important Accessing the Manager database from external systems is not supported. Note A diagram of these firewall requirements is available at https://access.redhat.com/articles/3932211 . You can use the IDs in the table to look up connections in the diagram. Table 2.5. Database Server Firewall Requirements ID Port(s) Protocol Source Destination Purpose Encrypted by default D1 5432 TCP, UDP Red Hat Virtualization Manager Data Warehouse service Manager ( engine ) database server Data Warehouse ( ovirt-engine-history ) database server Default port for PostgreSQL database connections. No, but can be enabled . D2 5432 TCP, UDP External systems Data Warehouse ( ovirt-engine-history ) database server Default port for PostgreSQL database connections. Disabled by default. No, but can be enabled . 2.3.6. Maximum Transmission Unit Requirements The recommended Maximum Transmission Unit (MTU) setting for Hosts during deployment is 1500. It is possible to update this setting to a different MTU after the environment is set up. For more information on changing the MTU setting, see How to change the Hosted Engine VM network MTU .
"grep -E 'svm|vmx' /proc/cpuinfo | grep nx"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/installing_red_hat_virtualization_as_a_standalone_manager_with_local_databases/rhv_requirements |
10.7. Sub-Collections | 10.7. Sub-Collections 10.7.1. Networks Sub-Collection 10.7.1.1. Networks Sub-Collection Networks associated with a cluster are represented with the networks sub-collection. Every host within a cluster connects to these associated networks. The representation of a cluster's network sub-collection is the same as a standard network resource except for the following additional elements: Table 10.4. Additional network elements Element Type Description Properties cluster id= relationship A reference to the cluster of which this network is a member. required Boolean Defines required or optional network status. display Boolean Defines the display network status. Used for backward compatibility. usages complex Defines a set of usage elements for the network. Users can define networks as VM and DISPLAY networks at this level. An API user manipulates the networks sub-collection with the standard REST methods. POST ing a network id or name reference to the networks sub-collection associates the network with the cluster. Example 10.6. Associating a network resource with a cluster Update the resource with a PUT request. Example 10.7. Setting the display network status The required or optional network status is set using a PUT request to specify the Boolean value (true or false) of the required element. Example 10.8. Setting optional network status An association is removed with a DELETE request to the appropriate element in the collection. Example 10.9. Removing a network association from a cluster 10.7.2. Storage Volumes Sub-Collection 10.7.2.1. Red Hat Gluster Storage Volumes Sub-Collection Red Hat Virtualization provides a means for creating and managing Red Hat Gluster Storage volumes. Red Hat Gluster Storage volumes are associated with clusters and are represented with the glustervolumes sub-collection. The representation of a Red Hat Gluster Storage volume resource in the glustervolumes sub-collection is defined using the following elements: Table 10.5. Gluster volume elements Element Type Description Properties volume_type enumerated Defines the volume type. See the capabilities collection for a list of volume types. bricks relationship The sub-collection for the Red Hat Gluster Storage bricks. When creating a new volume, the request requires a set of brick elements to create and manage in this cluster. Requires the server_id of the Red Hat Gluster Storage server and a brick_dir element for the brick directory transport_types complex Defines a set of volume transport_type elements. See the capabilities collection for a list of available transport types. replica_count integer Defines the file replication count for a replicated volume. stripe_count integer Defines the stripe count for a striped volume options complex A set of additional Red Hat Gluster Storage option elements. Each option includes an option name and a value . Example 10.10. An XML representation of a Red Hat Gluster Storage volume Create a Red Hat Gluster Storage volume via a POST request with the required name , volume_type and bricks to the sub-collection. Example 10.11. Creating a Red Hat Gluster Storage volume Remove a Red Hat Gluster Storage volume with a DELETE request. Example 10.12. Removing a Red Hat Gluster Storage volume Important Resources in the glustervolumes sub-collection cannot be updated. 10.7.2.2. Bricks Sub-Collection The glustervolumes sub-collection contains its own bricks sub-collection to define individual bricks in a Red Hat Gluster Storage volume. 
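The bricks of an existing volume can also be listed with a GET request. A sketch using curl; the host name, credentials, and UUIDs are placeholders:
curl -k -u 'admin@internal:<password>' -H 'Accept: application/xml' 'https://manager.example.com/ovirt-engine/api/clusters/<cluster_id>/glustervolumes/<volume_id>/bricks'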
Additional information can be retrieved for GET requests using the All-Content: true header. The representation of a volume's bricks sub-collection is defined using the following elements: Table 10.6. Brick elements Element Type Description Properties server_id string A reference to the Red Hat Gluster Storage server. brick_dir string Defines a brick directory on the Red Hat Gluster Storage server. replica_count integer Defines the file replication count for the brick in the volume. stripe_count integer Defines the stripe count for the brick in the volume Create new bricks via a POST request with the required server_id and brick_dir to the sub-collection. Example 10.13. Adding a brick Remove a brick with a DELETE request. Example 10.14. Removing a brick Important Resources in the bricks sub-collection cannot be updated. 10.7.2.3. Actions 10.7.2.3.1. Start Action The start action makes a Gluster volume available for use. Example 10.15. Starting a Volume Use an optional force Boolean element to force the action for a running volume. This is useful for starting disabled brick processes in a running volume. 10.7.2.3.2. Stop Action The stop action deactivates a Gluster volume. Example 10.16. Stopping a Volume Use an optional force Boolean element to brute force the stop action. 10.7.2.3.3. Set Option Action The setoption action sets a volume option. Example 10.17. Set an option 10.7.2.3.4. Reset Option Action The resetoption action resets a volume option. Example 10.18. Reset an option 10.7.2.3.5. Reset All Options Action The resetalloptions action resets all volume options. Example 10.19. Reset all options 10.7.3. Affinity Groups Sub-Collection 10.7.3.1. Affinity Group Sub-Collection The representation of a virtual machine affinity group resource in the affinitygroups sub-collection is defined using the following elements: Table 10.7. Affinity group elements Element Type Description Properties name string A plain text, human readable name for the affinity group. cluster relationship A reference to the cluster to which the affinity group applies. positive Boolean: true or false Specifies whether the affinity group applies positive affinity or negative affinity to virtual machines that are members of that affinity group. enforcing Boolean: true or false Specifies whether the affinity group uses hard or soft enforcement of the affinity applied to virtual machines that are members of that affinity group. Example 10.20. An XML representation of a virtual machine affinity group Create a virtual machine affinity group via a POST request with the required name attribute. Example 10.21. Creating a virtual machine affinity group Remove a virtual machine affinity group with a DELETE request. Example 10.22. Removing a virtual machine affinity group | [
"POST /ovirt-engine/api/clusters/99408929-82cf-4dc7-a532-9d998063fa95/networks HTTP/1.1 Accept: application/xml Content-Type: application/xml <network id=\"da05ac09-00be-45a1-b0b5-4a6a2438665f\"> <name>ovirtmgmt</name> </network> HTTP/1.1 201 Created Location: http://{host}/clusters/99408929-82cf-4dc7-a532-9d998063fa95/networks/da05ac09-00be-45a1-b0b5-4a6a2438665f Content-Type: application/xml <network id=\"da05ac09-00be-45a1-b0b5-4a6a2438665f\" href=\"/ovirt-engine/api/clusters/99408929-82cf-4dc7-a532-9d998063fa95/networks/ da05ac09-00be-45a1-b0b5-4a6a2438665f\"> <name>ovirtmgmt</name> <status> <state>operational</state> </status> <description>Display Network</description> <cluster id=\"99408929-82cf-4dc7-a532-9d998063fa95\" href=\"/ovirt-engine/api/clusters/99408929-82cf-4dc7-a532-9d998063fa95\"/> <data_center id=\"d70d5e2d-b8ad-494a-a4d2-c7a5631073c4\" href=\"/ovirt-engine/api/datacenters/d70d5e2d-b8ad-494a-a4d2-c7a5631073c4\"/> <required>true</required> <usages> <usage>VM</usage> </usages> </network>",
"PUT /ovirt-engine/api/clusters/99408929-82cf-4dc7-a532-9d998063fa95/networks/da05ac09-00be-45a1-b0b5-4a6a2438665f HTTP/1.1 Accept: application/xml Content-Type: application/xml <network> <required>false</required> <usages> <usage>VM</usage> <usage>DISPLAY</usage> </usages> </network>",
"PUT /ovirt-engine/api/clusters/99408929-82cf-4dc7-a532-9d998063fa95/networks/da05ac09-00be-45a1-b0b5-4a6a2438665f HTTP/1.1 Accept: application/xml Content-Type: application/xml <network> <required>false</required> </network>",
"DELETE /ovirt-engine/api/clusters/99408929-82cf-4dc7-a532-9d998063fa95/networks/da05ac09-00be-45a1-b0b5-4a6a2438665f HTTP/1.1 HTTP/1.1 204 No Content",
"<gluster_volume id=\"99408929-82cf-4dc7-a532-9d998063fa95\" href=\"/ovirt-engine/api/clusters/99408929-82cf-4dc7-a532-9d998063fa95 /glustervolume/e199f877-900a-4e30-8114-8e3177f47651\"> <name>GlusterVolume1</name> <link rel=\"bricks\" href=\"/ovirt-engine/api/clusters/99408929-82cf-4dc7-a532-9d998063fa95 /glustervolume/e199f877-900a-4e30-8114-8e3177f47651/bricks\"/> <volume_type>DISTRIBUTED_REPLICATE</volume_type> <transport_types> <transport_type>TCP</transport_type> </transport_types> <replica_count>2</replica_count> <stripe_count>1</stripe_count> <options> <option> <name>cluster.min-free-disk</name> <value>536870912</value> </option> </options> </gluster_volume>",
"POST /ovirt-engine/api/clusters/99408929-82cf-4dc7-a532-9d998063fa95/glustervolumes HTTP/1.1 Accept: application/xml Content-Type: application/xml <gluster_volume> <name>GlusterVolume1</name> <volume_type>DISTRIBUTED_REPLICATE</volume_type> <bricks> <brick> <server_id>server1</server_id> <brick_dir>/exp1</brick_dir> </brick> <bricks> </gluster_volume>",
"DELETE /ovirt-engine/api/clusters/99408929-82cf-4dc7-a532-9d998063fa95/glustervolumes/e199f877-900a-4e30-8114-8e3177f47651 HTTP/1.1 HTTP/1.1 204 No Content",
"POST /ovirt-engine/api/clusters/99408929-82cf-4dc7-a532-9d998063fa95/glustervolumes/e199f877-900a-4e30-8114-8e3177f47651/bricks HTTP/1.1 Accept: application/xml Content-Type: application/xml <brick> <server_id>server1</server_id> <brick_dir>/exp1</brick_dir> </brick>",
"DELETE /ovirt-engine/api/clusters/99408929-82cf-4dc7-a532-9d998063fa95/glustervolumes/e199f877-900a-4e30-8114-8e3177f47651/bricks/0a473ebe-01d2-444d-8f58-f565a436b8eb HTTP/1.1 HTTP/1.1 204 No Content",
"POST /ovirt-engine/api/clusters/99408929-82cf-4dc7-a532-9d998063fa95/glustervolumes/e199f877-900a-4e30-8114-8e3177f47651/start HTTP/1.1 Accept: application/xml Content-Type: application/xml <action/>",
"POST /ovirt-engine/api/clusters/99408929-82cf-4dc7-a532-9d998063fa95/glustervolumes/e199f877-900a-4e30-8114-8e3177f47651/stop HTTP/1.1 Accept: application/xml Content-Type: application/xml <action/>",
"POST /ovirt-engine/api/clusters/99408929-82cf-4dc7-a532-9d998063fa95/glustervolumes/e199f877-900a-4e30-8114-8e3177f47651/setoption HTTP/1.1 Accept: application/xml Content-Type: application/xml <action> <option> <name>cluster.min-free-disk</name> <value>536870912</value> </option> </action>",
"POST /ovirt-engine/api/clusters/99408929-82cf-4dc7-a532-9d998063fa95/glustervolumes/e199f877-900a-4e30-8114-8e3177f47651/resetoption HTTP/1.1 Accept: application/xml Content-Type: application/xml <action> <option> <name>cluster.min-free-disk</name> </option> </action>",
"POST /ovirt-engine/api/clusters/99408929-82cf-4dc7-a532-9d998063fa95/glustervolumes/e199f877-900a-4e30-8114-8e3177f47651/resetalloptions HTTP/1.1 Accept: application/xml Content-Type: application/xml <action/>",
"<affinity_group href=\"/ovirt-engine/api/clusters/00000000-0000-0000-0000-000000000000/affinitygroups/00000000-0000-0000-0000-000000000000\" id=\"00000000-0000-0000-0000-000000000000\"> <name>AF_GROUP_001</name> <cluster href=\"/ovirt-engine/api/clusters/00000000-0000-0000-0000-000000000000\" id=\"00000000-0000-0000-0000-000000000000\"/> <positive>true</positive> <enforcing>true</enforcing> </affinity_group>",
"POST https://XX.XX.XX.XX/ovirt-engine/api/clusters/00000000-0000-0000-0000-000000000000/affinitygroups HTTP/1.1 Accept: application/xml Content-Type: application/xml <affinity_group> <name>AF_GROUP_001</name> <positive>true</positive> <enforcing>true</enforcing> </affinity_group>",
"DELETE https://XX.XX.XX.XX/ovirt-engine/api/clusters/00000000-0000-0000-0000-000000000000/affinitygroups/00000000-0000-0000-0000-000000000000 HTTP/1.1 HTTP/1.1 204 No Content"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/sect-sub-collections4 |
Chapter 14. Activating and deactivating telemetry | Chapter 14. Activating and deactivating telemetry Activate the telemetry module to help Ceph developers understand how Ceph is used and what problems users might be experiencing. This helps improve the dashboard experience. Activating the telemetry module sends anonymous data about the cluster back to the Ceph developers. View the telemetry data that is sent to the Ceph developers on the public telemetry dashboard . This allows the community to easily see summary statistics on how many clusters are reporting, their total capacity and OSD count, and version distribution trends. The telemetry report is broken down into several channels, each with a different type of information. Assuming telemetry has been enabled, you can turn the individual channels on and off. If telemetry is off, the per-channel setting has no effect. Basic Provides basic information about the cluster. Crash Provides information about daemon crashes. Device Provides information about device metrics. Ident Provides user-provided identifying information about the cluster. Perf Provides various performance metrics of the cluster. The data reports contain information that helps the developers gain a better understanding of the way Ceph is used. The data includes counters and statistics on how the cluster has been deployed, the version of Ceph, the distribution of the hosts, and other parameters. Important The data reports do not contain any sensitive data like pool names, object names, object contents, hostnames, or device serial numbers. Note Telemetry can also be managed by using an API. For more information, see the Telemetry chapter in the Red Hat Ceph Storage Developer Guide . Procedure Activate the telemetry module in one of the following ways: From the banner within the Ceph dashboard. Go to Settings->Telemetry configuration . Select each channel that telemetry should be enabled on. Note For detailed information about each channel type, click More Info next to the channels. Complete the Contact Information for the cluster. Enter the contact, Ceph cluster description, and organization. Optional: Complete the Advanced Settings field options. Interval Set the interval by hour. The module compiles and sends a new report per this hour interval. The default interval is 24 hours. Proxy Use this to configure an HTTP or HTTPS proxy server if the cluster cannot directly connect to the configured telemetry endpoint. Add the server in one of the following formats: https://10.0.0.1:8080 or https://ceph:[email protected]:8080 The default endpoint is telemetry.ceph.com . Click Next . This displays the Telemetry report preview before enabling telemetry. Review the Report preview . Note The report can be downloaded and saved locally or copied to the clipboard. Select I agree to my telemetry data being submitted under the Community Data License Agreement . Enable the telemetry module by clicking Update . The following message is displayed, confirming the telemetry activation: | [
"The Telemetry module has been configured and activated successfully"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/dashboard_guide/activating-and-deactivating-telemetry_dash |
Chapter 2. Configuring notifications and integrations for Insights tasks events | Chapter 2. Configuring notifications and integrations for Insights tasks events You can configure the notifications service on the Red Hat Hybrid Cloud Console to send notifications whenever the Red Hat Insights tasks service detects certain events that occur during the start and execution of a task. Using the notifications service can provide an alternative to having to continually check the tasks Activity tab for events related to a task's status. You can find the notifications service settings at Red Hat Hybrid Cloud Console > Settings > Notifications . When you begin to think about configuring events for which you want notifications for Insights tasks, it is important to understand the difference between an Insights task, an Insights task execution, and an Insights task job, and how they work to accomplish a specific task. Insights task: a predefined script or playbook that is designed to accomplish a specific task. Insights task execution: an instance of running that script or playbook on one or more systems. Insights task job: the execution of a specific task on a specific system. For example, you can configure the notifications service to automatically send an email message when an Insights task starts, completes, or fails. As another example, you can configure the notifications service to automatically send an email message when an Insights task job starts or completes. In addition to sending email messages, you can configure the notifications service to send event data in other ways: Using an authenticated client to query Red Hat Insights APIs for event data Using webhooks to send events to third-party applications that accept inbound requests Integrating notifications with applications such as Splunk to route tasks events to the application dashboard, or to your preferred messaging application such as Slack or Microsoft Teams. Configuring the notifications service to inform members of your Red Hat account of tasks events requires three main steps: An Organization Administrator creates a User Access group with the Notifications administrator role, and then adds account members to the group. A Notifications administrator sets up behavior groups for events in the notifications service. Behavior groups specify the delivery method for each notification. The Notifications administrator selects the event types to make available for the specified group of users. For example, a behavior group can specify whether to send email notifications to all users, or just to Organization Administrators. Members on the account who want to receive email notifications about events must set their user preferences so that they receive individual emails for each event. Note You must be a Notifications administrator to view configurable tasks events. You can learn more about events and notifications by using resources in the following Additional Resources section. ADDITIONAL RESOURCES See Configuring notifications on the Red Hat Hybrid Cloud Console for more information about how to set up notifications for tasks events. | null | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/assessing_and_remediating_system_issues_using_red_hat_insights_tasks/notifications-and-events-for-tasks_overview-tasks |
Chapter 7. Installing a cluster on OpenStack in a restricted network | Chapter 7. Installing a cluster on OpenStack in a restricted network In OpenShift Container Platform 4.14, you can install a cluster on Red Hat OpenStack Platform (RHOSP) in a restricted network by creating an internal mirror of the installation release content. 7.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You verified that OpenShift Container Platform 4.14 is compatible with your RHOSP version by using the Supported platforms for OpenShift clusters section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . You created a registry on your mirror host and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You understand performance and scalability practices for cluster scaling, control plane sizing, and etcd. For more information, see Recommended practices for scaling the cluster . You have the metadata service enabled in RHOSP. 7.2. About installations in restricted networks In OpenShift Container Platform 4.14, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 7.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 7.3. Resource guidelines for installing OpenShift Container Platform on RHOSP To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements: Table 7.1. Recommended resources for a default OpenShift Container Platform cluster on RHOSP Resource Value Floating IP addresses 3 Ports 15 Routers 1 Subnets 1 RAM 88 GB vCPUs 22 Volume storage 275 GB Instances 7 Security groups 3 Security group rules 60 Server groups 2 - plus 1 for each additional availability zone in each machine pool A cluster might function with fewer than recommended resources, but its performance is not guaranteed. 
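Before installing, it can help to compare these figures with what the project can actually allocate — a sketch, assuming the RHOSP command-line client is already configured for the target cloud:
# Show the quota limits for the project
openstack quota show <project>
# Show absolute limits, including cores, RAM, and instances currently in use
openstack limits show --absolute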
Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Note By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. 7.3.1. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 7.3.2. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory and 2 vCPUs At least 100 GB storage space from the RHOSP quota Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. 7.3.3. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 7.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 7.5. Enabling Swift on RHOSP Swift is operated by a user account with the swiftoperator role. Add the role to an account before you run the installation program. Important If the Red Hat OpenStack Platform (RHOSP) object storage service , commonly known as Swift, is available, OpenShift Container Platform uses it as the image registry storage. If it is unavailable, the installation program relies on the RHOSP block storage service, commonly known as Cinder. If Swift is present and you want to use it, you must enable access to it. If it is not present, or if you do not want to use it, skip this section. Important RHOSP 17 sets the rgw_max_attr_size parameter of Ceph RGW to 256 characters. This setting causes issues with uploading container images to the OpenShift Container Platform registry. You must set the value of rgw_max_attr_size to at least 1024 characters. Before installation, check if your RHOSP deployment is affected by this problem. If it is, reconfigure Ceph RGW. Prerequisites You have a RHOSP administrator account on the target environment. The Swift service is installed. 
On Ceph RGW , the account in url option is enabled. Procedure To enable Swift on RHOSP: As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will access Swift: $ openstack role add --user <user> --project <project> swiftoperator Your RHOSP deployment can now use Swift for the image registry. 7.6. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, login information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml . If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation. clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0' If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate: clouds: shiftstack: ... cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem" Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run: $ oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 7.7. Setting OpenStack Cloud Controller Manager options Optionally, you can edit the OpenStack Cloud Controller Manager (CCM) configuration for your cluster. This configuration controls how OpenShift Container Platform interacts with Red Hat OpenStack Platform (RHOSP). For a complete list of configuration parameters, see the "OpenStack Cloud Controller Manager reference guide" page in the "Installing on OpenStack" documentation. Procedure If you have not already generated manifest files for your cluster, generate them by running the following command: $ openshift-install --dir <destination_directory> create manifests In a text editor, open the cloud-provider configuration manifest file. For example: $ vi openshift/manifests/cloud-provider-config.yaml Modify the options according to the CCM reference guide. Configuring Octavia for load balancing is a common case for clusters that do not use Kuryr. For example: #...
[LoadBalancer] lb-provider = "amphora" 1 floating-network-id="d3deb660-4190-40a3-91f1-37326fe6ec4a" 2 create-monitor = True 3 monitor-delay = 10s 4 monitor-timeout = 10s 5 monitor-max-retries = 1 6 #... 1 This property sets the Octavia provider that your load balancer uses. It accepts "ovn" or "amphora" as values. If you choose to use OVN, you must also set lb-method to SOURCE_IP_PORT . 2 This property is required if you want to use multiple external networks with your cluster. The cloud provider creates floating IP addresses on the network that is specified here. 3 This property controls whether the cloud provider creates health monitors for Octavia load balancers. Set the value to True to create health monitors. As of RHOSP 16.2, this feature is only available for the Amphora provider. 4 This property sets the frequency with which endpoints are monitored. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True . 5 This property sets the time that monitoring requests are open before timing out. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True . 6 This property defines how many successful monitoring requests are required before a load balancer is marked as online. The value must be an integer. This property is required if the value of the create-monitor property is True . Important Prior to saving your changes, verify that the file is structured correctly. Clusters might fail if properties are not placed in the appropriate section. Important You must set the value of the create-monitor property to True if you use services that have the value of the .spec.externalTrafficPolicy property set to Local . The OVN Octavia provider in RHOSP 16.2 does not support health monitors. Therefore, services that have ETP parameter values set to Local might not respond when the lb-provider value is set to "ovn" . Important For installations that use Kuryr, Kuryr handles relevant services. There is no need to configure Octavia load balancing in the cloud provider. Save the changes to the file and proceed with installation. Tip You can update your cloud provider configuration after you run the installer. On a command line, run: $ oc edit configmap -n openshift-config cloud-provider-config After you save your changes, your cluster will take some time to reconfigure itself. The process is complete if none of your nodes have a SchedulingDisabled status. 7.8. Creating the RHCOS image for restricted network installations Download the Red Hat Enterprise Linux CoreOS (RHCOS) image to install OpenShift Container Platform on a restricted network Red Hat OpenStack Platform (RHOSP) environment. Prerequisites Obtain the OpenShift Container Platform installation program. For a restricted network installation, the program is on your mirror registry host. Procedure Log in to the Red Hat Customer Portal's Product Downloads page . Under Version , select the most recent release of OpenShift Container Platform 4.14 for RHEL 8. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Download the Red Hat Enterprise Linux CoreOS (RHCOS) - OpenStack Image (QCOW) image. Decompress the image.
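The check-decompress-upload sequence typically looks like the following sketch; the file name is illustrative and varies by release (see the notes that follow):
# Identify whether and how the downloaded image is compressed
file rhcos-openstack.x86_64.qcow2.gz
# Decompress it if needed
gunzip rhcos-openstack.x86_64.qcow2.gz
# Upload the decompressed image to Glance
openstack image create --disk-format qcow2 --file rhcos-openstack.x86_64.qcow2 rhcos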
Note You must decompress the image before the cluster can use it. The name of the downloaded file might not contain a compression extension, like .gz or .tgz . To find out if or how the file is compressed, check it on a command line with a utility such as file (see the sketch above). Upload the image that you decompressed to a location that is accessible from the bastion server, like Glance (see the upload sketch above). Important Depending on your RHOSP environment, you might be able to upload the image in either .raw or .qcow2 formats. If you use Ceph, you must use the .raw format. Warning If the installation program finds multiple images with the same name, it chooses one of them at random. To avoid this behavior, create unique names for resources in RHOSP. The image is now available for a restricted installation. Note the image name or location for use in OpenShift Container Platform deployment. 7.9. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. You have the imageContentSources values that were generated during mirror registry creation. You have obtained the contents of the certificate for your mirror registry. You have retrieved a Red Hat Enterprise Linux CoreOS (RHCOS) image and uploaded it to an accessible location. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: $ ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals; therefore, you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: $ rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select openstack as the platform to target. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster. Specify the floating IP address to use for external access to the OpenShift API. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name. Enter a name for your cluster. The name must be 14 or fewer characters long.
In the install-config.yaml file, set the value of platform.openstack.clusterOSImage to the image location or name. For example: platform: openstack: clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d Edit the install-config.yaml file to provide the additional information that is required for an installation in a restricted network. Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSources that you recorded during mirror registry creation. Optional: Set the publishing strategy to Internal : publish: Internal By setting this option, you create an internal Ingress Controller and a private load balancer. Make any other modifications to the install-config.yaml file that you require. You can find more information about the available parameters in the Installation configuration parameters section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for OpenStack 7.9.1. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Note Kuryr installations default to HTTP proxies. Prerequisites For Kuryr installations on restricted networks that use the Proxy object, the proxy must be able to reply to the router that the cluster uses. To add a static route for the proxy configuration, from a command line as the root user, enter: $ ip route add <cluster_network_cidr> via <installer_subnet_gateway> The restricted subnet must have a gateway that is defined and available to be linked to the Router resource that Kuryr creates. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs.
You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 7.9.2. Sample customized install-config.yaml file for restricted OpenStack installations This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program. 
apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: region: region1 cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: - mirrors: - <mirror_registry>/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_registry>/<repo_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 7.10. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
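If the FIPS restriction in the note above applies to your installation, you can generate an ECDSA key instead of an ed25519 key. The following is a minimal sketch; the key path is an example only:

ssh-keygen -t ecdsa -b 521 -N '' -f ~/.ssh/id_ecdsa   # ECDSA key on the NIST P-521 curve
cat ~/.ssh/id_ecdsa.pub                               # public key to provide to the installation program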
If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 7.11. Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally. 7.11.1. Enabling access with floating IP addresses Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API and cluster applications. Procedure Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP: USD openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP: USD openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network> Add records that follow these patterns to your DNS server for the API and Ingress FIPs: api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP> Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file: <api_floating_ip> api.<cluster_name>.<base_domain> <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain> <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain> <application_floating_ip> integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain> The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc command-line tools. You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing. Add the FIPs to the install-config.yaml file as the values of the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you use these values, you must also enter an external network as the value of the platform.openstack.externalNetwork parameter in the install-config.yaml file. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. 7.11.2.
Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. In the install-config.yaml file, do not define the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you cannot provide an external network, you can also leave platform.openstack.externalNetwork blank. If you do not provide a value for platform.openstack.externalNetwork , a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. You must configure external connectivity on your own. If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example: api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP> If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 7.12. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. 
If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 7.13. Verifying cluster status You can verify your OpenShift Container Platform cluster's status during or after installation. Procedure In the cluster environment, export the administrator's kubeconfig file: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. View the control plane and compute machines created after a deployment: USD oc get nodes View your cluster's version: USD oc get clusterversion View your Operators' status: USD oc get clusteroperator View all running pods in the cluster: USD oc get pods -A 7.14. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 7.15. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 7.16. 
Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 7.17. Next steps Customize your cluster . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . If necessary, see Registering your disconnected cluster . Configure image streams for the Cluster Samples Operator and the must-gather tool. Learn how to use Operator Lifecycle Manager (OLM) on restricted networks . If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses . | [
"openstack role add --user <user> --project <project> swiftoperator",
"clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'",
"clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"",
"oc edit configmap -n openshift-config cloud-provider-config",
"openshift-install --dir <destination_directory> create manifests",
"vi openshift/manifests/cloud-provider-config.yaml",
"# [LoadBalancer] lb-provider = \"amphora\" 1 floating-network-id=\"d3deb660-4190-40a3-91f1-37326fe6ec4a\" 2 create-monitor = True 3 monitor-delay = 10s 4 monitor-timeout = 10s 5 monitor-max-retries = 1 6 #",
"oc edit configmap -n openshift-config cloud-provider-config",
"file <name_of_downloaded_file>",
"openstack image create --file rhcos-44.81.202003110027-0-openstack.x86_64.qcow2 --disk-format qcow2 rhcos-USD{RHCOS_VERSION}",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"platform: openstack: clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d",
"pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release",
"publish: Internal",
"ip route add <cluster_network_cidr> via <installer_subnet_gateway>",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: region: region1 cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: - mirrors: - <mirror_registry>/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_registry>/<repo_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>",
"api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>",
"api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc get nodes",
"oc get clusterversion",
"oc get clusteroperator",
"oc get pods -A",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_openstack/installing-openstack-installer-restricted |
6.6. Resource Operations | 6.6. Resource Operations To ensure that resources remain healthy, you can add a monitoring operation to a resource's definition. If you do not specify a monitoring operation for a resource, by default the pcs command will create a monitoring operation, with an interval that is determined by the resource agent. If the resource agent does not provide a default monitoring interval, the pcs command will create a monitoring operation with an interval of 60 seconds. Table 6.4, "Properties of an Operation" summarizes the properties of a resource monitoring operation. Table 6.4. Properties of an Operation Field Description id Unique name for the action. The system assigns this when you configure an operation. name The action to perform. Common values: monitor , start , stop interval If set to a nonzero value, a recurring operation is created that repeats at this frequency, in seconds. A nonzero value makes sense only when the action name is set to monitor . A recurring monitor action will be executed immediately after a resource start completes, and subsequent monitor actions are scheduled starting at the time the monitor action completed. For example, if a monitor action with interval=20s is executed at 01:00:00, the monitor action does not occur at 01:00:20, but at 20 seconds after the first monitor action completes. If set to zero, which is the default value, this parameter allows you to provide values to be used for operations created by the cluster. For example, if the interval is set to zero, the name of the operation is set to start , and the timeout value is set to 40, then Pacemaker will use a timeout of 40 seconds when starting this resource. A monitor operation with a zero interval allows you to set the timeout / on-fail / enabled values for the probes that Pacemaker does at startup to get the current status of all resources when the defaults are not desirable. timeout If the operation does not complete in the amount of time set by this parameter, abort the operation and consider it failed. The default value is the value of timeout if set with the pcs resource op defaults command, or 20 seconds if it is not set. If you find that your system includes a resource that requires more time than the system allows to perform an operation (such as start , stop , or monitor ), investigate the cause and if the lengthy execution time is expected you can increase this value. The timeout value is not a delay of any kind, nor does the cluster wait the entire timeout period if the operation returns before the timeout period has completed. on-fail The action to take if this action ever fails. Allowed values: * ignore - Pretend the resource did not fail * block - Do not perform any further operations on the resource * stop - Stop the resource and do not start it elsewhere * restart - Stop the resource and start it again (possibly on a different node) * fence - STONITH the node on which the resource failed * standby - Move all resources away from the node on which the resource failed The default for the stop operation is fence when STONITH is enabled and block otherwise. All other operations default to restart . enabled If false , the operation is treated as if it does not exist. Allowed values: true , false 6.6.1. Configuring Resource Operations You can configure monitoring operations when you create a resource, using the following command. For example, the following command creates an IPaddr2 resource with a monitoring operation. 
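The command referenced above takes the following concrete form; it also appears verbatim in the command listing at the end of this section:

pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.99 cidr_netmask=24 nic=eth2 op monitor interval=30s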
The new resource is called VirtualIP with an IP address of 192.168.0.99 and a netmask of 24 on eth2 . A monitoring operation will be performed every 30 seconds. Alternatively, you can add a monitoring operation to an existing resource with the following command. Use the following command to delete a configured resource operation. Note You must specify the exact operation properties to properly remove an existing operation. To change the values of a monitoring option, you can update the resource. For example, you can create a VirtualIP with the following command. By default, this command creates these operations. To change the stop timeout operation, execute the following command. Note When you update a resource's operation with the pcs resource update command, any options you do not specifically call out are reset to their default values. 6.6.2. Configuring Global Resource Operation Defaults You can use the following command to set global default values for monitoring operations. For example, the following command sets a global default of a timeout value of 240 seconds for all monitoring operations. To display the currently configured default values for monitoring operations, do not specify any options when you execute the pcs resource op defaults command. For example, the following command displays the default monitoring operation values for a cluster which has been configured with a timeout value of 240 seconds. Note that a cluster resource will use the global default only when the option is not specified in the cluster resource definition. By default, resource agents define the timeout option for all operations. For the global operation timeout value to be honored, you must create the cluster resource without the timeout option explicitly or you must remove the timeout option by updating the cluster resource, as in the following command. For example, after setting a global default of a timeout value of 240 seconds for all monitoring operations and updating the cluster resource VirtualIP to remove the timeout value for the monitor operation, the resource VirtualIP will then have timeout values for start , stop , and monitor operations of 20s, 40s, and 240s, respectively. The global default value for timeout operations is applied here only on the monitor operation, where the default timeout option was removed by the command. | [
"pcs resource create resource_id standard:provider:type|type [ resource_options ] [op operation_action operation_options [ operation_type operation_options ]...]",
"pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.99 cidr_netmask=24 nic=eth2 op monitor interval=30s",
"pcs resource op add resource_id operation_action [ operation_properties ]",
"pcs resource op remove resource_id operation_name operation_properties",
"pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.99 cidr_netmask=24 nic=eth2",
"Operations: start interval=0s timeout=20s (VirtualIP-start-timeout-20s) stop interval=0s timeout=20s (VirtualIP-stop-timeout-20s) monitor interval=10s timeout=20s (VirtualIP-monitor-interval-10s)",
"pcs resource update VirtualIP op stop interval=0s timeout=40s pcs resource show VirtualIP Resource: VirtualIP (class=ocf provider=heartbeat type=IPaddr2) Attributes: ip=192.168.0.99 cidr_netmask=24 nic=eth2 Operations: start interval=0s timeout=20s (VirtualIP-start-timeout-20s) monitor interval=10s timeout=20s (VirtualIP-monitor-interval-10s) stop interval=0s timeout=40s (VirtualIP-name-stop-interval-0s-timeout-40s)",
"pcs resource op defaults [ options ]",
"pcs resource op defaults timeout=240s",
"pcs resource op defaults timeout: 240s",
"pcs resource update VirtualIP op monitor interval=10s",
"pcs resource show VirtualIP Resource: VirtualIP (class=ocf provider=heartbeat type=IPaddr2) Attributes: ip=192.168.0.99 cidr_netmask=24 nic=eth2 Operations: start interval=0s timeout=20s (VirtualIP-start-timeout-20s) monitor interval=10s (VirtualIP-monitor-interval-10s) stop interval=0s timeout=40s (VirtualIP-name-stop-interval-0s-timeout-40s)"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-resourceoperate-HAAR |
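As a worked example for Section 6.6.2, the following sketch sets the global timeout default and then re-creates the VirtualIP monitor operation without an explicit timeout so that the global default applies; the commands are taken from the examples in that section:

pcs resource op defaults timeout=240s                    # set the global default
pcs resource update VirtualIP op monitor interval=10s    # monitor operation without its own timeout
pcs resource show VirtualIP                              # verify that the monitor operation no longer lists a timeout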
10.5.24. AllowOverride | 10.5.24. AllowOverride The AllowOverride directive sets whether any Options can be overridden by the declarations in an .htaccess file. By default, both the root directory and the DocumentRoot are set to allow no .htaccess overrides. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-apache-allowoverride |
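As an illustrative sketch of the directive in use (the directory path and override type below are examples, not values from the text above), an administrator might permit .htaccess files to override authentication directives for a single directory tree:

# append a Directory block to the server configuration (path is an example)
cat >> /etc/httpd/conf/httpd.conf <<'EOF'
<Directory "/var/www/html">
    AllowOverride AuthConfig
</Directory>
EOF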
Metadata APIs | Metadata APIs OpenShift Container Platform 4.16 Reference guide for metadata APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/metadata_apis/index |
Chapter 5. Remote health monitoring | Chapter 5. Remote health monitoring OpenShift Data Foundation collects anonymized aggregated information about the health, usage, and size of clusters and reports it to Red Hat via an integrated component called Telemetry. This information allows Red Hat to improve OpenShift Data Foundation and to react to issues that impact customers more quickly. A cluster that reports data to Red Hat via Telemetry is considered a connected cluster . 5.1. About Telemetry Telemetry sends a carefully chosen subset of the cluster monitoring metrics to Red Hat. These metrics are sent continuously and describe: The size of an OpenShift Data Foundation cluster The health and status of OpenShift Data Foundation components The health and status of any upgrade being performed Limited usage information about OpenShift Data Foundation components and features Summary info about alerts reported by the cluster monitoring component This continuous stream of data is used by Red Hat to monitor the health of clusters in real time and to react as necessary to problems that impact our customers. It also allows Red Hat to roll out OpenShift Data Foundation upgrades to customers so as to minimize service impact and continuously improve the upgrade experience. This debugging information is available to Red Hat Support and engineering teams with the same restrictions as accessing data reported via support cases. All connected cluster information is used by Red Hat to help make OpenShift Data Foundation better and more intuitive to use. None of the information is shared with third parties. 5.2. Information collected by Telemetry Primary information collected by Telemetry includes: The size of the Ceph cluster in bytes : "ceph_cluster_total_bytes" , The amount of the Ceph cluster storage used in bytes : "ceph_cluster_total_used_raw_bytes" , Ceph cluster health status : "ceph_health_status" , The total count of object storage devices (OSDs) : "job:ceph_osd_metadata:count" , The total number of OpenShift Data Foundation Persistent Volumes (PVs) present in the Red Hat OpenShift Container Platform cluster : "job:kube_pv:count" , The total input/output operations per second (IOPS) (reads+writes) value for all the pools in the Ceph cluster : "job:ceph_pools_iops:total" , The total IOPS (reads+writes) value in bytes for all the pools in the Ceph cluster : "job:ceph_pools_iops_bytes:total" , The total count of the Ceph cluster versions running : "job:ceph_versions_running:count" The total number of unhealthy NooBaa buckets : "job:noobaa_total_unhealthy_buckets:sum" , The total number of NooBaa buckets : "job:noobaa_bucket_count:sum" , The total number of NooBaa objects : "job:noobaa_total_object_count:sum" , The count of NooBaa accounts : "noobaa_accounts_num" , The total usage of storage by NooBaa in bytes : "noobaa_total_usage" , The total amount of storage requested by the persistent volume claims (PVCs) from a particular storage provisioner in bytes: "cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum" , The total amount of storage used by the PVCs from a particular storage provisioner in bytes: "cluster:kubelet_volume_stats_used_bytes:provisioner:sum" . Telemetry does not collect identifying information such as user names, passwords, or the names or addresses of user resources. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/monitoring_openshift_data_foundation/remote_health_monitoring |
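If you want to inspect one of these metrics yourself on a connected cluster, one possible approach is to query the cluster monitoring stack directly. This is a sketch only; it assumes the standard thanos-querier route in the openshift-monitoring namespace and an active oc login:

TOKEN=$(oc whoami -t)                                                                   # token for the current session
HOST=$(oc get route thanos-querier -n openshift-monitoring -o jsonpath='{.spec.host}')  # monitoring query endpoint
curl -sk -H "Authorization: Bearer $TOKEN" "https://$HOST/api/v1/query?query=ceph_cluster_total_bytes"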
3.4. Configuring luci with /etc/sysconfig/luci | 3.4. Configuring luci with /etc/sysconfig/luci As of the Red Hat Enterprise Linux 6.1 release, you can configure some aspects of luci 's behavior by means of the /etc/sysconfig/luci file. The parameters you can change with this file include auxiliary settings of the running environment used by the init script as well as server configuration. In addition, you can edit this file to modify some application configuration parameters. There are instructions within the file itself describing which configuration parameters you can change by editing this file. In order to protect the intended format, you should not change the non-configuration lines of the /etc/sysconfig/luci file when you edit the file. Additionally, you should take care to follow the required syntax for this file, particularly for the INITSCRIPT section which does not allow for white spaces around the equal sign and requires that you use quotation marks to enclose strings containing white spaces. The following example shows how to change the port at which luci is being served by editing the /etc/sysconfig/luci file. Uncomment the following line in the /etc/sysconfig/luci file: Replace 4443 with the desired port number, which must be higher than or equal to 1024 (not a privileged port). For example, you can edit that line of the file as follows to set the port at which luci is being served to 8084 (commenting the line out again would have the same effect, as this is the default value). Restart the luci service for the changes to take effect. As of the Red Hat Enterprise Linux 6.6 release, you can implement a fine-grained control over the ciphers behind the secured connection between luci and the web browser with the ssl_cipher_list configuration parameter in /etc/sysconfig/luci . This parameter can be used to impose restrictions as expressed with OpenSSL cipher notation. Important When you modify a configuration parameter in the /etc/sysconfig/luci file to redefine a default value, you should take care to use the new value in place of the documented default value. For example, when you modify the port at which luci is being served, make sure that you specify the modified value when you enable an IP port for luci , as described in Section 3.3.2, "Enabling the IP Port for luci " . Modified port and host parameters will automatically be reflected in the URL displayed when the luci service starts, as described in Section 4.2, "Starting luci " . You should use this URL to access luci . For more complete information on the parameters you can configure with the /etc/sysconfig/luci file, refer to the documentation within the file itself. | [
"#port = 4443",
"port = 8084"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-sysconfigluci-CA |
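Following the same editing pattern as the port example above, a restrictive cipher configuration might look like the following sketch; the cipher string is an example of OpenSSL notation, not a value taken from the original text:

# in /etc/sysconfig/luci, set an OpenSSL cipher string, for example:
#   ssl_cipher_list = 'HIGH:!aNULL:!MD5'
# then restart luci so the change takes effect:
service luci restart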
Managing hybrid and multicloud resources | Managing hybrid and multicloud resources Red Hat OpenShift Data Foundation 4.9 Instructions for how to manage storage resources across a hybrid cloud or multicloud environment using the Multicloud Object Gateway (NooBaa). Red Hat Storage Documentation Team Abstract This document explains how to manage storage resources across a hybrid cloud or multicloud environment. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/managing_hybrid_and_multicloud_resources/index |
Chapter 15. Prometheus metrics monitoring in Red Hat Process Automation Manager | Chapter 15. Prometheus metrics monitoring in Red Hat Process Automation Manager Prometheus is an open-source systems monitoring toolkit that you can use with Red Hat Process Automation Manager to collect and store metrics related to the execution of business rules, processes, Decision Model and Notation (DMN) models, and other Red Hat Process Automation Manager assets. You can access the stored metrics through a REST API call to the KIE Server, through the Prometheus expression browser, or using a data-graphing tool such as Grafana. You can configure Prometheus metrics monitoring for an on-premise KIE Server instance, for KIE Server on Spring Boot, or for a KIE Server deployment on Red Hat OpenShift Container Platform. For the list of available metrics that KIE Server exposes with Prometheus, download the Red Hat Process Automation Manager 7.13.5 Source Distribution from the Red Hat Customer Portal and navigate to ~/rhpam-7.13.5-sources/src/droolsjbpm-integration-USDVERSION/kie-server-parent/kie-server-services/kie-server-services-prometheus/src/main/java/org/kie/server/services/prometheus . Important Red Hat support for Prometheus is limited to the setup and configuration recommendations provided in Red Hat product documentation. 15.1. Configuring Prometheus metrics monitoring for KIE Server You can configure your KIE Server instances to use Prometheus to collect and store metrics related to your business asset activity in Red Hat Process Automation Manager. For the list of available metrics that KIE Server exposes with Prometheus, download the Red Hat Process Automation Manager 7.13.5 Source Distribution from the Red Hat Customer Portal and navigate to ~/rhpam-7.13.5-sources/src/droolsjbpm-integration-USDVERSION/kie-server-parent/kie-server-services/kie-server-services-prometheus/src/main/java/org/kie/server/services/prometheus . Prerequisites KIE Server is installed. You have kie-server user role access to KIE Server. Prometheus is installed. For information about downloading and using Prometheus, see the Prometheus documentation page . Procedure In your KIE Server instance, set the org.kie.prometheus.server.ext.disabled system property to false to enable the Prometheus extension. You can define this property when you start KIE Server or in the standalone.xml or standalone-full.xml file of the Red Hat Process Automation Manager distribution. If you are running Red Hat Process Automation Manager on Spring Boot, configure the required key in the application.properties file: Spring Boot application.properties key for Red Hat Process Automation Manager and Prometheus kieserver.jbpm.enabled=true kieserver.drools.enabled=true kieserver.dmn.enabled=true kieserver.prometheus.enabled=true In the prometheus.yaml file of your Prometheus distribution, add the following settings in the scrape_configs section to configure Prometheus to scrape metrics from KIE Server: Scrape configurations in prometheus.yaml file scrape_configs: - job_name: 'kie-server' metrics_path: /SERVER_PATH/services/rest/metrics basic_auth: username: USER_NAME password: PASSWORD static_configs: - targets: ["HOST:PORT"] Scrape configurations in prometheus.yaml file for Spring Boot (if applicable) scrape_configs: - job_name: 'kie' metrics_path: /rest/metrics static_configs: - targets: ["HOST:PORT"] Replace the values according to your KIE Server location and settings. Start the KIE Server instance.
Example start command for Red Hat Process Automation Manager on Red Hat JBoss EAP After you start the configured KIE Server instance, Prometheus begins collecting metrics and KIE Server publishes the metrics to the REST API endpoint http://HOST:PORT/SERVER/services/rest/metrics (or on Spring Boot, to http://HOST:PORT/rest/metrics ). In a REST client or curl utility, send a REST API request with the following components to verify that KIE Server is publishing the metrics: For REST client: Authentication : Enter the user name and password of the KIE Server user with the kie-server role. HTTP Headers : Set the following header: Accept : application/json HTTP method : Set to GET . URL : Enter the KIE Server REST API base URL and metrics endpoint, such as http://localhost:8080/kie-server/services/rest/metrics (or on Spring Boot, http://localhost:8080/rest/metrics ). For curl utility: -u : Enter the user name and password of the KIE Server user with the kie-server role. -H : Set the following header: accept : application/json -X : Set to GET . URL : Enter the KIE Server REST API base URL and metrics endpoint, such as http://localhost:8080/kie-server/services/rest/metrics (or on Spring Boot, http://localhost:8080/rest/metrics ). Example curl command for Red Hat Process Automation Manager on Red Hat JBoss EAP Example curl command for Red Hat Process Automation Manager on Spring Boot Example server response If the metrics are not available in KIE Server, review and verify the KIE Server and Prometheus configurations described in this section. You can also interact with your collected metrics in the Prometheus expression browser at http://HOST:PORT/graph , or integrate your Prometheus data source with a data-graphing tool such as Grafana: Figure 15.1. Prometheus expression browser with KIE Server metrics Figure 15.2. Prometheus expression browser with KIE Server target Figure 15.3. Grafana dashboard with KIE Server metrics for DMN models Figure 15.4. Grafana dashboard with KIE Server metrics for solvers Figure 15.5. Grafana dashboard with KIE Server metrics for processes, cases, and tasks Additional resources Getting Started with Prometheus Grafana Support for Prometheus Using Prometheus in Grafana 15.2. Configuring Prometheus metrics monitoring for KIE Server on Red Hat OpenShift Container Platform You can configure your KIE Server deployment on Red Hat OpenShift Container Platform to use Prometheus to collect and store metrics related to your business asset activity in Red Hat Process Automation Manager. For the list of available metrics that KIE Server exposes with Prometheus, download the Red Hat Process Automation Manager 7.13.5 Source Distribution from the Red Hat Customer Portal and navigate to ~/rhpam-7.13.5-sources/src/droolsjbpm-integration-USDVERSION/kie-server-parent/kie-server-services/kie-server-services-prometheus/src/main/java/org/kie/server/services/prometheus . Prerequisites KIE Server is installed and deployed on Red Hat OpenShift Container Platform. For more information about KIE Server on OpenShift, see the relevant OpenShift deployment option in the Product documentation for Red Hat Process Automation Manager 7.13 . You have kie-server user role access to KIE Server. Prometheus Operator is installed. For information about downloading and using Prometheus Operator, see the Prometheus Operator project in GitHub. 
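Before you begin the procedure, you can confirm that a Prometheus Operator pod is running in your cluster. This is a quick, non-authoritative check; the grep pattern assumes the default deployment name:

oc get pods --all-namespaces | grep prometheus-operator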
Procedure In the DeploymentConfig object of your KIE Server deployment on OpenShift, set the PROMETHEUS_SERVER_EXT_DISABLED environment variable to false to enable the Prometheus extension. You can set this variable in the OpenShift web console or use the oc command in a command terminal: If you have not yet deployed your KIE Server on OpenShift, then in the OpenShift template that you plan to use for your OpenShift deployment (for example, rhpam713-prod-immutable-kieserver.yaml ), you can set the PROMETHEUS_SERVER_EXT_DISABLED template parameter to false to enable the Prometheus extension. If you are using the OpenShift Operator to deploy KIE Server on OpenShift, then in your KIE Server configuration, set the PROMETHEUS_SERVER_EXT_DISABLED environment variable to false to enable the Prometheus extension: apiVersion: app.kiegroup.org/v1 kind: KieApp metadata: name: enable-prometheus spec: environment: rhpam-trial objects: servers: - env: - name: PROMETHEUS_SERVER_EXT_DISABLED value: "false" Create a service-metrics.yaml file to add a service that exposes the metrics from KIE Server to Prometheus: apiVersion: v1 kind: Service metadata: annotations: description: RHPAM Prometheus metrics exposed labels: app: myapp-kieserver application: myapp-kieserver template: myapp-kieserver metrics: rhpam name: rhpam-app-metrics spec: ports: - name: web port: 8080 protocol: TCP targetPort: 8080 selector: deploymentConfig: myapp-kieserver sessionAffinity: None type: ClusterIP In a command terminal, use the oc command to apply the service-metrics.yaml file to your OpenShift deployment: oc apply -f service-metrics.yaml Create an OpenShift secret, such as metrics-secret , to access the Prometheus metrics on KIE Server. The secret must contain the "username" and "password" elements with KIE Server user credentials. For information about OpenShift secrets, see the Secrets chapter in the OpenShift Developer Guide . Create a service-monitor.yaml file that defines the ServiceMonitor object. A service monitor enables Prometheus to connect to the KIE Server metrics service. apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: rhpam-service-monitor labels: team: frontend spec: selector: matchLabels: metrics: rhpam endpoints: - port: web path: /services/rest/metrics basicAuth: password: name: metrics-secret key: password username: name: metrics-secret key: username In a command terminal, use the oc command to apply the service-monitor.yaml file to your OpenShift deployment: oc apply -f service-monitor.yaml After you complete these configurations, Prometheus begins collecting metrics and KIE Server publishes the metrics to the REST API endpoint http://HOST:PORT/kie-server/services/rest/metrics . You can interact with your collected metrics in the Prometheus expression browser at http://HOST:PORT/graph , or integrate your Prometheus data source with a data-graphing tool such as Grafana. The host and port for the Prometheus expression browser location http://HOST:PORT/graph was defined in the route where you exposed the Prometheus web console when you installed the Prometheus Operator. For information about OpenShift routes, see the Routes chapter in the OpenShift Architecture documentation. Figure 15.6. Prometheus expression browser with KIE Server metrics Figure 15.7. Prometheus expression browser with KIE Server target Figure 15.8. Grafana dashboard with KIE Server metrics for DMN models Figure 15.9. Grafana dashboard with KIE Server metrics for solvers Figure 15.10. 
Grafana dashboard with KIE Server metrics for processes, cases, and tasks Additional resources Prometheus Operator Getting started with the Prometheus Operator Prometheus RBAC Grafana Support for Prometheus Using Prometheus in Grafana OpenShift deployment options in Product documentation for Red Hat Process Automation Manager 7.13 15.3. Extending Prometheus metrics monitoring in KIE Server with custom metrics After you configure your KIE Server instance to use Prometheus metrics monitoring, you can extend the Prometheus functionality in KIE Server to use custom metrics according to your business needs. Prometheus then collects and stores your custom metrics along with the default metrics that KIE Server exposes with Prometheus. As an example, this procedure defines custom Decision Model and Notation (DMN) metrics to be collected and stored by Prometheus. Prerequisites Prometheus metrics monitoring is configured for your KIE Server instance. For information about Prometheus configuration with KIE Server on-premise, see Section 15.1, "Configuring Prometheus metrics monitoring for KIE Server" . For information about Prometheus configuration with KIE Server on Red Hat OpenShift Container Platform, see Section 15.2, "Configuring Prometheus metrics monitoring for KIE Server on Red Hat OpenShift Container Platform" . Procedure Create an empty Maven project and define the following packaging type and dependencies in the pom.xml file for the project: Example pom.xml file in the sample project <packaging>jar</packaging> <properties> <version.org.kie>7.67.0.Final-redhat-00024</version.org.kie> </properties> <dependencies> <dependency> <groupId>org.kie</groupId> <artifactId>kie-api</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-api</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-services-common</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-services-drools</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-services-prometheus</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie</groupId> <artifactId>kie-dmn-api</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie</groupId> <artifactId>kie-dmn-core</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-services-api</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-executor</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.optaplanner</groupId> <artifactId>optaplanner-core</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>io.prometheus</groupId> <artifactId>simpleclient</artifactId> <version>0.5.0</version> </dependency> </dependencies> Implement the relevant listener from the org.kie.server.services.prometheus.PrometheusMetricsProvider interface as part of the custom listener class that defines your custom Prometheus metrics, as shown in the following example: Sample implementation of the DMNRuntimeEventListener listener in a custom listener class package org.kie.server.ext.prometheus; import 
io.prometheus.client.Gauge; import org.kie.dmn.api.core.ast.DecisionNode; import org.kie.dmn.api.core.event.AfterEvaluateBKMEvent; import org.kie.dmn.api.core.event.AfterEvaluateContextEntryEvent; import org.kie.dmn.api.core.event.AfterEvaluateDecisionEvent; import org.kie.dmn.api.core.event.AfterEvaluateDecisionServiceEvent; import org.kie.dmn.api.core.event.AfterEvaluateDecisionTableEvent; import org.kie.dmn.api.core.event.BeforeEvaluateBKMEvent; import org.kie.dmn.api.core.event.BeforeEvaluateContextEntryEvent; import org.kie.dmn.api.core.event.BeforeEvaluateDecisionEvent; import org.kie.dmn.api.core.event.BeforeEvaluateDecisionServiceEvent; import org.kie.dmn.api.core.event.BeforeEvaluateDecisionTableEvent; import org.kie.dmn.api.core.event.DMNRuntimeEventListener; import org.kie.server.api.model.ReleaseId; import org.kie.server.services.api.KieContainerInstance; public class ExampleCustomPrometheusMetricListener implements DMNRuntimeEventListener { private final KieContainerInstance kieContainer; private final Gauge randomGauge = Gauge.build() .name("random_gauge_nanosecond") .help("Random gauge as an example of custom KIE Prometheus metric") .labelNames("container_id", "group_id", "artifact_id", "version", "decision_namespace", "decision_name") .register(); public ExampleCustomPrometheusMetricListener(KieContainerInstance containerInstance) { kieContainer = containerInstance; } public void beforeEvaluateDecision(BeforeEvaluateDecisionEvent e) { } public void afterEvaluateDecision(AfterEvaluateDecisionEvent e) { DecisionNode decisionNode = e.getDecision(); ReleaseId releaseId = kieContainer.getResource().getReleaseId(); randomGauge.labels(kieContainer.getContainerId(), releaseId.getGroupId(), releaseId.getArtifactId(), releaseId.getVersion(), decisionNode.getModelName(), decisionNode.getModelNamespace()) .set((int) (Math.random() * 100)); } public void beforeEvaluateBKM(BeforeEvaluateBKMEvent event) { } public void afterEvaluateBKM(AfterEvaluateBKMEvent event) { } public void beforeEvaluateContextEntry(BeforeEvaluateContextEntryEvent event) { } public void afterEvaluateContextEntry(AfterEvaluateContextEntryEvent event) { } public void beforeEvaluateDecisionTable(BeforeEvaluateDecisionTableEvent event) { } public void afterEvaluateDecisionTable(AfterEvaluateDecisionTableEvent event) { } public void beforeEvaluateDecisionService(BeforeEvaluateDecisionServiceEvent event) { } public void afterEvaluateDecisionService(AfterEvaluateDecisionServiceEvent event) { } } The PrometheusMetricsProvider interface contains the required listeners for collecting Prometheus metrics. The interface is incorporated by the kie-server-services-prometheus dependency that you declared in your project pom.xml file. In this example, the ExampleCustomPrometheusMetricListener class implements the DMNRuntimeEventListener listener (from the PrometheusMetricsProvider interface) and defines the custom DMN metrics to be collected and stored by Prometheus. 
Implement the PrometheusMetricsProvider interface as part of a custom metrics provider class that associates your custom listener with the PrometheusMetricsProvider interface, as shown in the following example: Sample implementation of the PrometheusMetricsProvider interface in a custom metrics provider class package org.kie.server.ext.prometheus; import org.jbpm.executor.AsynchronousJobListener; import org.jbpm.services.api.DeploymentEventListener; import org.kie.api.event.rule.AgendaEventListener; import org.kie.api.event.rule.DefaultAgendaEventListener; import org.kie.dmn.api.core.event.DMNRuntimeEventListener; import org.kie.server.services.api.KieContainerInstance; import org.kie.server.services.prometheus.PrometheusMetricsProvider; import org.optaplanner.core.impl.phase.event.PhaseLifecycleListener; import org.optaplanner.core.impl.phase.event.PhaseLifecycleListenerAdapter; public class MyPrometheusMetricsProvider implements PrometheusMetricsProvider { public DMNRuntimeEventListener createDMNRuntimeEventListener(KieContainerInstance kContainer) { return new ExampleCustomPrometheusMetricListener(kContainer); } public AgendaEventListener createAgendaEventListener(String kieSessionId, KieContainerInstance kContainer) { return new DefaultAgendaEventListener(); } public PhaseLifecycleListener createPhaseLifecycleListener(String solverId) { return new PhaseLifecycleListenerAdapter() { }; } public AsynchronousJobListener createAsynchronousJobListener() { return null; } public DeploymentEventListener createDeploymentEventListener() { return null; } } In this example, the MyPrometheusMetricsProvider class implements the PrometheusMetricsProvider interface and includes your custom ExampleCustomPrometheusMetricListener listener class. To make the new metrics provider discoverable for KIE Server, create a META-INF/services/org.kie.server.services.prometheus.PrometheusMetricsProvider file in your Maven project and add the fully qualified class name of the PrometheusMetricsProvider implementation class within the file. For this example, the file contains the single line org.kie.server.ext.prometheus.MyPrometheusMetricsProvider . Build your project and copy the resulting JAR file into the ~/kie-server.war/WEB-INF/lib directory of your project. For example, on Red Hat JBoss EAP, the path to this directory is EAP_HOME /standalone/deployments/kie-server.war/WEB-INF/lib . If you are deploying Red Hat Process Automation Manager on Red Hat OpenShift Container Platform, create a custom KIE Server image and add this JAR file to the image. For more information about creating a custom KIE Server image with an additional JAR file, see Deploying an Red Hat Process Automation Manager environment on Red Hat OpenShift Container Platform 4 using Operators . Start the KIE Server and deploy the built project to the running KIE Server. You can deploy the project using the Business Central interface or the KIE Server REST API (a PUT request to http://SERVER:PORT/kie-server/services/rest/server/containers/{containerId} ). After your project is deployed on a running KIE Server, Prometheus begins collecting metrics and KIE Server publishes the metrics to the REST API endpoint http://HOST:PORT/SERVER/services/rest/metrics (or on Spring Boot, to http://HOST:PORT/rest/metrics ). | [
"kieserver.jbpm.enabled=true kieserver.drools.enabled=true kieserver.dmn.enabled=true kieserver.prometheus.enabled=true",
"scrape_configs: - job_name: 'kie-server' metrics_path: /SERVER_PATH/services/rest/metrics basicAuth: username: USER_NAME password: PASSWORD static_configs: - targets: [\"HOST:PORT\"]",
"scrape_configs: - job_name: 'kie' metrics_path: /rest/metrics static_configs: - targets: [\"HOST:PORT\"]",
"cd ~/EAP_HOME/bin ./standalone.sh --c standalone-full.xml",
"curl -u 'baAdmin:password@1' -X GET \"http://localhost:8080/kie-server/services/rest/metrics\"",
"curl -u 'baAdmin:password@1' -X GET \"http://localhost:8080/rest/metrics\"",
"HELP kie_server_container_started_total Kie Server Started Containers TYPE kie_server_container_started_total counter kie_server_container_started_total{container_id=\"task-assignment-kjar-1.0\",} 1.0 HELP solvers_running Number of solvers currently running TYPE solvers_running gauge solvers_running 0.0 HELP dmn_evaluate_decision_nanosecond DMN Evaluation Time TYPE dmn_evaluate_decision_nanosecond histogram HELP solver_duration_seconds Time in seconds it took solver to solve the constraint problem TYPE solver_duration_seconds summary solver_duration_seconds_count{solver_id=\"100tasks-5employees.xml\",} 1.0 solver_duration_seconds_sum{solver_id=\"100tasks-5employees.xml\",} 179.828255925 solver_duration_seconds_count{solver_id=\"24tasks-8employees.xml\",} 1.0 solver_duration_seconds_sum{solver_id=\"24tasks-8employees.xml\",} 179.995759653 HELP drl_match_fired_nanosecond Drools Firing Time TYPE drl_match_fired_nanosecond histogram HELP dmn_evaluate_failed_count DMN Evaluation Failed TYPE dmn_evaluate_failed_count counter HELP kie_server_start_time Kie Server Start Time TYPE kie_server_start_time gauge kie_server_start_time{name=\"myapp-kieserver\",server_id=\"myapp-kieserver\",location=\"http://myapp-kieserver-demo-monitoring.127.0.0.1.nip.io:80/services/rest/server\",version=\"7.4.0.redhat-20190428\",} 1.557221271502E12 HELP kie_server_container_running_total Kie Server Running Containers TYPE kie_server_container_running_total gauge kie_server_container_running_total{container_id=\"task-assignment-kjar-1.0\",} 1.0 HELP solver_score_calculation_speed Number of moves per second for a particular solver solving the constraint problem TYPE solver_score_calculation_speed summary solver_score_calculation_speed_count{solver_id=\"100tasks-5employees.xml\",} 1.0 solver_score_calculation_speed_sum{solver_id=\"100tasks-5employees.xml\",} 6997.0 solver_score_calculation_speed_count{solver_id=\"24tasks-8employees.xml\",} 1.0 solver_score_calculation_speed_sum{solver_id=\"24tasks-8employees.xml\",} 19772.0 HELP kie_server_case_started_total Kie Server Started Cases TYPE kie_server_case_started_total counter kie_server_case_started_total{case_definition_id=\"itorders.orderhardware\",} 1.0 HELP kie_server_case_running_total Kie Server Running Cases TYPE kie_server_case_running_total gauge kie_server_case_running_total{case_definition_id=\"itorders.orderhardware\",} 2.0 HELP kie_server_data_set_registered_total Kie Server Data Set Registered TYPE kie_server_data_set_registered_total gauge kie_server_data_set_registered_total{name=\"jbpmProcessInstanceLogs::CUSTOM\",uuid=\"jbpmProcessInstanceLogs\",} 1.0 kie_server_data_set_registered_total{name=\"jbpmRequestList::CUSTOM\",uuid=\"jbpmRequestList\",} 1.0 kie_server_data_set_registered_total{name=\"tasksMonitoring::CUSTOM\",uuid=\"tasksMonitoring\",} 1.0 kie_server_data_set_registered_total{name=\"jbpmHumanTasks::CUSTOM\",uuid=\"jbpmHumanTasks\",} 1.0 kie_server_data_set_registered_total{name=\"jbpmHumanTasksWithUser::FILTERED_PO_TASK\",uuid=\"jbpmHumanTasksWithUser\",} 1.0 kie_server_data_set_registered_total{name=\"jbpmHumanTasksWithVariables::CUSTOM\",uuid=\"jbpmHumanTasksWithVariables\",} 1.0 kie_server_data_set_registered_total{name=\"jbpmProcessInstancesWithVariables::CUSTOM\",uuid=\"jbpmProcessInstancesWithVariables\",} 1.0 kie_server_data_set_registered_total{name=\"jbpmProcessInstances::CUSTOM\",uuid=\"jbpmProcessInstances\",} 1.0 kie_server_data_set_registered_total{name=\"jbpmExecutionErrorList::CUSTOM\",uuid=\"jbpmExecutionErrorList\",} 1.0 
kie_server_data_set_registered_total{name=\"processesMonitoring::CUSTOM\",uuid=\"processesMonitoring\",} 1.0 kie_server_data_set_registered_total{name=\"jbpmHumanTasksWithAdmin::FILTERED_BA_TASK\",uuid=\"jbpmHumanTasksWithAdmin\",} 1.0 HELP kie_server_execution_error_total Kie Server Execution Errors TYPE kie_server_execution_error_total counter HELP kie_server_task_completed_total Kie Server Completed Tasks TYPE kie_server_task_completed_total counter HELP kie_server_container_running_total Kie Server Running Containers TYPE kie_server_container_running_total gauge kie_server_container_running_total{container_id=\"itorders_1.0.0-SNAPSHOT\",} 1.0 HELP kie_server_job_cancelled_total Kie Server Cancelled Jobs TYPE kie_server_job_cancelled_total counter HELP kie_server_process_instance_started_total Kie Server Started Process Instances TYPE kie_server_process_instance_started_total counter kie_server_process_instance_started_total{container_id=\"itorders_1.0.0-SNAPSHOT\",process_id=\"itorders.orderhardware\",} 1.0 HELP solver_duration_seconds Time in seconds it took solver to solve the constraint problem TYPE solver_duration_seconds summary HELP kie_server_task_skipped_total Kie Server Skipped Tasks TYPE kie_server_task_skipped_total counter HELP kie_server_data_set_execution_time_seconds Kie Server Data Set Execution Time TYPE kie_server_data_set_execution_time_seconds summary kie_server_data_set_execution_time_seconds_count{uuid=\"jbpmProcessInstances\",} 8.0 kie_server_data_set_execution_time_seconds_sum{uuid=\"jbpmProcessInstances\",} 0.05600000000000001 HELP kie_server_job_scheduled_total Kie Server Started Jobs TYPE kie_server_job_scheduled_total counter HELP kie_server_data_set_execution_total Kie Server Data Set Execution TYPE kie_server_data_set_execution_total counter kie_server_data_set_execution_total{uuid=\"jbpmProcessInstances\",} 8.0 HELP kie_server_process_instance_completed_total Kie Server Completed Process Instances TYPE kie_server_process_instance_completed_total counter HELP kie_server_job_running_total Kie Server Running Jobs TYPE kie_server_job_running_total gauge HELP kie_server_task_failed_total Kie Server Failed Tasks TYPE kie_server_task_failed_total counter HELP kie_server_task_exited_total Kie Server Exited Tasks TYPE kie_server_task_exited_total counter HELP dmn_evaluate_decision_nanosecond DMN Evaluation Time TYPE dmn_evaluate_decision_nanosecond histogram HELP kie_server_data_set_lookups_total Kie Server Data Set Running Lookups TYPE kie_server_data_set_lookups_total gauge kie_server_data_set_lookups_total{uuid=\"jbpmProcessInstances\",} 0.0 HELP kie_server_process_instance_duration_seconds Kie Server Process Instances Duration TYPE kie_server_process_instance_duration_seconds summary HELP kie_server_case_duration_seconds Kie Server Case Duration TYPE kie_server_case_duration_seconds summary HELP dmn_evaluate_failed_count DMN Evaluation Failed TYPE dmn_evaluate_failed_count counter HELP kie_server_task_added_total Kie Server Added Tasks TYPE kie_server_task_added_total counter kie_server_task_added_total{deployment_id=\"itorders_1.0.0-SNAPSHOT\",process_id=\"itorders.orderhardware\",task_name=\"Prepare hardware spec\",} 1.0 HELP drl_match_fired_nanosecond Drools Firing Time TYPE drl_match_fired_nanosecond histogram HELP kie_server_container_started_total Kie Server Started Containers TYPE kie_server_container_started_total counter kie_server_container_started_total{container_id=\"itorders_1.0.0-SNAPSHOT\",} 1.0 HELP 
kie_server_process_instance_sla_violated_total Kie Server Process Instances SLA Violated TYPE kie_server_process_instance_sla_violated_total counter HELP kie_server_task_duration_seconds Kie Server Task Duration TYPE kie_server_task_duration_seconds summary HELP kie_server_job_executed_total Kie Server Executed Jobs TYPE kie_server_job_executed_total counter HELP kie_server_deployments_active_total Kie Server Active Deployments TYPE kie_server_deployments_active_total gauge kie_server_deployments_active_total{deployment_id=\"itorders_1.0.0-SNAPSHOT\",} 1.0 HELP kie_server_process_instance_running_total Kie Server Running Process Instances TYPE kie_server_process_instance_running_total gauge kie_server_process_instance_running_total{container_id=\"itorders_1.0.0-SNAPSHOT\",process_id=\"itorders.orderhardware\",} 2.0 HELP solvers_running Number of solvers currently running TYPE solvers_running gauge solvers_running 0.0 HELP kie_server_work_item_duration_seconds Kie Server Work Items Duration TYPE kie_server_work_item_duration_seconds summary HELP kie_server_job_duration_seconds Kie Server Job Duration TYPE kie_server_job_duration_seconds summary HELP solver_score_calculation_speed Number of moves per second for a particular solver solving the constraint problem TYPE solver_score_calculation_speed summary HELP kie_server_start_time Kie Server Start Time TYPE kie_server_start_time gauge kie_server_start_time{name=\"sample-server\",server_id=\"sample-server\",location=\"http://localhost:8080/kie-server/services/rest/server\",version=\"7.68.0-SNAPSHOT\",} 1.557285486469E12",
"set env dc/<dc_name> PROMETHEUS_SERVER_EXT_DISABLED=false -n <namespace>",
"apiVersion: app.kiegroup.org/v1 kind: KieApp metadata: name: enable-prometheus spec: environment: rhpam-trial objects: servers: - env: - name: PROMETHEUS_SERVER_EXT_DISABLED value: \"false\"",
"apiVersion: v1 kind: Service metadata: annotations: description: RHPAM Prometheus metrics exposed labels: app: myapp-kieserver application: myapp-kieserver template: myapp-kieserver metrics: rhpam name: rhpam-app-metrics spec: ports: - name: web port: 8080 protocol: TCP targetPort: 8080 selector: deploymentConfig: myapp-kieserver sessionAffinity: None type: ClusterIP",
"apply -f service-metrics.yaml",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: rhpam-service-monitor labels: team: frontend spec: selector: matchLabels: metrics: rhpam endpoints: - port: web path: /services/rest/metrics basicAuth: password: name: metrics-secret key: password username: name: metrics-secret key: username",
"apply -f service-monitor.yaml",
"<packaging>jar</packaging> <properties> <version.org.kie>7.67.0.Final-redhat-00024</version.org.kie> </properties> <dependencies> <dependency> <groupId>org.kie</groupId> <artifactId>kie-api</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-api</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-services-common</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-services-drools</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-services-prometheus</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie</groupId> <artifactId>kie-dmn-api</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.kie</groupId> <artifactId>kie-dmn-core</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-services-api</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-executor</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>org.optaplanner</groupId> <artifactId>optaplanner-core</artifactId> <version>USD{version.org.kie}</version> </dependency> <dependency> <groupId>io.prometheus</groupId> <artifactId>simpleclient</artifactId> <version>0.5.0</version> </dependency> </dependencies>",
"package org.kie.server.ext.prometheus; import io.prometheus.client.Gauge; import org.kie.dmn.api.core.ast.DecisionNode; import org.kie.dmn.api.core.event.AfterEvaluateBKMEvent; import org.kie.dmn.api.core.event.AfterEvaluateContextEntryEvent; import org.kie.dmn.api.core.event.AfterEvaluateDecisionEvent; import org.kie.dmn.api.core.event.AfterEvaluateDecisionServiceEvent; import org.kie.dmn.api.core.event.AfterEvaluateDecisionTableEvent; import org.kie.dmn.api.core.event.BeforeEvaluateBKMEvent; import org.kie.dmn.api.core.event.BeforeEvaluateContextEntryEvent; import org.kie.dmn.api.core.event.BeforeEvaluateDecisionEvent; import org.kie.dmn.api.core.event.BeforeEvaluateDecisionServiceEvent; import org.kie.dmn.api.core.event.BeforeEvaluateDecisionTableEvent; import org.kie.dmn.api.core.event.DMNRuntimeEventListener; import org.kie.server.api.model.ReleaseId; import org.kie.server.services.api.KieContainerInstance; public class ExampleCustomPrometheusMetricListener implements DMNRuntimeEventListener { private final KieContainerInstance kieContainer; private final Gauge randomGauge = Gauge.build() .name(\"random_gauge_nanosecond\") .help(\"Random gauge as an example of custom KIE Prometheus metric\") .labelNames(\"container_id\", \"group_id\", \"artifact_id\", \"version\", \"decision_namespace\", \"decision_name\") .register(); public ExampleCustomPrometheusMetricListener(KieContainerInstance containerInstance) { kieContainer = containerInstance; } public void beforeEvaluateDecision(BeforeEvaluateDecisionEvent e) { } public void afterEvaluateDecision(AfterEvaluateDecisionEvent e) { DecisionNode decisionNode = e.getDecision(); ReleaseId releaseId = kieContainer.getResource().getReleaseId(); randomGauge.labels(kieContainer.getContainerId(), releaseId.getGroupId(), releaseId.getArtifactId(), releaseId.getVersion(), decisionNode.getModelName(), decisionNode.getModelNamespace()) .set((int) (Math.random() * 100)); } public void beforeEvaluateBKM(BeforeEvaluateBKMEvent event) { } public void afterEvaluateBKM(AfterEvaluateBKMEvent event) { } public void beforeEvaluateContextEntry(BeforeEvaluateContextEntryEvent event) { } public void afterEvaluateContextEntry(AfterEvaluateContextEntryEvent event) { } public void beforeEvaluateDecisionTable(BeforeEvaluateDecisionTableEvent event) { } public void afterEvaluateDecisionTable(AfterEvaluateDecisionTableEvent event) { } public void beforeEvaluateDecisionService(BeforeEvaluateDecisionServiceEvent event) { } public void afterEvaluateDecisionService(AfterEvaluateDecisionServiceEvent event) { } }",
"package org.kie.server.ext.prometheus; import org.jbpm.executor.AsynchronousJobListener; import org.jbpm.services.api.DeploymentEventListener; import org.kie.api.event.rule.AgendaEventListener; import org.kie.api.event.rule.DefaultAgendaEventListener; import org.kie.dmn.api.core.event.DMNRuntimeEventListener; import org.kie.server.services.api.KieContainerInstance; import org.kie.server.services.prometheus.PrometheusMetricsProvider; import org.optaplanner.core.impl.phase.event.PhaseLifecycleListener; import org.optaplanner.core.impl.phase.event.PhaseLifecycleListenerAdapter; public class MyPrometheusMetricsProvider implements PrometheusMetricsProvider { public DMNRuntimeEventListener createDMNRuntimeEventListener(KieContainerInstance kContainer) { return new ExampleCustomPrometheusMetricListener(kContainer); } public AgendaEventListener createAgendaEventListener(String kieSessionId, KieContainerInstance kContainer) { return new DefaultAgendaEventListener(); } public PhaseLifecycleListener createPhaseLifecycleListener(String solverId) { return new PhaseLifecycleListenerAdapter() { }; } public AsynchronousJobListener createAsynchronousJobListener() { return null; } public DeploymentEventListener createDeploymentEventListener() { return null; } }"
] | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/managing_red_hat_process_automation_manager_and_kie_server_settings/prometheus-monitoring-con_execution-server |
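As a quick verification that the custom metric is being published, you can query the KIE Server metrics endpoint and filter for the gauge registered by the example listener. The following check is illustrative only and is not part of the original procedure; it reuses the placeholder credentials and host from the earlier curl examples, which you must replace with your own values:
    # Filter the metrics endpoint for the example custom gauge.
    curl -u 'baAdmin:password@1' -s "http://localhost:8080/kie-server/services/rest/metrics" | grep random_gauge_nanosecond
    # Expected: one or more random_gauge_nanosecond{...} sample lines once a
    # deployed DMN decision has been evaluated at least once.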
Release notes | Release notes OpenShift Container Platform 4.7 Highlights of what is new and what has changed with this OpenShift Container Platform release Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/release_notes/index |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/5.1_release_notes/making-open-source-more-inclusive |
Chapter 8. ServiceNow Custom actions in Red Hat Developer Hub | Chapter 8. ServiceNow Custom actions in Red Hat Developer Hub Important These features are for Technology Preview only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information on Red Hat Technology Preview features, see Technology Preview Features Scope . In Red Hat Developer Hub, you can access ServiceNow custom actions (custom actions) for fetching and registering resources in the catalog. The custom actions in Developer Hub enable you to facilitate and automate the management of records. Using the custom actions, you can perform the following actions: Create, update, or delete a record Retrieve information about a single record or multiple records 8.1. Enabling ServiceNow custom actions plugin in Red Hat Developer Hub In Red Hat Developer Hub, the ServiceNow custom actions are provided as a pre-loaded plugin, which is disabled by default. You can enable the custom actions plugin using the following procedure. Prerequisites Red Hat Developer Hub is installed and running. For more information about installing the Developer Hub, see Installing Red Hat Developer Hub on OpenShift Container Platform with the Helm chart . You have created a project in the Developer Hub. Procedure To activate the custom actions plugin, add a package with the plugin name and update the disabled field in your Helm Chart as follows: global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: ./dynamic-plugins/dist/janus-idp-backstage-scaffolder-backend-module-servicenow-dynamic disabled: false Note The default configuration for a plugin is extracted from the dynamic-plugins.default.yaml file; however, you can use a pluginConfig entry to override the default configuration. Set the following variables in the Helm Chart to access the custom actions: servicenow: # The base url of the ServiceNow instance. baseUrl: ${SERVICENOW_BASE_URL} # The username to use for authentication. username: ${SERVICENOW_USERNAME} # The password to use for authentication. password: ${SERVICENOW_PASSWORD} 8.2. Supported ServiceNow custom actions in Red Hat Developer Hub The ServiceNow custom actions enable you to manage records in the Red Hat Developer Hub. The custom actions support the following HTTP methods for API requests: GET : Retrieves specified information from a specified resource endpoint POST : Creates or updates a resource PUT : Modifies a resource PATCH : Updates a resource DELETE : Deletes a resource 8.2.1. ServiceNow custom actions [GET] servicenow:now:table:retrieveRecord Retrieves information of a specified record from a table in the Developer Hub. Table 8.1. Input parameters Name Type Requirement Description tableName string Required Name of the table to retrieve the record from sysId string Required Unique identifier of the record to retrieve sysparmDisplayValue enum("true", "false", "all") Optional Returns field display values such as true , actual values as false , or both. The default value is false . sysparmExcludeReferenceLink boolean Optional Set as true to exclude Table API links for reference fields. The default value is false .
sysparmFields string[] Optional Array of fields to return in the response sysparmView string Optional Renders the response according to the specified UI view. You can override this parameter using sysparm_fields . sysparmQueryNoDomain boolean Optional Set as true to access data across domains if authorized. The default value is false . Table 8.2. Output parameters Name Type Description result Record<PropertyKey, unknown> The response body of the request [GET] servicenow:now:table:retrieveRecords Retrieves information about multiple records from a table in the Developer Hub. Table 8.3. Input parameters Name Type Requirement Description tableName string Required Name of the table to retrieve the records from sysparamQuery string Optional Encoded query string used to filter the results sysparmDisplayValue enum("true", "false", "all") Optional Returns field display values such as true , actual values as false , or both. The default value is false . sysparmExcludeReferenceLink boolean Optional Set as true to exclude Table API links for reference fields. The default value is false . sysparmSuppressPaginationHeader boolean Optional Set as true to suppress pagination header. The default value is false . sysparmFields string[] Optional Array of fields to return in the response sysparmLimit int Optional Maximum number of results returned per page. The default value is 10,000 . sysparmView string Optional Renders the response according to the specified UI view. You can override this parameter using sysparm_fields . sysparmQueryCategory string Optional Name of the query category to use for queries sysparmQueryNoDomain boolean Optional Set as true to access data across domains if authorized. The default value is false . sysparmNoCount boolean Optional Does not execute a select count(*) on the table. The default value is false . Table 8.4. Output parameters Name Type Description result Record<PropertyKey, unknown> The response body of the request [POST] servicenow:now:table:createRecord Creates a record in a table in the Developer Hub. Table 8.5. Input parameters Name Type Requirement Description tableName string Required Name of the table to save the record in requestBody Record<PropertyKey, unknown> Optional Field name and associated value for each parameter to define in the specified record sysparmDisplayValue enum("true", "false", "all") Optional Returns field display values such as true , actual values as false , or both. The default value is false . sysparmExcludeReferenceLink boolean Optional Set as true to exclude Table API links for reference fields. The default value is false . sysparmFields string[] Optional Array of fields to return in the response sysparmInputDisplayValue boolean Optional Set field values using their display value such as true or actual value as false . The default value is false . sysparmSuppressAutoSysField boolean Optional Set as true to suppress auto-generation of system fields. The default value is false . sysparmView string Optional Renders the response according to the specified UI view. You can override this parameter using sysparm_fields . Table 8.6. Output parameters Name Type Description result Record<PropertyKey, unknown> The response body of the request [PUT] servicenow:now:table:modifyRecord Modifies a record in a table in the Developer Hub. Table 8.7. 
Input parameters Name Type Requirement Description tableName string Required Name of the table to modify the record from sysId string Required Unique identifier of the record to modify requestBody Record<PropertyKey, unknown> Optional Field name and associated value for each parameter to define in the specified record sysparmDisplayValue enum("true", "false", "all") Optional Returns field display values such as true , actual values as false , or both. The default value is false . sysparmExcludeReferenceLink boolean Optional Set as true to exclude Table API links for reference fields. The default value is false . sysparmFields string[] Optional Array of fields to return in the response sysparmInputDisplayValue boolean Optional Set field values using their display value such as true or actual value as false . The default value is false . sysparmSuppressAutoSysField boolean Optional Set as true to suppress auto-generation of system fields. The default value is false . sysparmView string Optional Renders the response according to the specified UI view. You can override this parameter using sysparm_fields . sysparmQueryNoDomain boolean Optional Set as true to access data across domains if authorized. The default value is false . Table 8.8. Output parameters Name Type Description result Record<PropertyKey, unknown> The response body of the request [PATCH] servicenow:now:table:updateRecord Updates a record in a table in the Developer Hub. Table 8.9. Input parameters Name Type Requirement Description tableName string Required Name of the table to update the record in sysId string Required Unique identifier of the record to update requestBody Record<PropertyKey, unknown> Optional Field name and associated value for each parameter to define in the specified record sysparmDisplayValue enum("true", "false", "all") Optional Returns field display values such as true , actual values as false , or both. The default value is false . sysparmExcludeReferenceLink boolean Optional Set as true to exclude Table API links for reference fields. The default value is false . sysparmFields string[] Optional Array of fields to return in the response sysparmInputDisplayValue boolean Optional Set field values using their display value such as true or actual value as false . The default value is false . sysparmSuppressAutoSysField boolean Optional Set as true to suppress auto-generation of system fields. The default value is false . sysparmView string Optional Renders the response according to the specified UI view. You can override this parameter using sysparm_fields . sysparmQueryNoDomain boolean Optional Set as true to access data across domains if authorized. The default value is false . Table 8.10. Output parameters Name Type Description result Record<PropertyKey, unknown> The response body of the request [DELETE] servicenow:now:table:deleteRecord Deletes a record from a table in the Developer Hub. Table 8.11. Input parameters Name Type Requirement Description tableName string Required Name of the table to delete the record from sysId string Required Unique identifier of the record to delete sysparmQueryNoDomain boolean Optional Set as true to access data across domains if authorized. The default value is false . | [
"global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: ./dynamic-plugins/dist/janus-idp-backstage-scaffolder-backend-module-servicenow-dynamic disabled: false",
"servicenow: # The base url of the ServiceNow instance. baseUrl: USD{SERVICENOW_BASE_URL} # The username to use for authentication. username: USD{SERVICENOW_USERNAME} # The password to use for authentication. password: USD{SERVICENOW_PASSWORD}"
] | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.2/html/getting_started_with_red_hat_developer_hub/con-servicenow-custom-actions_assembly-customize-rhdh-theme |
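Because the custom actions wrap the ServiceNow Table API, it can be useful to confirm connectivity to your instance before invoking them from a software template. The following curl sketch is illustrative only; it assumes the same environment variables used in the Helm chart configuration above, and the incident table is a placeholder example:
    # Fetch one record from the Table API to verify the base URL and credentials.
    curl -u "${SERVICENOW_USERNAME}:${SERVICENOW_PASSWORD}" \
      -H "Accept: application/json" \
      "${SERVICENOW_BASE_URL}/api/now/table/incident?sysparm_limit=1"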
Chapter 1. Introducing the GNOME 3 Desktop | Chapter 1. Introducing the GNOME 3 Desktop 1.1. What Is GNOME 3? In Red Hat Enterprise Linux 7, GNOME 3 is the default desktop environment. It is the next major version of the GNOME Desktop, which introduces a new user interface and substantial feature improvements over the GNOME 2 Desktop shipped with Red Hat Enterprise Linux 5 and 6. Figure 1.1. The GNOME 3 Desktop (GNOME Classic) GNOME 3 provides a focused working environment that encourages productivity. A powerful search feature lets you access all your work from one place. For example, you can turn off notifications when you need to concentrate on the task at hand. Important To function properly, GNOME requires your system to support 3D acceleration . This includes bare metal systems, as well as hypervisor solutions such as VMware . If GNOME does not start or performs poorly on your VMware virtual machine (VM), see the following solution: Why does the GUI fail to start on my VMware virtual machine? . For more information, see Section 1.2.1, "Hardware Acceleration and Software Rendering" . GNOME 3 is built on a number of powerful components: GNOME Shell GNOME Shell is a modern and intuitive graphical user interface. It provides a high-quality user experience, including visual effects and hardware acceleration support. GNOME Classic GNOME Classic combines old and new; it keeps the familiar look and feel of GNOME 2, but adds the powerful new features and 3-D capabilities of GNOME Shell. GNOME Classic is the default GNOME session and GNOME Shell mode in Red Hat Enterprise Linux 7. GSettings GSettings is a configuration storage system, replacing GConf found in older GNOME versions. For more information about the transition from GConf to GSettings , see Chapter 3, GSettings and dconf . To learn more about configuring your desktop with GSettings , read Chapter 9, Configuring Desktop with GSettings and dconf . GVFS GVFS provides complete virtual file system infrastructure and handles storage in the GNOME Desktop in general. Through GVFS , GNOME 3 integrates well with online document-storage services, calendars, and contact lists, so all your data can be accessed from the same place. Read more about GVFS in Chapter 15, Virtual File Systems and Disk Management . GTK+ GTK+ , a multi-platform toolkit for creating graphical user interfaces, provides a highly usable, feature-rich API. Thanks to GTK+ , GNOME 3 is able to change the look of an application or provide a smooth appearance of graphics. In addition, GTK+ contains a number of features such as object-oriented programming support (GObject), wide support of international character sets and text layouts (Pango), and a set of accessibility interfaces (ATK). | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/desktop_migration_and_administration_guide/introducing-gnome3-desktop
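GSettings values can also be inspected and changed from a terminal with the gsettings command-line tool. The following is a minimal illustrative example, assuming a standard GNOME 3 session; the schema and key shown are common GNOME defaults used here only for demonstration:
    # Read, change, and reset a single GSettings key.
    gsettings get org.gnome.desktop.interface clock-show-date
    gsettings set org.gnome.desktop.interface clock-show-date true
    gsettings reset org.gnome.desktop.interface clock-show-date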
Chapter 6. Uninstalling power monitoring | Chapter 6. Uninstalling power monitoring Important Power monitoring is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can uninstall power monitoring by deleting the Kepler instance and then the Power monitoring Operator in the OpenShift Container Platform web console. 6.1. Deleting Kepler You can delete Kepler by removing the Kepler instance of the Kepler custom resource definition (CRD) from the OpenShift Container Platform web console. Prerequisites You have access to the OpenShift Container Platform web console. You are logged in as a user with the cluster-admin role. Procedure In the Administrator perspective of the web console, go to Operators Installed Operators . Click Power monitoring for Red Hat OpenShift from the Installed Operators list and go to the Kepler tab. Locate the Kepler instance entry in the list. Click the Options menu for this entry and select Delete Kepler . In the Delete Kepler? dialog, click Delete to delete the Kepler instance. 6.2. Uninstalling the Power monitoring Operator If you installed the Power monitoring Operator by using OperatorHub, you can uninstall it from the OpenShift Container Platform web console. Prerequisites You have access to the OpenShift Container Platform web console. You are logged in as a user with the cluster-admin role. Procedure Delete the Kepler instance. Warning Ensure that you have deleted the Kepler instance before uninstalling the Power monitoring Operator. Go to Operators Installed Operators . Locate the Power monitoring for Red Hat OpenShift entry in the list. Click the Options menu for this entry and select Uninstall Operator . In the Uninstall Operator? dialog, click Uninstall to uninstall the Power monitoring Operator. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/power_monitoring/uninstalling-power-monitoring
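If you prefer the CLI, the same cleanup can be sketched with oc. The commands below are illustrative assumptions rather than part of the documented procedure: they assume the Kepler instance uses the default name kepler and that the Operator runs in its default namespace, so verify the actual names in your cluster first:
    # List Kepler instances across all namespaces, then delete the instance.
    oc get kepler -A
    oc delete kepler kepler -n openshift-power-monitoring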
Chapter 1. Introduction | Chapter 1. Introduction An Ansible Playbook is a blueprint for automation tasks, which are actions executed with limited manual effort across an inventory of solutions. Playbooks tell Ansible what to do on which devices. Instead of manually applying the same action to hundreds or thousands of similar technologies across IT environments, executing a playbook automatically completes the same action for the specified type of inventory, such as a set of routers. Playbooks are regularly used to automate IT infrastructure (such as operating systems and Kubernetes platforms), networks, security systems, and code repositories like GitHub. You can use playbooks to program applications, services, server nodes, and other devices, without the manual overhead of creating everything from scratch. Playbooks, and the conditions, variables, and tasks within them, can be saved, shared, or reused indefinitely. This makes it easier for you to codify operational knowledge and ensure that the same actions are performed consistently. 1.1. How do Ansible Playbooks work? Ansible Playbooks are lists of tasks that automatically execute for your specified inventory or groups of hosts. One or more Ansible tasks can be combined to make a play, that is, an ordered grouping of tasks mapped to specific hosts. Tasks are executed in the order in which they are written. A playbook is composed of one or more plays in an ordered list. The terms playbook and play are sports analogies. Each play executes part of the overall goal of the playbook, running one or more tasks. Each task calls an Ansible module. Playbook A list of plays that define the order in which Ansible performs operations, from top to bottom, to achieve an overall goal. Play An ordered list of tasks that maps to managed nodes in an inventory. Task A reference to a single module that defines the operations that Ansible performs. Roles Roles are a way to make code in playbooks reusable by putting the functionality into "libraries" that can then be used in any playbook as needed. Module A unit of code or binary that Ansible runs on managed nodes. Ansible modules are grouped in collections with a Fully Qualified Collection Name (FQCN) for each module. Tasks are executed by modules, each of which performs a specific task in a playbook. A module contains metadata that determines when and where a task is executed, as well as which user executes it. There are thousands of Ansible modules that perform all kinds of IT tasks, such as: Cloud management User management Networking Security Configuration management Communication 1.2. How do I use Ansible Playbooks? Ansible uses the YAML syntax. YAML is a human-readable language that enables you to create playbooks without having to learn a complicated coding language. For more information on YAML, see YAML Syntax , and consider installing an add-on for your text editor (see Other Tools and Programs ) to help you write clean YAML syntax in your playbooks. There are two ways of using Ansible Playbooks: From the command line interface (CLI) Using Red Hat Ansible Automation Platform's push-button deployments. 1.2.1. From the CLI After installing the open source Ansible project or Red Hat Ansible Automation Platform by using sudo dnf install ansible in the Red Hat Enterprise Linux CLI, you can use the ansible-playbook command to run Ansible Playbooks. 1.2.2.
From within the platform The Red Hat Ansible Automation Platform user interface offers push-button Ansible Playbook deployments that can be used as part of larger jobs or job templates. These deployments come with additional safeguards that are particularly helpful to users who are newer to IT automation, or those without as much experience working in the CLI. 1.3. Starting automation with Ansible Get started with Ansible by creating an automation project, building an inventory, and creating a Hello World playbook. Prerequisites The Ansible package must be installed. Procedure Create a project folder on your filesystem. mkdir ansible_quickstart cd ansible_quickstart Using a single directory structure makes it easier to add to source control, and reuse and share automation content. 1.4. Building an inventory Inventories organize managed nodes in centralized files that provide Ansible with system information and network locations. Using an inventory file, Ansible can manage a large number of hosts with a single command. To complete the following steps, you need the IP address or fully qualified domain name (FQDN) of at least one host system. For demonstration purposes, the host could be running locally in a container or a virtual machine. You must also ensure that your public SSH key is added to the authorized_keys file on each host. Use the following procedure to build an inventory. Procedure Create a file named inventory.ini in the ansible_quickstart directory that you created. Add a new [myhosts] group to the inventory.ini file and specify the IP address or fully qualified domain name (FQDN) of each host system. [myhosts] 192.0.2.50 192.0.2.51 192.0.2.52 Verify your inventory, using: ansible-inventory -i inventory.ini --list Ping the myhosts group in your inventory, using: ansible myhosts -m ping -i inventory.ini Pass the -u option with the Ansible command if the username is different on the control node and the managed node(s). 192.0.2.50 | SUCCESS => { "ansible_facts": { "discovered_interpreter_python": "/usr/bin/python3" }, "changed": false, "ping": "pong" } 192.0.2.51 | SUCCESS => { "ansible_facts": { "discovered_interpreter_python": "/usr/bin/python3" }, "changed": false, "ping": "pong" } 192.0.2.52 | SUCCESS => { "ansible_facts": { "discovered_interpreter_python": "/usr/bin/python3" }, "changed": false, "ping": "pong" } You have successfully built an inventory. 1.4.1. Inventories in INI or YAML format You can create inventories in either INI files or in YAML. In most cases, such as the preceding example, INI files are straightforward and easy to read for a small number of managed nodes. Creating an inventory in YAML format becomes a sensible option as the number of managed nodes increases. For example, the following is an equivalent of the inventory.ini that declares unique names for managed nodes and uses the ansible_host field: myhosts: hosts: my_host_01: ansible_host: 192.0.2.50 my_host_02: ansible_host: 192.0.2.51 my_host_03: ansible_host: 192.0.2.52 1.4.2. Tips for building inventories Ensure that group names are meaningful and unique. Group names are also case sensitive. Do not use spaces, hyphens, or preceding numbers (use floor_19, not 19th_floor) in group names. Group hosts in your inventory logically according to their What, Where, and When: What: Group hosts according to the topology, for example: db, web, leaf, spine. Where: Group hosts by geographic location, for example: datacenter, region, floor, building. 
When: Group hosts by stage, for example: development, test, staging, production. 1.4.3. Use metagroups Create a metagroup that organizes multiple groups in your inventory with the following syntax: metagroupname: children: The following inventory illustrates a basic structure for a data center. This example inventory contains a network metagroup that includes all network devices and a datacenter metagroup that includes the network group and all webservers. leafs: hosts: leaf01: ansible_host: 192.0.2.100 leaf02: ansible_host: 192.0.2.110 spines: hosts: spine01: ansible_host: 192.0.2.120 spine02: ansible_host: 192.0.2.130 network: children: leafs: spines: webservers: hosts: webserver01: ansible_host: 192.0.2.140 webserver02: ansible_host: 192.0.2.150 datacenter: children: network: webservers: 1.5. Create variables Variables set values for managed nodes, such as the IP address, FQDN, operating system, and SSH user, so you do not need to pass them when running Ansible commands. Variables can apply to specific hosts. webservers: hosts: webserver01: ansible_host: 192.0.2.140 http_port: 80 webserver02: ansible_host: 192.0.2.150 http_port: 443 Variables can also apply to all hosts in a group. webservers: hosts: webserver01: ansible_host: 192.0.2.140 http_port: 80 webserver02: ansible_host: 192.0.2.150 http_port: 443 vars: ansible_user: my_server_user For more information about inventories and Ansible inventory variables, see About the Installer Inventory file and Inventory file variables . 1.6. Creating your first playbook Use the following procedure to create a playbook that pings your hosts and prints a "Hello world" message. Procedure Create a file named playbook.yaml in your ansible_quickstart directory, with the following content: - name: My first play hosts: myhosts tasks: - name: Ping my hosts ansible.builtin.ping: - name: Print message ansible.builtin.debug: msg: Hello world Run your playbook, using the following command: ansible-playbook -i inventory.ini playbook.yaml Ansible returns the following output: PLAY [My first play] **************************************************************************** TASK [Gathering Facts] ************************************************************************** ok: [192.0.2.50] ok: [192.0.2.51] ok: [192.0.2.52] TASK [Ping my hosts] **************************************************************************** ok: [192.0.2.50] ok: [192.0.2.51] ok: [192.0.2.52] TASK [Print message] **************************************************************************** ok: [192.0.2.50] => { "msg": "Hello world" } ok: [192.0.2.51] => { "msg": "Hello world" } ok: [192.0.2.52] => { "msg": "Hello world" } PLAY RECAP ************************************************************************************** 192.0.2.50: ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 192.0.2.51: ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 192.0.2.52: ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 In this output you can see: The names that you give the play and each task. Always use descriptive names that make it easy to verify and troubleshoot playbooks. The Gather Facts task runs implicitly. By default Ansible gathers information about your inventory that it can use in the playbook. The status of each task. Each task has a status of ok which means it ran successfully. The play recap that summarizes results of all tasks in the playbook per host. 
In this example, there are three tasks, so ok=3 indicates that each task ran successfully. | [
"sudo dnf install ansible",
"mkdir ansible_quickstart cd ansible_quickstart",
"[myhosts] 192.0.2.50 192.0.2.51 192.0.2.52",
"ansible-inventory -i inventory.ini --list",
"ansible myhosts -m ping -i inventory.ini",
"192.0.2.50 | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/bin/python3\" }, \"changed\": false, \"ping\": \"pong\" } 192.0.2.51 | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/bin/python3\" }, \"changed\": false, \"ping\": \"pong\" } 192.0.2.52 | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/bin/python3\" }, \"changed\": false, \"ping\": \"pong\" }",
"myhosts: hosts: my_host_01: ansible_host: 192.0.2.50 my_host_02: ansible_host: 192.0.2.51 my_host_03: ansible_host: 192.0.2.52",
"metagroupname: children:",
"leafs: hosts: leaf01: ansible_host: 192.0.2.100 leaf02: ansible_host: 192.0.2.110 spines: hosts: spine01: ansible_host: 192.0.2.120 spine02: ansible_host: 192.0.2.130 network: children: leafs: spines: webservers: hosts: webserver01: ansible_host: 192.0.2.140 webserver02: ansible_host: 192.0.2.150 datacenter: children: network: webservers:",
"webservers: hosts: webserver01: ansible_host: 192.0.2.140 http_port: 80 webserver02: ansible_host: 192.0.2.150 http_port: 443",
"webservers: hosts: webserver01: ansible_host: 192.0.2.140 http_port: 80 webserver02: ansible_host: 192.0.2.150 http_port: 443 vars: ansible_user: my_server_user",
"- name: My first play hosts: myhosts tasks: - name: Ping my hosts ansible.builtin.ping: - name: Print message ansible.builtin.debug: msg: Hello world",
"ansible-playbook -i inventory.ini playbook.yaml",
"PLAY [My first play] **************************************************************************** TASK [Gathering Facts] ************************************************************************** ok: [192.0.2.50] ok: [192.0.2.51] ok: [192.0.2.52] TASK [Ping my hosts] **************************************************************************** ok: [192.0.2.50] ok: [192.0.2.51] ok: [192.0.2.52] TASK [Print message] **************************************************************************** ok: [192.0.2.50] => { \"msg\": \"Hello world\" } ok: [192.0.2.51] => { \"msg\": \"Hello world\" } ok: [192.0.2.52] => { \"msg\": \"Hello world\" } PLAY RECAP ************************************************************************************** 192.0.2.50: ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 192.0.2.51: ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 192.0.2.52: ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0"
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/getting_started_with_playbooks/assembly-intro-to-playbooks |
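A few follow-up commands can help when iterating on this example. These are not part of the original steps, but they use only the files and the my_server_user variable defined above:
    # Display the inventory as a tree to confirm group membership.
    ansible-inventory -i inventory.ini --graph
    # Re-run the playbook as a specific remote user, with verbose output.
    ansible-playbook -i inventory.ini playbook.yaml -u my_server_user -v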
2.2. Compatible Versions | 2.2. Compatible Versions The product and package versions required to create a supported deployment of Red Hat Gluster Storage (RHGS) nodes managed by the specified version of Red Hat Virtualization (RHV) are documented in the following knowledge base article: https://access.redhat.com/articles/2356261 . | null | https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/configuring_red_hat_virtualization_with_red_hat_gluster_storage/compatible_versions |
1.3. KVM Guest Virtual Machine Compatibility | 1.3. KVM Guest Virtual Machine Compatibility Red Hat Enterprise Linux 7 servers have certain support limits. The following URLs explain the processor and memory amount limitations for Red Hat Enterprise Linux: For host systems: https://access.redhat.com/articles/rhel-limits For the KVM hypervisor: https://access.redhat.com/articles/rhel-kvm-limits The following URL lists guest operating systems certified to run on a Red Hat Enterprise Linux KVM host: https://access.redhat.com/articles/973163 Note For additional information on the KVM hypervisor's restrictions and support limits, see Appendix C, Virtualization Restrictions . | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-kvm_guest_virtual_machine_compatibility-red_hat_enterprise_linux_7_support_limits |
Chapter 11. Known issues in Red Hat Process Automation Manager 7.13.2 | Chapter 11. Known issues in Red Hat Process Automation Manager 7.13.2 This section lists known issues with Red Hat Process Automation Manager 7.13.2. 11.1. Red Hat OpenShift Container Platform PostgreSQL 13 Pod won't start because of an incompatible data directory [ RHPAM-4464 ] Issue: When you start a PostgreSQL pod after you upgrade the operator, the pod fails to start and you receive the following message: Incompatible data directory. This container image provides PostgreSQL '13', but data directory is of version '10'. This image supports automatic data directory upgrade from '12', please carefully consult image documentation about how to use the '$POSTGRESQL_UPGRADE' startup option. Workaround: Check the version of PostgreSQL: If the PostgreSQL version returned is 12.x or earlier, upgrade PostgreSQL: Red Hat Process Automation Manager version PostgreSQL version Upgrade instructions 7.13.1 10 Follow the instructions in Upgrading database (by switching to newer PostgreSQL image version) to upgrade to PostgreSQL 12.x. 7.13.2 10 1. Follow the instructions in Upgrading database (by switching to newer PostgreSQL image version) to upgrade to PostgreSQL 12.x. 2. Follow the instructions in Upgrading database (by switching to newer PostgreSQL image version) to upgrade to PostgreSQL 13.x. 7.13.2 12 Follow the instructions in Upgrading database (by switching to newer PostgreSQL image version) to upgrade to PostgreSQL 13.x. Verify that PostgreSQL has been upgraded to your required version: | [
"postgres -V",
"postgres -V"
] | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/release_notes_for_red_hat_process_automation_manager_7.13/rn-7.13.2-known-issues-ref |
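Before upgrading, it can also help to confirm which version the data directory itself was initialized with. The following check is illustrative; the path shown is the default for Red Hat PostgreSQL container images and might differ in your deployment, and <postgresql_pod> is a placeholder for your pod name:
    # PG_VERSION records the major version that initialized the data directory.
    oc rsh <postgresql_pod> cat /var/lib/pgsql/data/userdata/PG_VERSION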
Kafka configuration tuning | Kafka configuration tuning Red Hat Streams for Apache Kafka 2.7 Use Kafka configuration properties to optimize the streaming of data | [
"num.partitions:1",
"num.partitions=1",
"broker.id = 1 log.dirs = /var/lib/kafka zookeeper.connect = zoo1.my-domain.com:2181,zoo2.my-domain.com:2181,zoo3.my-domain.com:2181 listeners = internal-1://:9092 authorizer.class.name = kafka.security.auth.SimpleAclAuthorizer ssl.truststore.location = /path/to/truststore.jks ssl.truststore.password = 123456 ssl.client.auth = required",
"num.partitions=1 default.replication.factor=3 offsets.topic.replication.factor=3 transaction.state.log.replication.factor=3 transaction.state.log.min.isr=2 log.retention.hours=168 log.segment.bytes=1073741824 log.retention.check.interval.ms=300000 num.network.threads=3 num.io.threads=8 num.recovery.threads.per.data.dir=1 socket.send.buffer.bytes=102400 socket.receive.buffer.bytes=102400 socket.request.max.bytes=104857600 group.initial.rebalance.delay.ms=0 zookeeper.connection.timeout.ms=6000",
"num.partitions=1 auto.create.topics.enable=false default.replication.factor=3 min.insync.replicas=2 replica.fetch.max.bytes=1048576",
"auto.create.topics.enable=false delete.topic.enable=true",
"transaction.state.log.replication.factor=3 transaction.state.log.min.isr=2",
"offsets.topic.num.partitions=50 offsets.topic.replication.factor=3",
"num.network.threads=3 1 queued.max.requests=500 2 num.io.threads=8 3 num.recovery.threads.per.data.dir=4 4",
"replica.socket.receive.buffer.bytes=65536",
"socket.request.max.bytes=104857600",
"socket.send.buffer.bytes=1048576 socket.receive.buffer.bytes=1048576",
"log.segment.bytes=1073741824 log.roll.ms=604800000",
"log.cleanup.policy=delete log.cleaner.enable=true",
"log.retention.ms=1680000",
"log.retention.bytes=1073741824",
"log.segment.delete.delay.ms=60000",
"log.retention.check.interval.ms=300000",
"log.cleaner.delete.retention.ms=86400000",
"log.cleaner.backoff.ms=15000",
"log.cleaner.dedupe.buffer.size=134217728 log.cleaner.io.buffer.load.factor=0.9",
"log.cleaner.threads=8",
"log.cleaner.io.max.bytes.per.second=1.7976931348623157E308",
"log.flush.scheduler.interval.ms=2000",
"log.flush.interval.ms=50000 log.flush.interval.messages=100000",
"replica.lag.time.max.ms=30000",
"# auto.leader.rebalance.enable=true leader.imbalance.check.interval.seconds=300 leader.imbalance.per.broker.percentage=10 #",
"unclean.leader.election.enable=false",
"group.initial.rebalance.delay.ms=3000",
"bootstrap.servers=localhost:9092 1 key.deserializer=org.apache.kafka.common.serialization.StringDeserializer 2 value.deserializer=org.apache.kafka.common.serialization.StringDeserializer 3 client.id=my-client 4 group.id=my-group-id 5",
"group.id=my-group-id 1",
"fetch.max.wait.ms=500 1 fetch.min.bytes=16384 2",
"NUMBER-OF-BROKERS * fetch.max.bytes and NUMBER-OF-PARTITIONS * max.partition.fetch.bytes",
"fetch.max.bytes=52428800 1 max.partition.fetch.bytes=1048576 2",
"enable.auto.commit=false 1",
"enable.auto.commit=false isolation.level=read_committed 1",
"heartbeat.interval.ms=3000 1 session.timeout.ms=45000 2 auto.offset.reset=earliest 3",
"group.instance.id= UNIQUE-ID 1 max.poll.interval.ms=300000 2 max.poll.records=500 3",
"bootstrap.servers=localhost:9092 1 key.serializer=org.apache.kafka.common.serialization.StringSerializer 2 value.serializer=org.apache.kafka.common.serialization.StringSerializer 3 client.id=my-client 4 compression.type=gzip 5",
"acks=all 1 delivery.timeout.ms=120000 2",
"enable.idempotence=true 1 max.in.flight.requests.per.connection=5 2 acks=all 3 retries=2147483647 4",
"enable.idempotence=false 1 max.in.flight.requests.per.connection=1 2 retries=2147483647",
"enable.idempotence=true max.in.flight.requests.per.connection=5 acks=all retries=2147483647 transactional.id= UNIQUE-ID 1 transaction.timeout.ms=900000 2",
"linger.ms=100 1 batch.size=16384 2 buffer.memory=33554432 3",
"partitioner.class=my-custom-partitioner 1",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster annotations: strimzi.io/use-connector-resources: \"true\" spec: replicas: 3 config: offset.flush.timeout.ms: 10000 # resources: requests: cpu: \"1\" memory: 2Gi limits: cpu: \"2\" memory: 2Gi #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector labels: strimzi.io/cluster: my-connect-cluster spec: class: org.apache.kafka.connect.file.FileStreamSourceConnector tasksMax: 2 config: producer.override.batch.size: 327680 producer.override.linger.ms: 100 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-sink-connector labels: strimzi.io/cluster: my-connect-cluster spec: class: org.apache.kafka.connect.file.FileStreamSinkConnector tasksMax: 2 config: consumer.fetch.max.bytes: 52428800 consumer.max.partition.fetch.bytes: 1048576 consumer.max.poll.records: 500 #",
"curl -X POST http://my-connect-cluster-connect-api:8083/connectors -H 'Content-Type: application/json' -d '{ \"name\": \"my-source-connector\", \"config\": { \"connector.class\":\"org.apache.kafka.connect.file.FileStreamSourceConnector\", \"file\": \"/opt/kafka/LICENSE\", \"topic\":\"my-topic\", \"tasksMax\": \"4\", \"type\": \"source\" \"producer.override.batch.size\": 327680 \"producer.override.linger.ms\": 100 } }'",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: version: 3.7.0 replicas: 1 connectCluster: \"my-cluster-target\" clusters: - alias: \"my-cluster-source\" bootstrapServers: my-cluster-source-kafka-bootstrap:9092 - alias: \"my-cluster-target\" config: offset.flush.timeout.ms: 10000 bootstrapServers: my-cluster-target-kafka-bootstrap:9092 mirrors: - sourceCluster: \"my-cluster-source\" targetCluster: \"my-cluster-target\" sourceConnector: tasksMax: 2 config: producer.override.batch.size: 327680 producer.override.linger.ms: 100 consumer.fetch.max.bytes: 52428800 consumer.max.partition.fetch.bytes: 1048576 consumer.max.poll.records: 500 # resources: requests: cpu: \"1\" memory: Gi limits: cpu: \"2\" memory: 4Gi",
"message.max.bytes: 10000000 replica.fetch.max.bytes: 10485760",
"batch.size: 327680 max.request.size: 10000000",
"fetch.max.bytes: 10000000 max.partition.fetch.bytes: 10485760",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # producer: config: batch.size: 327680 max.request.size: 10000000 consumer: config: fetch.max.bytes: 10000000 max.partition.fetch.bytes: 10485760 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector labels: strimzi.io/cluster: my-connect-cluster spec: # config: producer.override.batch.size: 327680 producer.override.max.request.size: 10000000 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-sink-connector labels: strimzi.io/cluster: my-connect-cluster spec: # config: consumer.fetch.max.bytes: 10000000 consumer.max.partition.fetch.bytes: 10485760 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: # mirrors: - sourceCluster: \"my-cluster-source\" targetCluster: \"my-cluster-target\" sourceConnector: tasksMax: 2 config: producer.override.batch.size: 327680 producer.override.max.request.size: 10000000 consumer.fetch.max.bytes: 10000000 consumer.max.partition.fetch.bytes: 10485760 #",
"dnf install <package_name>",
"dnf install <path_to_download_package>"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html-single/kafka_configuration_tuning/index |
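The producer tuning properties quoted above are normally applied through a client properties file. The following is a minimal, hedged sketch of that workflow using the console producer shipped with Apache Kafka; the broker address, topic name, and file path are assumptions, not values from the original document.

#!/usr/bin/env bash
# Sketch: write the idempotent-producer tuning from the snippets above into a
# properties file and exercise it with kafka-console-producer.sh.
# Assumed: Kafka CLI tools on PATH, a broker at localhost:9092, topic my-topic.
set -euo pipefail

cat > /tmp/tuned-producer.properties <<'EOF'
# ordered, de-duplicated delivery per partition
enable.idempotence=true
max.in.flight.requests.per.connection=5
acks=all
retries=2147483647
# throughput/latency trade-off
linger.ms=100
batch.size=16384
compression.type=gzip
EOF

echo "hello from the tuned producer" | kafka-console-producer.sh \
  --bootstrap-server localhost:9092 \
  --topic my-topic \
  --producer.config /tmp/tuned-producer.properties

The same properties file can be reused by any JVM producer via ProducerConfig, which keeps tuning out of application code.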
5.2. XFS File System Performance Analysis with Performance Co-Pilot | 5.2. XFS File System Performance Analysis with Performance Co-Pilot This section describes PCP XFS performance metrics and how to use them. Once started, the Performance Metric Collector Daemon (PMCD) begins collecting performance data from the installed Performance Metric Domain Agents (PMDAs). PMDAs can be individually loaded or unloaded on the system and are controlled by the PMCD on the same host. The XFS PMDA, which is part of the default PCP installation, is used to gather performance metric data of XFS file systems in PCP. For a list of system services and tools that are distributed with PCP, see Table A.1, "System Services Distributed with Performance Co-Pilot in Red Hat Enterprise Linux 7" and Table A.2, "Tools Distributed with Performance Co-Pilot in Red Hat Enterprise Linux 7" . 5.2.1. Installing XFS PMDA to Gather XFS Data with PCP The XFS PMDA ships as part of the pcp package and is enabled by default on installation. To install PCP, enter: To enable and start the PMDA service on the host machine after the pcp and pcp-gui packages are installed, use the following commands: To query the PCP environment to verify that the PMCD process is running on the host and that the XFS PMDA is listed as enabled in the configuration, enter: Installing XFS PMDA Manually If the XFS PMDA is not listed in the PCP configuration readout, install the PMDA agent manually. The PMDA installation script prompts you to specify the PMDA role: collector, monitor, or both. The collector role allows the collection of performance metrics on the current system. The monitor role allows the system to monitor local systems, remote systems, or both. The default option is both collector and monitor, which allows the XFS PMDA to operate correctly in most scenarios. To install XFS PMDA manually, change to the xfs directory: In the xfs directory, enter: 5.2.2. Configuring and Examining XFS Performance Metrics Examining Metrics with pminfo With PCP installed and the XFS PMDA enabled (instructions are available in Section 5.2.1, "Installing XFS PMDA to Gather XFS Data with PCP" ), the easiest way to start looking at the performance metrics available for PCP and XFS is to use the pminfo tool, which displays information about available performance metrics. The command displays a list of all available metrics provided by the XFS PMDA. To display a list of all available metrics provided by the XFS PMDA: Use the following options to display information on selected metrics: -t metric Displays one-line help information describing the selected metric. -T metric Displays more verbose help text describing the selected metric. -f metric Displays the current reading of the performance value that corresponds to the metric. You can use the -t , -T , and -f options with a group of metrics or an individual metric. Most metric data is provided for each mounted XFS file system on the system at time of probing. There are different groups of XFS metrics , which are arranged so that each different group is a new leaf node from the root XFS metric, using a dot ( . ) as a separator. The leaf node semantics (dots) applies to all PCP metrics. For an overview of the types of metrics that are available in each of the groups, see Table A.3, "PCP Metric Groups for XFS" . Example 5.1.
Using the pminfo Tool to Examine XFS Read and Write Metrics To display one-line help information describing the xfs.write_bytes metric: To display more verbose help text describing the xfs.read_bytes metric: To obtain the current reading of the performance value that corresponds to the xfs.read_bytes metric: Configuring Metrics with pmstore With PCP, you can modify the values of certain metrics, especially if the metric acts as a control variable, for example the xfs.control.reset metric. To modify a metric value, use the pmstore tool. Example 5.2. Using pmstore to Reset the xfs.control.reset Metric This example shows how to use pmstore with the xfs.control.reset metric to reset the recorded counter values for the XFS PMDA back to zero. 5.2.3. Examining XFS Metrics Available per File System Starting with Red Hat Enterprise Linux 7.3, PCP enables the XFS PMDA to report certain XFS metrics for each of the mounted XFS file systems. This makes it easier to pinpoint specific mounted file system issues and evaluate performance. For an overview of the types of metrics available per file system in each of the groups, see Table A.4, "PCP Metric Groups for XFS per Device" . Example 5.3. Obtaining per-Device XFS Metrics with pminfo The pminfo command provides per-device XFS metrics that give instance values for each mounted XFS file system. 5.2.4. Logging Performance Data with pmlogger PCP allows you to log performance metric values that can be replayed later and used for retrospective performance analysis. Use the pmlogger tool to create archived logs of selected metrics on the system. With pmlogger, you can specify which metrics are recorded on the system and how often. The default pmlogger configuration file is /var/lib/pcp/config/pmlogger/config.default . The configuration file specifies which metrics are logged by the primary logging instance. To log metric values on the local machine with pmlogger , start a primary logging instance: When pmlogger is enabled and a default configuration file is set, a pmlogger line is included in the PCP configuration: Modifying the pmlogger Configuration File with pmlogconf When the pmlogger service is running, PCP logs a default set of metrics on the host. You can use the pmlogconf utility to check the default configuration, and enable XFS logging groups as needed. Important XFS groups to enable include the XFS information , XFS data , and log I/O traffic groups. Follow pmlogconf prompts to enable or disable groups of related performance metrics, and to control the logging interval for each enabled group. Group selection is made by pressing y (yes) or n (no) in response to the prompt. To create or modify the generic PCP archive logger configuration file with pmlogconf, enter: Modifying the pmlogger Configuration File Manually You can edit the pmlogger configuration file manually and add specific metrics with given intervals to create a tailored logging configuration. Example 5.4. The pmlogger Configuration File with XFS Metrics The following example shows an extract of the pmlogger config.default file with some specific XFS metrics added. Replaying the PCP Log Archives After recording metric data, you can replay the PCP log archives on the system in the following ways: You can export the logs to text files and import them into spreadsheets by using PCP utilities such as pmdumptext , pmrep , or pmlogsummary . You can replay the data in the PCP Charts application and use graphs to visualize the retrospective data alongside live data of the system.
See Section 5.2.5, "Visual Tracing with PCP Charts" . You can use the pmdumptext tool to view the log files. With pmdumptext, you can parse the selected PCP log archive and export the values into an ASCII table. The pmdumptext tool enables you to dump the entire archive log, or only select metric values from the log by specifying individual metrics on the command line. Example 5.5. Displaying a Specific XFS Metric Log Information For example, to show data on the xfs.perdev.log.writes metric collected in an archive at a 5-second interval and display all headers: For more information, see the pmdumptext (1) manual page, which is available from the pcp-doc package. 5.2.5. Visual Tracing with PCP Charts To be able to use the graphical PCP Charts application, install the pcp-gui package: You can use the PCP Charts application to plot performance metric values into graphs. The PCP Charts application allows multiple charts to be displayed simultaneously. The metrics are sourced from one or more live hosts with alternative options to use metric data from PCP log archives as a source of historical data. To launch PCP Charts from the command line, use the pmchart command. After starting PCP Charts, the GUI appears: The PCP Charts application The pmtime server settings are located at the bottom. The start and pause button allows you to control: The interval in which PCP polls the metric data The date and time for the metrics of historical data Go to File → New Chart to select metrics from both the local machine and remote machines by specifying their host name or address. Then, select performance metrics from the remote hosts. Advanced configuration options include the ability to manually set the axis values for the chart, and to manually choose the color of the plots. There are multiple options to take images or record the views created in PCP Charts: Click File → Export to save an image of the current view. Click Record → Start to start a recording. Click Record → Stop to stop the recording. After stopping the recording, the recorded metrics are archived to be viewed later. You can customize the PCP Charts interface to display the data from performance metrics in multiple ways, including: line plot bar graphs utilization graphs In PCP Charts, the main configuration file, known as the view , allows the metadata associated with one or more charts to be saved. This metadata describes all chart aspects, including the metrics used and the chart columns. You can create a custom view configuration, save it by clicking File → Save View , and load the view configuration later. For more information about view configuration files and their syntax, see the pmchart (1) manual page. Example 5.6. Stacking Chart Graph in PCP Charts View Configuration The example PCP Charts view configuration file describes a stacking chart graph showing the total number of bytes read and written to the given XFS file system loop1 . | [
"yum install pcp",
"systemctl enable pmcd.service",
"systemctl start pmcd.service",
"pcp Performance Co-Pilot configuration on workstation: platform: Linux workstation 3.10.0-123.20.1.el7.x86_64 #1 SMP Thu Jan 29 18:05:33 UTC 2015 x86_64 hardware: 2 cpus, 2 disks, 1 node, 2048MB RAM timezone: BST-1 services pmcd pmcd: Version 3.10.6-1, 7 agents pmda: root pmcd proc xfs linux mmv jbd2",
"cd /var/lib/pcp/pmdas/xfs/",
"xfs]# ./Install You will need to choose an appropriate configuration for install of the \"xfs\" Performance Metrics Domain Agent (PMDA). collector collect performance statistics on this system monitor allow this system to monitor local and/or remote systems both collector and monitor configuration for this system Please enter c(ollector) or m(onitor) or (both) [b] Updating the Performance Metrics Name Space (PMNS) Terminate PMDA if already installed Updating the PMCD control file, and notifying PMCD Waiting for pmcd to terminate Starting pmcd Check xfs metrics have appeared ... 149 metrics and 149 values",
"pminfo xfs",
"pminfo -t xfs.write_bytes xfs.write_bytes [number of bytes written in XFS file system write operations]",
"pminfo -T xfs.read_bytes xfs.read_bytes Help: This is the number of bytes read via read(2) system calls to files in XFS file systems. It can be used in conjunction with the read_calls count to calculate the average size of the read operations to file in XFS file systems.",
"pminfo -f xfs.read_bytes xfs.read_bytes value 4891346238",
"pminfo -f xfs.write xfs.write value 325262",
"pmstore xfs.control.reset 1 xfs.control.reset old value=0 new value=1",
"pminfo -f xfs.write xfs.write value 0",
"pminfo -f -t xfs.perdev.read xfs.perdev.write xfs.perdev.read [number of XFS file system read operations] inst [0 or \"loop1\"] value 0 inst [0 or \"loop2\"] value 0 xfs.perdev.write [number of XFS file system write operations] inst [0 or \"loop1\"] value 86 inst [0 or \"loop2\"] value 0",
"systemctl start pmlogger.service",
"systemctl enable pmlogger.service",
"pcp Performance Co-Pilot configuration on workstation: platform: Linux workstation 3.10.0-123.20.1.el7.x86_64 #1 SMP Thu Jan [...] pmlogger: primary logger:/var/log/pcp/pmlogger/workstation/20160820.10.15",
"pmlogconf -r /var/lib/pcp/config/pmlogger/config.default",
"It is safe to make additions from here on # log mandatory on every 5 seconds { xfs.write xfs.write_bytes xfs.read xfs.read_bytes } log mandatory on every 10 seconds { xfs.allocs xfs.block_map xfs.transactions xfs.log } [access] disallow * : all; allow localhost : enquire;",
"pmdumptext -t 5seconds -H -a 20170605 xfs.perdev.log.writes Time local::xfs.perdev.log.writes[\"/dev/mapper/fedora-home\"] local::xfs.perdev.log.writes[\"/dev/mapper/fedora-root\"] ? 0.000 0.000 none count / second count / second Mon Jun 5 12:28:45 ? ? Mon Jun 5 12:28:50 0.000 0.000 Mon Jun 5 12:28:55 0.200 0.200 Mon Jun 5 12:29:00 6.800 1.000",
"yum install pcp-gui",
"#kmchart version 1 chart title \"Filesystem Throughput /loop1\" style stacking antialiasing off plot legend \"Read rate\" metric xfs.read_bytes instance \"loop1\" plot legend \"Write rate\" metric xfs.write_bytes instance \"loop1\""
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/performance_tuning_guide/sec-xfs-file-system-performance-analysis-with-performance-co-pilot |
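As a quick companion to the pminfo , pmstore , and pmlogger steps above, the following shell sketch resets the XFS counters and then watches write activity for a short window. It assumes a running pmcd with the XFS PMDA enabled; the sample count and interval are arbitrary choices, not values from the original document.

#!/usr/bin/env bash
# Sketch: inspect, reset, and sample XFS metrics with standard PCP tools.
set -euo pipefail

pminfo -f xfs.write xfs.write_bytes   # current raw counter values
pmstore xfs.control.reset 1           # zero the XFS PMDA counters

# Take 10 samples of the write-bytes metric, 5 seconds apart; pmval reports
# counter metrics as rates by default.
pmval -t 5sec -s 10 xfs.write_bytes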
Chapter 1. Overview | Chapter 1. Overview AMQ Broker configuration files define important settings for a broker instance. By editing a broker's configuration files, you can control how the broker operates in your environment. 1.1. AMQ Broker configuration files and locations All of a broker's configuration files are stored in <broker_instance_dir> /etc . You can configure a broker by editing the settings in these configuration files. Each broker instance uses the following configuration files: broker.xml The main configuration file. You use this file to configure most aspects of the broker, such as network connections, security settings, message addresses, and so on. bootstrap.xml The file that AMQ Broker uses to start a broker instance. You use it to change the location of broker.xml , configure the web server, and set some security settings. logging.properties You use this file to set logging properties for the broker instance. artemis.profile You use this file to set environment variables used while the broker instance is running. login.config , artemis-users.properties , artemis-roles.properties Security-related files. You use these files to set up authentication for user access to the broker instance. 1.2. Understanding the default broker configuration You configure most of a broker's functionality by editing the broker.xml configuration file. This file contains default settings, which are sufficient to start and operate a broker. However, you will likely need to change some of the default settings and add new settings to configure the broker for your environment. By default, broker.xml contains default settings for the following functionality: Message persistence Acceptors Security Message addresses Default message persistence settings By default, AMQ Broker persistence uses an append-only file journal that consists of a set of files on disk. The journal saves messages, transactions, and other information. <configuration ...> <core ...> ... <persistence-enabled>true</persistence-enabled> <!-- this could be ASYNCIO, MAPPED, NIO ASYNCIO: Linux Libaio MAPPED: mmap files NIO: Plain Java Files --> <journal-type>ASYNCIO</journal-type> <paging-directory>data/paging</paging-directory> <bindings-directory>data/bindings</bindings-directory> <journal-directory>data/journal</journal-directory> <large-messages-directory>data/large-messages</large-messages-directory> <journal-datasync>true</journal-datasync> <journal-min-files>2</journal-min-files> <journal-pool-files>10</journal-pool-files> <journal-file-size>10M</journal-file-size> <!-- This value was determined through a calculation. Your system could perform 8.62 writes per millisecond on the current journal configuration. That translates as a sync write every 115999 nanoseconds. Note: If you specify 0 the system will perform writes directly to the disk. We recommend this to be 0 if you are using journalType=MAPPED and journal-datasync=false. --> <journal-buffer-timeout>115999</journal-buffer-timeout> <!-- When using ASYNCIO, this will determine the writing queue depth for libaio. --> <journal-max-io>4096</journal-max-io> <!-- how often we are looking for how many bytes are being used on the disk in ms --> <disk-scan-period>5000</disk-scan-period> <!-- once the disk hits this limit the system will block, or close the connection in certain protocols that won't support flow control. 
--> <max-disk-usage>90</max-disk-usage> <!-- should the broker detect dead locks and other issues --> <critical-analyzer>true</critical-analyzer> <critical-analyzer-timeout>120000</critical-analyzer-timeout> <critical-analyzer-check-period>60000</critical-analyzer-check-period> <critical-analyzer-policy>HALT</critical-analyzer-policy> ... </core> </configuration> Default acceptor settings Brokers listen for incoming client connections by using an acceptor configuration element to define the port and protocols a client can use to make connections. By default, AMQ Broker includes an acceptor for each supported messaging protocol, as shown below. <configuration ...> <core ...> ... <acceptors> <!-- Acceptor for every supported protocol --> <acceptor name="artemis">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor> <!-- AMQP Acceptor. Listens on default AMQP port for AMQP traffic --> <acceptor name="amqp">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor> <!-- STOMP Acceptor --> <acceptor name="stomp">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor> <!-- HornetQ Compatibility Acceptor. Enables HornetQ Core and STOMP for legacy HornetQ clients. --> <acceptor name="hornetq">tcp://0.0.0.0:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true</acceptor> <!-- MQTT Acceptor --> <acceptor name="mqtt">tcp://0.0.0.0:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor> </acceptors> ... </core> </configuration> Default security settings AMQ Broker contains a flexible role-based security model for applying security to queues, based on their addresses. The default configuration uses wildcards to apply the amq role to all addresses (represented by the number sign, # ). <configuration ...> <core ...> ... <security-settings> <security-setting match="#"> <permission type="createNonDurableQueue" roles="amq"/> <permission type="deleteNonDurableQueue" roles="amq"/> <permission type="createDurableQueue" roles="amq"/> <permission type="deleteDurableQueue" roles="amq"/> <permission type="createAddress" roles="amq"/> <permission type="deleteAddress" roles="amq"/> <permission type="consume" roles="amq"/> <permission type="browse" roles="amq"/> <permission type="send" roles="amq"/> <!-- we need this otherwise ./artemis data imp wouldn't work --> <permission type="manage" roles="amq"/> </security-setting> </security-settings> ... </core> </configuration> Default message address settings AMQ Broker includes a default address that establishes a default set of configuration settings to be applied to any created queue or topic. Additionally, the default configuration defines two queues: DLQ (Dead Letter Queue) handles messages that arrive with no known destination, and Expiry Queue holds messages that have lived past their expiration and therefore should not be routed to their original destination. <configuration ...> <core ...> ... <address-settings> ... 
<!--default for catch all--> <address-setting match="#"> <dead-letter-address>DLQ</dead-letter-address> <expiry-address>ExpiryQueue</expiry-address> <redelivery-delay>0</redelivery-delay> <!-- with -1 only the global-max-size is in use for limiting --> <max-size-bytes>-1</max-size-bytes> <message-counter-history-day-limit>10</message-counter-history-day-limit> <address-full-policy>PAGE</address-full-policy> <auto-create-queues>true</auto-create-queues> <auto-create-addresses>true</auto-create-addresses> <auto-create-jms-queues>true</auto-create-jms-queues> <auto-create-jms-topics>true</auto-create-jms-topics> </address-setting> </address-settings> <addresses> <address name="DLQ"> <anycast> <queue name="DLQ" /> </anycast> </address> <address name="ExpiryQueue"> <anycast> <queue name="ExpiryQueue" /> </anycast> </address> </addresses> </core> </configuration> 1.3. Reloading configuration updates By default, a broker checks for changes in the configuration files every 5000 milliseconds. If the broker detects a change in the "last modified" time stamp of the configuration file, the broker determines that a configuration change took place. In this case, the broker reloads the configuration file to activate the changes. When the broker reloads the broker.xml configuration file, it reloads the following modules: Address settings and queues When the configuration file is reloaded, the address settings determine how to handle addresses and queues that have been deleted from the configuration file. You can set this with the config-delete-addresses and config-delete-queues properties. For more information, see Appendix B, Address Setting Configuration Elements . Security settings SSL/TLS keystores and truststores on an existing acceptor can be reloaded to establish new certificates without any impact to existing clients. Connected clients, even those with older or differing certificates, can continue to send and receive messages. Diverts A configuration reload deploys any new divert that you have added. However, removal of a divert from the configuration or a change to a sub-element within a <divert> element does not take effect until you restart the broker. The following procedure shows how to change the interval at which the broker checks for changes to the broker.xml configuration file. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Within the <core> element, add the <configuration-file-refresh-period> element and set the refresh period (in milliseconds). This example sets the configuration refresh period to be 60000 milliseconds: It is also possible to force the reloading of the configuration file using the Management API or the console if for some reason access to the configuration file is not possible. Configuration files can be reloaded using the management operation reloadConfigurationFile() on the ActiveMQServerControl (with the ObjectName org.apache.activemq.artemis:broker=" BROKER_NAME " or the resource name server ). Additional resources To learn how to use the management API, see Using the Management API in Managing AMQ Broker 1.4. Modularizing the broker configuration file If you have multiple brokers that share common configuration settings, you can define the common configuration in separate files, and then include these files in each broker's broker.xml configuration file.
The most common configuration settings that you might share between brokers include: Addresses Address settings Security settings Procedure Create a separate XML file for each broker.xml section that you want to share. Each XML file can only include a single section from broker.xml (for example, either addresses or address settings, but not both). The top-level element must also define the element namespace ( xmlns="urn:activemq:core" ). This example shows a security settings configuration defined in my-security-settings.xml : my-security-settings.xml Open the <broker_instance_dir> /etc/broker.xml configuration file for each broker that should use the common configuration settings. For each broker.xml file that you opened, do the following: In the <configuration> element at the beginning of broker.xml , verify that the following line appears: Add an XML inclusion for each XML file that contains shared configuration settings. This example includes the my-security-settings.xml file. broker.xml If desired, validate broker.xml to verify that the XML is valid against the schema. You can use any XML validator program. This example uses xmllint to validate broker.xml against the artemis-server.xsd schema. Additional resources For more information about XML Inclusions (XIncludes), see https://www.w3.org/TR/xinclude/ . 1.4.1. Reloading modular configuration files When the broker periodically checks for configuration changes (according to the frequency specified by configuration-file-refresh-period ), it does not automatically detect changes made to configuration files that are included in the broker.xml configuration file via xi:include . For example, if broker.xml includes my-address-settings.xml and you make configuration changes to my-address-settings.xml , the broker does not automatically detect the changes in my-address-settings.xml and reload the configuration. To force a reload of the broker.xml configuration file and any modified configuration files included within it, you must ensure that the "last modified" time stamp of the broker.xml configuration file has changed. You can use a standard Linux touch command to update the last-modified time stamp of broker.xml without making any other changes. For example: Alternatively, you can use the management API to force a reload of the broker. Configuration files can be reloaded using the management operation reloadConfigurationFile() on the ActiveMQServerControl (with the ObjectName org.apache.activemq.artemis:broker=" BROKER_NAME " or the resource name server ). Additional resources To learn how to use the management API, see Using the Management API in Managing AMQ Broker 1.5. Document conventions This document uses the following conventions for the sudo command, file paths, and replaceable values. The sudo command In this document, sudo is used for any command that requires root privileges. You should always exercise caution when using sudo , as any changes can affect the entire system. For more information about using sudo , see Managing sudo access .
About the use of file paths in this document In this document, all file paths are valid for Linux, UNIX, and similar operating systems (for example, /home/... ). If you are using Microsoft Windows, you should use the equivalent Microsoft Windows paths (for example, C:\Users\... ). Replaceable values This document sometimes uses replaceable values that you must replace with values specific to your environment. Replaceable values are lowercase, enclosed by angle brackets ( < > ), and are styled using italics and monospace font. Multiple words are separated by underscores ( _ ). For example, in the following command, replace <install_dir> with your own directory name. $ <install_dir> /bin/artemis create mybroker | [
"<configuration ...> <core ...> <persistence-enabled>true</persistence-enabled> <!-- this could be ASYNCIO, MAPPED, NIO ASYNCIO: Linux Libaio MAPPED: mmap files NIO: Plain Java Files --> <journal-type>ASYNCIO</journal-type> <paging-directory>data/paging</paging-directory> <bindings-directory>data/bindings</bindings-directory> <journal-directory>data/journal</journal-directory> <large-messages-directory>data/large-messages</large-messages-directory> <journal-datasync>true</journal-datasync> <journal-min-files>2</journal-min-files> <journal-pool-files>10</journal-pool-files> <journal-file-size>10M</journal-file-size> <!-- This value was determined through a calculation. Your system could perform 8.62 writes per millisecond on the current journal configuration. That translates as a sync write every 115999 nanoseconds. Note: If you specify 0 the system will perform writes directly to the disk. We recommend this to be 0 if you are using journalType=MAPPED and journal-datasync=false. --> <journal-buffer-timeout>115999</journal-buffer-timeout> <!-- When using ASYNCIO, this will determine the writing queue depth for libaio. --> <journal-max-io>4096</journal-max-io> <!-- how often we are looking for how many bytes are being used on the disk in ms --> <disk-scan-period>5000</disk-scan-period> <!-- once the disk hits this limit the system will block, or close the connection in certain protocols that won't support flow control. --> <max-disk-usage>90</max-disk-usage> <!-- should the broker detect dead locks and other issues --> <critical-analyzer>true</critical-analyzer> <critical-analyzer-timeout>120000</critical-analyzer-timeout> <critical-analyzer-check-period>60000</critical-analyzer-check-period> <critical-analyzer-policy>HALT</critical-analyzer-policy> </core> </configuration>",
"<configuration ...> <core ...> <acceptors> <!-- Acceptor for every supported protocol --> <acceptor name=\"artemis\">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor> <!-- AMQP Acceptor. Listens on default AMQP port for AMQP traffic --> <acceptor name=\"amqp\">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor> <!-- STOMP Acceptor --> <acceptor name=\"stomp\">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor> <!-- HornetQ Compatibility Acceptor. Enables HornetQ Core and STOMP for legacy HornetQ clients. --> <acceptor name=\"hornetq\">tcp://0.0.0.0:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true</acceptor> <!-- MQTT Acceptor --> <acceptor name=\"mqtt\">tcp://0.0.0.0:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor> </acceptors> </core> </configuration>",
"<configuration ...> <core ...> <security-settings> <security-setting match=\"#\"> <permission type=\"createNonDurableQueue\" roles=\"amq\"/> <permission type=\"deleteNonDurableQueue\" roles=\"amq\"/> <permission type=\"createDurableQueue\" roles=\"amq\"/> <permission type=\"deleteDurableQueue\" roles=\"amq\"/> <permission type=\"createAddress\" roles=\"amq\"/> <permission type=\"deleteAddress\" roles=\"amq\"/> <permission type=\"consume\" roles=\"amq\"/> <permission type=\"browse\" roles=\"amq\"/> <permission type=\"send\" roles=\"amq\"/> <!-- we need this otherwise ./artemis data imp wouldn't work --> <permission type=\"manage\" roles=\"amq\"/> </security-setting> </security-settings> </core> </configuration>",
"<configuration ...> <core ...> <address-settings> <!--default for catch all--> <address-setting match=\"#\"> <dead-letter-address>DLQ</dead-letter-address> <expiry-address>ExpiryQueue</expiry-address> <redelivery-delay>0</redelivery-delay> <!-- with -1 only the global-max-size is in use for limiting --> <max-size-bytes>-1</max-size-bytes> <message-counter-history-day-limit>10</message-counter-history-day-limit> <address-full-policy>PAGE</address-full-policy> <auto-create-queues>true</auto-create-queues> <auto-create-addresses>true</auto-create-addresses> <auto-create-jms-queues>true</auto-create-jms-queues> <auto-create-jms-topics>true</auto-create-jms-topics> </address-setting> </address-settings> <addresses> <address name=\"DLQ\"> <anycast> <queue name=\"DLQ\" /> </anycast> </address> <address name=\"ExpiryQueue\"> <anycast> <queue name=\"ExpiryQueue\" /> </anycast> </address> </addresses> </core> </configuration>",
"<configuration> <core> <configuration-file-refresh-period>60000</configuration-file-refresh-period> </core> </configuration>",
"<security-settings xmlns=\"urn:activemq:core\"> <security-setting match=\"a1\"> <permission type=\"createNonDurableQueue\" roles=\"a1.1\"/> </security-setting> <security-setting match=\"a2\"> <permission type=\"deleteNonDurableQueue\" roles=\"a2.1\"/> </security-setting> </security-settings>",
"xmlns:xi=\"http://www.w3.org/2001/XInclude\"",
"<configuration ...> <core ...> <xi:include href=\"/opt/my-broker-config/my-security-settings.xml\"/> </core> </configuration>",
"xmllint --noout --xinclude --schema /opt/redhat/amq-broker/amq-broker-7.2.0/schema/artemis-server.xsd /var/opt/amq-broker/mybroker/etc/broker.xml /var/opt/amq-broker/mybroker/etc/broker.xml validates",
"touch -m <broker_instance_dir> /etc/broker.xml",
"<install_dir> /bin/artemis create mybroker"
] | https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.10/html/configuring_amq_broker/overview-configuring |
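To make the modular-configuration and reload behavior above concrete, here is a hedged shell sketch: it validates a broker.xml that pulls in an included fragment, then bumps the file's modification time so the periodic refresh notices the change. The instance and schema paths are placeholders for your installation, not values guaranteed by the original document.

#!/usr/bin/env bash
# Sketch: validate an XInclude-based broker.xml, then force a config reload.
set -euo pipefail

BROKER_INSTANCE=/var/opt/amq-broker/mybroker             # placeholder path
SCHEMA=/opt/redhat/amq-broker/schema/artemis-server.xsd  # placeholder path

# Validate broker.xml with its XIncludes expanded.
xmllint --noout --xinclude --schema "$SCHEMA" "$BROKER_INSTANCE/etc/broker.xml"

# Included fragments are not watched directly; touching broker.xml updates
# its "last modified" time stamp so the broker reloads everything.
touch -m "$BROKER_INSTANCE/etc/broker.xml"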
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/jboss_eap_xp_upgrade_and_migration_guide/making-open-source-more-inclusive |
21.13. virt-diff: Listing the Differences between Virtual Machine Files | 21.13. virt-diff: Listing the Differences between Virtual Machine Files The virt-diff command-line tool can be used to list the differences between files in two virtual machine disk images. The output shows the changes to a virtual machine's disk images after it has been running. The command can also be used to show the difference between overlays. Note You can use virt-diff safely on live guest virtual machines, because it only needs read-only access. This tool finds the differences in file names, file sizes, checksums, extended attributes, file content and more between the running virtual machine and the selected image. Note The virt-diff command does not check the boot loader, unused space between partitions or within file systems, or "hidden" sectors. Therefore, it is recommended that you do not use this as a security or forensics tool. To install virt-diff , run one of the following commands: # yum install /usr/bin/virt-diff or # yum install libguestfs-tools-c To specify two guests, you have to use the -a or -d option for the first guest, and the -A or -D option for the second guest. For example: $ virt-diff -a old.img -A new.img You can also use names known to libvirt . For example: $ virt-diff -d oldguest -D newguest The following command options are available to use with virt-diff : Table 21.3. virt-diff options Command Description Example --help Displays a brief help entry about a particular command or about the virt-diff utility. For additional help, see the virt-diff man page. virt-diff --help -a [ file ] or --add [ file ] Adds the specified file , which should be a disk image from the first virtual machine. If the virtual machine has multiple block devices, you must supply all of them with separate -a options. The format of the disk image is auto-detected. To override this and force a particular format, use the --format option. virt-diff -a /dev/vms/original.img -A /dev/vms/new.img -a [ URI ] or --add [ URI ] Adds a remote disk. The URI format is compatible with guestfish. For more information, see Section 21.4.2, "Adding Files with guestfish" . virt-diff -a rbd://example.com[:port]/pool/newdisk -A rbd://example.com[:port]/pool/olddisk --all Same as --extra-stats --times --uids --xattrs . virt-diff --all --atime By default, virt-diff ignores changes in file access times, since those are unlikely to be interesting. Use the --atime option to show access time differences. virt-diff --atime -A [ file ] Adds the specified file or URI , which should be a disk image from the second virtual machine. virt-diff --add /dev/vms/original.img -A /dev/vms/new.img -c [ URI ] or --connect [ URI ] Connects to the given URI, if using libvirt . If omitted, then it connects to the default libvirt hypervisor. If you specify guest block devices directly ( virt-diff -a ), then libvirt is not used at all. virt-diff -c qemu:///system --csv Provides the results in a comma-separated values (CSV) format. This format can be imported easily into databases and spreadsheets. For further information, see Note . virt-diff --csv -d [ guest ] or --domain [ guest ] Adds all the disks from the specified guest virtual machine as the first guest virtual machine. Domain UUIDs can be used instead of domain names. $ virt-diff --domain 90df2f3f-8857-5ba9-2714-7d95907b1c9e -D [ guest ] Adds all the disks from the specified guest virtual machine as the second guest virtual machine. Domain UUIDs can be used instead of domain names.
virt-diff -D 90df2f3f-8857-5ba9-2714-7d95907b1cd4 --extra-stats Displays extra statistics. virt-diff --extra-stats --format or --format=[ raw | qcow2 ] The default for the -a / -A option is to auto-detect the format of the disk image. Using this forces the disk format for -a / -A options that follow on the command line. Using --format auto switches back to auto-detection for subsequent -a options (see the -a command above). virt-diff --format raw -a new.img -A old.img forces raw format (no auto-detection) for new.img and old.img, but virt-diff --format raw -a new.img --format auto -a old.img forces raw format (no auto-detection) for new.img and reverts to auto-detection for old.img . If you have untrusted raw-format guest disk images, you should use this option to specify the disk format. This avoids a possible security problem with malicious guests. -h or --human-readable Displays file sizes in human-readable format. virt-diff -h --time-days Displays time fields for changed files as days before now (negative if in the future). Note that 0 in the output means between 86,399 seconds (23 hours, 59 minutes, and 59 seconds) before now and 86,399 seconds in the future. virt-diff --time-days -v or --verbose Enables verbose messages for debugging purposes. virt-diff --verbose -V or --version Displays the virt-diff version number and exits. virt-diff -V -x Enables tracing of libguestfs API calls. virt-diff -x Note The comma-separated values (CSV) format can be difficult to parse. Therefore, it is recommended that for shell scripts, you should use csvtool and for other languages, use a CSV processing library (such as Text::CSV for Perl or Python's built-in csv library). In addition, most spreadsheets and databases can import CSV directly. For more information, including additional options, see libguestfs.org . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-Guest_virtual_machine_disk_access_with_offline_tools-Using_virt_diff
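Building on the --csv note above, here is a small hedged sketch of a scripted diff between two images; the image paths are placeholders, and csvtool is assumed to be installed (any CSV-aware tool works equally well).

#!/usr/bin/env bash
# Sketch: machine-readable diff of two guest disk images, pretty-printed.
set -euo pipefail

virt-diff --csv \
  -a /var/lib/libvirt/images/old.img \
  -A /var/lib/libvirt/images/new.img > /tmp/guest-diff.csv

# csvtool handles quoting and embedded commas correctly; "readable" aligns
# the columns for human inspection.
csvtool readable /tmp/guest-diff.csv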
Chapter 9. Granting sudo access to an IdM user on an IdM client | Chapter 9. Granting sudo access to an IdM user on an IdM client Learn more about granting sudo access to users in Identity Management. 9.1. Sudo access on an IdM client System administrators can grant sudo access to allow non-root users to execute administrative commands that are normally reserved for the root user. Consequently, when users need to perform an administrative command normally reserved for the root user, they precede that command with sudo . After entering their password, the command is executed as if they were the root user. To execute a sudo command as another user or group, such as a database service account, you can configure a RunAs alias for a sudo rule. If a Red Hat Enterprise Linux (RHEL) 8 host is enrolled as an Identity Management (IdM) client, you can specify sudo rules defining which IdM users can perform which commands on the host in the following ways: Locally in the /etc/sudoers file Centrally in IdM You can create a central sudo rule for an IdM client using the command line (CLI) and the IdM Web UI. You can also configure password-less authentication for sudo using the Generic Security Service Application Programming Interface (GSSAPI), the native way for UNIX-based operating systems to access and authenticate Kerberos services. You can use the pam_sss_gss.so Pluggable Authentication Module (PAM) to invoke GSSAPI authentication via the SSSD service, allowing users to authenticate to the sudo command with a valid Kerberos ticket. Additional resources Managing sudo access 9.2. Granting sudo access to an IdM user on an IdM client using the CLI In Identity Management (IdM), you can grant sudo access for a specific command to an IdM user account on a specific IdM host. First, add a sudo command and then create a sudo rule for one or more commands. For example, complete this procedure to create the idm_user_reboot sudo rule to grant the idm_user account the permission to run the /usr/sbin/reboot command on the idmclient machine. Prerequisites You are logged in as IdM administrator. You have created a user account for idm_user in IdM and unlocked the account by creating a password for the user. For details on adding a new IdM user using the CLI, see Adding users using the command line . No local idm_user account is present on the idmclient host. The idm_user user is not listed in the local /etc/passwd file. Procedure Retrieve a Kerberos ticket as the IdM admin . Add the /usr/sbin/reboot command to the IdM database of sudo commands: Create a sudo rule named idm_user_reboot : Add the /usr/sbin/reboot command to the idm_user_reboot rule: Apply the idm_user_reboot rule to the IdM idmclient host: Add the idm_user account to the idm_user_reboot rule: Optional: Define the validity of the idm_user_reboot rule: To define the time at which a sudo rule starts to be valid, use the ipa sudorule-mod sudo_rule_name command with the --setattr sudonotbefore= DATE option. The DATE value must follow the yyyymmddHHMMSSZ format, with seconds specified explicitly. For example, to set the start of the validity of the idm_user_reboot rule to 31 December 2025 12:34:00, enter: To define the time at which a sudo rule stops being valid, use the --setattr sudonotafter=DATE option. For example, to set the end of the idm_user_reboot rule validity to 31 December 2026 12:34:00, enter: Note Propagating the changes from the server to the client can take a few minutes. Verification Log in to the idmclient host as the idm_user account. 
Display which sudo rules the idm_user account is allowed to perform. Reboot the machine using sudo . Enter the password for idm_user when prompted: 9.3. Granting sudo access to an AD user on an IdM client using the CLI Identity Management (IdM) system administrators can use IdM user groups to set access permissions, host-based access control, sudo rules, and other controls on IdM users. IdM user groups grant and restrict access to IdM domain resources. You can add both Active Directory (AD) users and AD groups to IdM user groups. To do that: Add the AD users or groups to a non-POSIX external IdM group. Add the non-POSIX external IdM group to an IdM POSIX group. You can then manage the privileges of the AD users by managing the privileges of the POSIX group. For example, you can grant sudo access for a specific command to an IdM POSIX user group on a specific IdM host. Note It is also possible to add AD user groups as members to IdM external groups. This might make it easier to define policies for Windows users, by keeping the user and group management within the single AD realm. Important Do not use ID overrides of AD users for SUDO rules in IdM. ID overrides of AD users represent only POSIX attributes of AD users, not AD users themselves. You can add ID overrides as group members. However, you can only use this functionality to manage IdM resources in the IdM API. The possibility to add ID overrides as group members is not extended to POSIX environments and you therefore cannot use it for membership in sudo or host-based access control (HBAC) rules. Follow this procedure to create the ad_users_reboot sudo rule to grant the administrator@ad-domain.com AD user the permission to run the /usr/sbin/reboot command on the idmclient IdM host, which is normally reserved for the root user. administrator@ad-domain.com is a member of the ad_users_external non-POSIX group, which is, in turn, a member of the ad_users POSIX group. Prerequisites You have obtained the IdM admin Kerberos ticket-granting ticket (TGT). A cross-forest trust exists between the IdM domain and the ad-domain.com AD domain. No local administrator account is present on the idmclient host: the administrator user is not listed in the local /etc/passwd file. Procedure Create the ad_users group that contains the ad_users_external group with the administrator@ad-domain.com member: Optional: Create or select a corresponding group in the AD domain to use to manage AD users in the IdM realm. You can use multiple AD groups and add them to different groups on the IdM side. Create the ad_users_external group and indicate that it contains members from outside the IdM domain by adding the --external option: Note Ensure that the external group that you specify here is an AD security group with a global or universal group scope as defined in the Active Directory security groups document. For example, the Domain users or Domain admins AD security groups cannot be used because their group scope is domain local . Create the ad_users group: Add the administrator@ad-domain.com AD user to ad_users_external as an external member: The AD user must be identified by a fully-qualified name, such as DOMAIN\user_name or user_name@DOMAIN . The AD identity is then mapped to the AD SID for the user. The same applies to adding AD groups.
Add ad_users_external to ad_users as a member: Grant the members of ad_users the permission to run /usr/sbin/reboot on the idmclient host: Add the /usr/sbin/reboot command to the IdM database of sudo commands: Create a sudo rule named ad_users_reboot : Add the /usr/sbin/reboot command to the ad_users_reboot rule: Apply the ad_users_reboot rule to the IdM idmclient host: Add the ad_users group to the ad_users_reboot rule: Note Propagating the changes from the server to the client can take a few minutes. Verification Log in to the idmclient host as administrator@ad-domain.com , an indirect member of the ad_users group: Optional: Display the sudo commands that administrator@ad-domain.com is allowed to execute: Reboot the machine using sudo . Enter the password for administrator@ad-domain.com when prompted: Additional resources Active Directory users and Identity Management groups Include users and groups from a trusted Active Directory domain into SUDO rules 9.4. Granting sudo access to an IdM user on an IdM client using the IdM Web UI In Identity Management (IdM), you can grant sudo access for a specific command to an IdM user account on a specific IdM host. First, add a sudo command and then create a sudo rule for one or more commands. Complete this procedure to create the idm_user_reboot sudo rule to grant the idm_user account the permission to run the /usr/sbin/reboot command on the idmclient machine. Prerequisites You are logged in as IdM administrator. You have created a user account for idm_user in IdM and unlocked the account by creating a password for the user. For details on adding a new IdM user using the command line, see Adding users using the command line . No local idm_user account is present on the idmclient host. The idm_user user is not listed in the local /etc/passwd file. Procedure Add the /usr/sbin/reboot command to the IdM database of sudo commands: Navigate to Policy → Sudo → Sudo Commands . Click Add in the upper right corner to open the Add sudo command dialog box. Enter the command you want the user to be able to perform using sudo : /usr/sbin/reboot . Figure 9.1. Adding IdM sudo command Click Add . Use the new sudo command entry to create a sudo rule to allow idm_user to reboot the idmclient machine: Navigate to Policy → Sudo → Sudo rules . Click Add in the upper right corner to open the Add sudo rule dialog box. Enter the name of the sudo rule: idm_user_reboot . Click Add and Edit . Specify the user: In the Who section, check the Specified Users and Groups radio button. In the User category the rule applies to subsection, click Add to open the Add users into sudo rule "idm_user_reboot" dialog box. In the Add users into sudo rule "idm_user_reboot" dialog box in the Available column, check the idm_user checkbox, and move it to the Prospective column. Click Add . Specify the host: In the Access this host section, check the Specified Hosts and Groups radio button. In the Host category this rule applies to subsection, click Add to open the Add hosts into sudo rule "idm_user_reboot" dialog box. In the Add hosts into sudo rule "idm_user_reboot" dialog box in the Available column, check the idmclient.idm.example.com checkbox, and move it to the Prospective column. Click Add . Specify the commands: In the Command category the rule applies to subsection of the Run Commands section, check the Specified Commands and Groups radio button. In the Sudo Allow Commands subsection, click Add to open the Add allow sudo commands into sudo rule "idm_user_reboot" dialog box.
In the Add allow sudo commands into sudo rule "idm_user_reboot" dialog box in the Available column, check the /usr/sbin/reboot checkbox, and move it to the Prospective column. Click Add to return to the idm_user_reboot page. Figure 9.2. Adding IdM sudo rule Click Save in the top left corner. The new rule is enabled by default. Note Propagating the changes from the server to the client can take a few minutes. Verification Log in to idmclient as idm_user . Reboot the machine using sudo . Enter the password for idm_user when prompted: If the sudo rule is configured correctly, the machine reboots. 9.5. Creating a sudo rule on the CLI that runs a command as a service account on an IdM client In IdM, you can configure a sudo rule with a RunAs alias to run a sudo command as another user or group. For example, you might have an IdM client that hosts a database application, and you need to run commands as the local service account that corresponds to that application. Use this example to create a sudo rule on the command line called run_third-party-app_report to allow the idm_user account to run the /opt/third-party-app/bin/report command as the thirdpartyapp service account on the idmclient host. Prerequisites You are logged in as IdM administrator. You have created a user account for idm_user in IdM and unlocked the account by creating a password for the user. For details on adding a new IdM user using the CLI, see Adding users using the command line . No local idm_user account is present on the idmclient host. The idm_user user is not listed in the local /etc/passwd file. You have a custom application named third-party-app installed on the idmclient host. The report command for the third-party-app application is installed as /opt/third-party-app/bin/report . You have created a local service account named thirdpartyapp to execute commands for the third-party-app application. Procedure Retrieve a Kerberos ticket as the IdM admin . Add the /opt/third-party-app/bin/report command to the IdM database of sudo commands: Create a sudo rule named run_third-party-app_report : Use the --users= <user> option to specify the RunAs user for the sudorule-add-runasuser command: The user (or group specified with the --groups=* option) can be external to IdM, such as a local service account or an Active Directory user. Do not add a % prefix for group names. Add the /opt/third-party-app/bin/report command to the run_third-party-app_report rule: Apply the run_third-party-app_report rule to the IdM idmclient host: Add the idm_user account to the run_third-party-app_report rule: Note Propagating the changes from the server to the client can take a few minutes. Verification Log in to the idmclient host as the idm_user account. Test the new sudo rule: Display which sudo rules the idm_user account is allowed to perform. Run the report command as the thirdpartyapp service account. 9.6. Creating a sudo rule in the IdM WebUI that runs a command as a service account on an IdM client In IdM, you can configure a sudo rule with a RunAs alias to run a sudo command as another user or group. For example, you might have an IdM client that hosts a database application, and you need to run commands as the local service account that corresponds to that application. Use this example to create a sudo rule in the IdM WebUI called run_third-party-app_report to allow the idm_user account to run the /opt/third-party-app/bin/report command as the thirdpartyapp service account on the idmclient host.
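Before the Web UI walk-through below, here is a condensed, hedged sketch of the equivalent CLI sequence that Section 9.5 describes step by step, using the rule, host, and account names from that example; the ipa output is omitted.

#!/usr/bin/env bash
# Sketch: CLI version of the RunAs sudo rule described in Section 9.5.
set -euo pipefail

kinit admin
ipa sudocmd-add /opt/third-party-app/bin/report
ipa sudorule-add run_third-party-app_report
ipa sudorule-add-runasuser run_third-party-app_report --users=thirdpartyapp
ipa sudorule-add-allow-command run_third-party-app_report \
    --sudocmds="/opt/third-party-app/bin/report"
ipa sudorule-add-host run_third-party-app_report --hosts=idmclient.idm.example.com
ipa sudorule-add-user run_third-party-app_report --users=idm_user

# Verification (run on idmclient as idm_user):
#   sudo -l
#   sudo -u thirdpartyapp /opt/third-party-app/bin/report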
Prerequisites You are logged in as IdM administrator. You have created a user account for idm_user in IdM and unlocked the account by creating a password for the user. For details on adding a new IdM user using the CLI, see Adding users using the command line . No local idm_user account is present on the idmclient host. The idm_user user is not listed in the local /etc/passwd file. You have a custom application named third-party-app installed on the idmclient host. The report command for the third-party-app application is installed as /opt/third-party-app/bin/report . You have created a local service account named thirdpartyapp to execute commands for the third-party-app application. Procedure Add the /opt/third-party-app/bin/report command to the IdM database of sudo commands: Navigate to Policy → Sudo → Sudo Commands . Click Add in the upper right corner to open the Add sudo command dialog box. Enter the command: /opt/third-party-app/bin/report . Click Add . Use the new sudo command entry to create the new sudo rule: Navigate to Policy → Sudo → Sudo rules . Click Add in the upper right corner to open the Add sudo rule dialog box. Enter the name of the sudo rule: run_third-party-app_report . Click Add and Edit . Specify the user: In the Who section, check the Specified Users and Groups radio button. In the User category the rule applies to subsection, click Add to open the Add users into sudo rule "run_third-party-app_report" dialog box. In the Add users into sudo rule "run_third-party-app_report" dialog box in the Available column, check the idm_user checkbox, and move it to the Prospective column. Click Add . Specify the host: In the Access this host section, check the Specified Hosts and Groups radio button. In the Host category this rule applies to subsection, click Add to open the Add hosts into sudo rule "run_third-party-app_report" dialog box. In the Add hosts into sudo rule "run_third-party-app_report" dialog box in the Available column, check the idmclient.idm.example.com checkbox, and move it to the Prospective column. Click Add . Specify the commands: In the Command category the rule applies to subsection of the Run Commands section, check the Specified Commands and Groups radio button. In the Sudo Allow Commands subsection, click Add to open the Add allow sudo commands into sudo rule "run_third-party-app_report" dialog box. In the Add allow sudo commands into sudo rule "run_third-party-app_report" dialog box in the Available column, check the /opt/third-party-app/bin/report checkbox, and move it to the Prospective column. Click Add to return to the run_third-party-app_report page. Specify the RunAs user: In the As Whom section, check the Specified Users and Groups radio button. In the RunAs Users subsection, click Add to open the Add RunAs users into sudo rule "run_third-party-app_report" dialog box. In the Add RunAs users into sudo rule "run_third-party-app_report" dialog box, enter the thirdpartyapp service account in the External box and move it to the Prospective column. Click Add to return to the run_third-party-app_report page. Click Save in the top left corner. The new rule is enabled by default. Figure 9.3. Details of the sudo rule Note Propagating the changes from the server to the client can take a few minutes. Verification Log in to the idmclient host as the idm_user account. Test the new sudo rule: Display which sudo rules the idm_user account is allowed to perform. Run the report command as the thirdpartyapp service account. 9.7.
9.7. Enabling GSSAPI authentication for sudo on an IdM client

Enable Generic Security Service Application Program Interface (GSSAPI) authentication on an IdM client for the sudo and sudo -i commands via the pam_sss_gss.so PAM module. With this configuration, IdM users can authenticate to the sudo command with their Kerberos ticket.

Prerequisites

* You have created a sudo rule for an IdM user that applies to an IdM host. For this example, you have created the idm_user_reboot sudo rule to grant the idm_user account the permission to run the /usr/sbin/reboot command on the idmclient host.
* You need root privileges to modify the /etc/sssd/sssd.conf file and PAM files in the /etc/pam.d/ directory.

Procedure

1. Open the /etc/sssd/sssd.conf configuration file.
2. Add the following entry to the [domain/<domain_name>] section.
3. Save and close the /etc/sssd/sssd.conf file.
4. Restart the SSSD service to load the configuration changes.
5. On RHEL 9.2 or later:
   a. Optional: Determine if you have selected the sssd authselect profile.
   b. If the sssd authselect profile is selected, enable GSSAPI authentication.
   c. If the sssd authselect profile is not selected, select it and enable GSSAPI authentication.
6. On RHEL 9.1 or earlier:
   a. Open the /etc/pam.d/sudo PAM configuration file.
   b. Add the following entry as the first line of the auth section in the /etc/pam.d/sudo file.
   c. Save and close the /etc/pam.d/sudo file.

Verification

1. Log in to the host as the idm_user account.
2. Verify that you have a ticket-granting ticket as the idm_user account.
3. Optional: If you do not have Kerberos credentials for the idm_user account, delete your current Kerberos credentials and request the correct ones.
4. Reboot the machine using sudo, without specifying a password. (A condensed shell sketch of this procedure follows the resource list.)

Additional resources

* The GSSAPI entry in the IdM terminology listing
* Granting sudo access to an IdM user on an IdM client using IdM Web UI
* Granting sudo access to an IdM user on an IdM client using the CLI
* pam_sss_gss(8) and sssd.conf(5) man pages on your system
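A condensed hedged sketch of the steps above for a RHEL 9.2 or later client. The sssd.conf edit is left as a manual step because the pam_gssapi_services line must sit inside the [domain/<domain_name>] section, not merely at the end of the file:

```bash
# Hedged sketch for RHEL 9.2 or later. Step 1 is a manual edit: add
#   pam_gssapi_services = sudo, sudo-i
# under the [domain/<domain_name>] section of /etc/sssd/sssd.conf
# (do not append blindly; it must land inside the domain section).
sudo systemctl restart sssd

# Wire GSSAPI into the PAM stacks through authselect.
authselect current                           # inspect the active profile
sudo authselect enable-feature with-gssapi   # if the sssd profile is active
# sudo authselect select sssd with-gssapi    # if no sssd profile was selected

# Quick check: with a valid TGT, sudo should not prompt for a password.
klist && sudo /usr/sbin/reboot
```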
9.8. Enabling GSSAPI authentication and enforcing Kerberos authentication indicators for sudo on an IdM client

Enable Generic Security Service Application Program Interface (GSSAPI) authentication on an IdM client for the sudo and sudo -i commands via the pam_sss_gss.so PAM module. Additionally, only users who have logged in with a smart card will authenticate to those commands with their Kerberos ticket.

Note: You can use this procedure as a template to configure GSSAPI authentication with SSSD for other PAM-aware services, and further restrict access to only those users that have a specific authentication indicator attached to their Kerberos ticket.

Prerequisites

* You have created a sudo rule for an IdM user that applies to an IdM host. For this example, you have created the idm_user_reboot sudo rule to grant the idm_user account the permission to run the /usr/sbin/reboot command on the idmclient host.
* You have configured smart card authentication for the idmclient host.
* You need root privileges to modify the /etc/sssd/sssd.conf file and PAM files in the /etc/pam.d/ directory.

Procedure

1. Open the /etc/sssd/sssd.conf configuration file.
2. Add the following entries to the [domain/<domain_name>] section.
3. Save and close the /etc/sssd/sssd.conf file.
4. Restart the SSSD service to load the configuration changes.
5. On RHEL 9.2 or later:
   a. Determine if you have selected the sssd authselect profile.
   b. Optional: Select the sssd authselect profile.
   c. Enable GSSAPI authentication.
   d. Configure the system to authenticate only users with smart cards.
6. On RHEL 9.1 or earlier:
   a. Open the /etc/pam.d/sudo PAM configuration file.
   b. Add the following entry as the first line of the auth section in the /etc/pam.d/sudo file.
   c. Save and close the /etc/pam.d/sudo file.
   d. Open the /etc/pam.d/sudo-i PAM configuration file.
   e. Add the following entry as the first line of the auth section in the /etc/pam.d/sudo-i file.
   f. Save and close the /etc/pam.d/sudo-i file.

Verification

1. Log in to the host as the idm_user account and authenticate with a smart card.
2. Verify that you have a ticket-granting ticket as the smart card user.
3. Display which sudo rules the idm_user account is allowed to perform.
4. Reboot the machine using sudo, without specifying a password.

Additional resources

* SSSD options controlling GSSAPI authentication for PAM services
* The GSSAPI entry in the IdM terminology listing
* Configuring Identity Management for smart card authentication
* Kerberos authentication indicators
* Granting sudo access to an IdM user on an IdM client using IdM Web UI
* Granting sudo access to an IdM user on an IdM client using the CLI
* pam_sss_gss(8) and sssd.conf(5) man pages on your system

9.9. SSSD options controlling GSSAPI authentication for PAM services

You can use the following options for the /etc/sssd/sssd.conf configuration file to adjust the GSSAPI configuration within the SSSD service.

pam_gssapi_services
  GSSAPI authentication with SSSD is disabled by default. You can use this option to specify a comma-separated list of PAM services that are allowed to try GSSAPI authentication using the pam_sss_gss.so PAM module. To explicitly disable GSSAPI authentication, set this option to -.

pam_gssapi_indicators_map
  This option only applies to Identity Management (IdM) domains. Use this option to list Kerberos authentication indicators that are required to grant PAM access to a service. Pairs must be in the format <PAM_service>:<required_authentication_indicator>. Valid authentication indicators are:
  * otp for two-factor authentication
  * radius for RADIUS authentication
  * pkinit for PKINIT, smart card, or certificate authentication
  * hardened for hardened passwords

pam_gssapi_check_upn
  This option is enabled and set to true by default. If this option is enabled, the SSSD service requires that the user name matches the Kerberos credentials. If false, the pam_sss_gss.so PAM module authenticates every user that is able to obtain the required service ticket.

Examples

The following options enable Kerberos authentication for the sudo and sudo-i services, require that sudo users authenticate with a one-time password, and require that user names match the Kerberos principal. Because these settings are in the [pam] section, they apply to all domains.

You can also set these options in individual [domain] sections to overwrite any global values in the [pam] section. The following options apply different GSSAPI settings to each domain:

For the idm.example.com domain:
* Enable GSSAPI authentication for the sudo and sudo -i services.
* Require certificate or smart card authentication authenticators for the sudo command.
* Require one-time password authentication authenticators for the sudo -i command.
* Enforce matching user names and Kerberos principals.

For the ad.example.com domain:
* Enable GSSAPI authentication only for the sudo service.
* Do not enforce matching user names and principals.

After editing these options, validate the configuration as shown in the sketch below.
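A small hedged check to run after changing any of these options. The sssctl utility is provided by the sssd-tools package; its availability is an assumption about your installation, not a step from the original procedure:

```bash
# Validate sssd.conf after editing the [pam] or [domain] options above.
# sssctl ships in the sssd-tools package (assumed installed).
sudo sssctl config-check

# Reload SSSD so the new GSSAPI options take effect.
sudo systemctl restart sssd
```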
Additional resources

* Kerberos authentication indicators

9.10. Troubleshooting GSSAPI authentication for sudo

If you are unable to authenticate to the sudo service with a Kerberos ticket from IdM, use the following scenarios to troubleshoot your configuration.

Prerequisites

* You have enabled GSSAPI authentication for the sudo service. See Enabling GSSAPI authentication for sudo on an IdM client.
* You need root privileges to modify the /etc/sssd/sssd.conf file and PAM files in the /etc/pam.d/ directory.

Procedure

* If you see the error "Server not found in Kerberos database", the Kerberos service might not be able to resolve the correct realm for the service ticket based on the host name. In this situation, add the hostname directly to the [domain_realm] section in the /etc/krb5.conf Kerberos configuration file.
* If you see the error "No Kerberos credentials available", you do not have any Kerberos credentials. In this situation, retrieve Kerberos credentials with the kinit utility or authenticate with SSSD.
* If you see either of the following errors in the /var/log/sssd/sssd_pam.log log file, the Kerberos credentials do not match the username of the user currently logged in:
  - User with UPN [<UPN>] was not found.
  - UPN [<UPN>] does not match target user [<username>].
  In this situation, verify that you authenticated with SSSD, or consider disabling the pam_gssapi_check_upn option in the /etc/sssd/sssd.conf file.
* For additional troubleshooting, you can enable debugging output for the pam_sss_gss.so PAM module. Add the debug option at the end of all pam_sss_gss.so entries in PAM files, such as /etc/pam.d/sudo and /etc/pam.d/sudo-i. Then try to authenticate with the pam_sss_gss.so module and review the console output. In the example in the command listing, the user did not have any Kerberos credentials. A sketch of enabling this debug output follows.
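A hedged sketch of turning that debugging on. The sed expression assumes the pam_sss_gss.so entries end their lines, as in the stock examples shown in the command listing; review the files before editing:

```bash
# Append the 'debug' flag to every pam_sss_gss.so entry.
# Assumption: the module name ends the line, as in the examples above.
sudo sed -i 's/pam_sss_gss\.so$/pam_sss_gss.so debug/' \
    /etc/pam.d/sudo /etc/pam.d/sudo-i

# Reproduce the failure in another terminal, then inspect the PAM log.
sudo tail -f /var/log/sssd/sssd_pam.log
```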
9.11. Using an Ansible playbook to ensure sudo access for an IdM user on an IdM client

In Identity Management (IdM), you can ensure that sudo access to a specific command is granted to an IdM user account on a specific IdM host. Complete this procedure to ensure that a sudo rule named idm_user_reboot exists. The rule grants idm_user the permission to run the /usr/sbin/reboot command on the idmclient machine.

Prerequisites

* You have configured your Ansible control node to meet the following requirements:
  - You are using Ansible version 2.15 or later.
  - You have installed the ansible-freeipa package.
  - The example assumes that in the ~/MyPlaybooks/ directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server.
  - The example assumes that the secret.yml Ansible vault stores your ipaadmin_password.
* The target node, that is, the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server, or replica.
* You have ensured the presence of a user account for idm_user in IdM and unlocked the account by creating a password for the user. For details on adding a new IdM user using the command line, see Adding users using the command line.
* No local idm_user account exists on idmclient. The idm_user user is not listed in the /etc/passwd file on idmclient.

Procedure

1. Create an inventory file, for example inventory.file, and define ipaservers in it.
2. Add one or more sudo commands:
   a. Create an ensure-reboot-sudocmd-is-present.yml Ansible playbook that ensures the presence of the /usr/sbin/reboot command in the IdM database of sudo commands. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/sudocmd/ensure-sudocmd-is-present.yml file.
   b. Run the playbook.
3. Create a sudo rule that references the commands:
   a. Create an ensure-sudorule-for-idmuser-on-idmclient-is-present.yml Ansible playbook that uses the sudo command entry to ensure the presence of a sudo rule. The sudo rule allows idm_user to reboot the idmclient machine. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/sudorule/ensure-sudorule-is-present.yml file.
   b. Run the playbook.

Verification

Test that the sudo rule whose presence you have ensured on the IdM server works on idmclient by verifying that idm_user can reboot idmclient using sudo. Note that it can take a few minutes for the changes made on the server to take effect on the client.

1. Log in to idmclient as idm_user.
2. Reboot the machine using sudo. Enter the password for idm_user when prompted.

If sudo is configured correctly, the machine reboots.

Additional resources

* See the README-sudocmd.md, README-sudocmdgroup.md, and README-sudorule.md files in the /usr/share/doc/ansible-freeipa/ directory. | [
"kinit admin",
"ipa sudocmd-add /usr/sbin/reboot ------------------------------------- Added Sudo Command \"/usr/sbin/reboot\" ------------------------------------- Sudo Command: /usr/sbin/reboot",
"ipa sudorule-add idm_user_reboot --------------------------------- Added Sudo Rule \"idm_user_reboot\" --------------------------------- Rule name: idm_user_reboot Enabled: TRUE",
"ipa sudorule-add-allow-command idm_user_reboot --sudocmds '/usr/sbin/reboot' Rule name: idm_user_reboot Enabled: TRUE Sudo Allow Commands: /usr/sbin/reboot ------------------------- Number of members added 1 -------------------------",
"ipa sudorule-add-host idm_user_reboot --hosts idmclient.idm.example.com Rule name: idm_user_reboot Enabled: TRUE Hosts: idmclient.idm.example.com Sudo Allow Commands: /usr/sbin/reboot ------------------------- Number of members added 1 -------------------------",
"ipa sudorule-add-user idm_user_reboot --users idm_user Rule name: idm_user_reboot Enabled: TRUE Users: idm_user Hosts: idmclient.idm.example.com Sudo Allow Commands: /usr/sbin/reboot ------------------------- Number of members added 1 -------------------------",
"ipa sudorule-mod idm_user_reboot --setattr sudonotbefore=20251231123400Z",
"ipa sudorule-mod idm_user_reboot --setattr sudonotafter=20261231123400Z",
"[idm_user@idmclient ~]USD sudo -l Matching Defaults entries for idm_user on idmclient : !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep=\"COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS\", env_keep+=\"MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE\", env_keep+=\"LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES\", env_keep+=\"LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE\", env_keep+=\"LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY KRB5CCNAME\", secure_path=/sbin\\:/bin\\:/usr/sbin\\:/usr/bin User idm_user may run the following commands on idmclient : (root) /usr/sbin/reboot",
"[idm_user@idmclient ~]USD sudo /usr/sbin/reboot [sudo] password for idm_user:",
"ipa group-add --desc='AD users external map' ad_users_external --external ------------------------------- Added group \"ad_users_external\" ------------------------------- Group name: ad_users_external Description: AD users external map",
"ipa group-add --desc='AD users' ad_users ---------------------- Added group \"ad_users\" ---------------------- Group name: ad_users Description: AD users GID: 129600004",
"ipa group-add-member ad_users_external --external \"[email protected]\" [member user]: [member group]: Group name: ad_users_external Description: AD users external map External member: S-1-5-21-3655990580-1375374850-1633065477-513 ------------------------- Number of members added 1 -------------------------",
"ipa group-add-member ad_users --groups ad_users_external Group name: ad_users Description: AD users GID: 129600004 Member groups: ad_users_external ------------------------- Number of members added 1 -------------------------",
"ipa sudocmd-add /usr/sbin/reboot ------------------------------------- Added Sudo Command \"/usr/sbin/reboot\" ------------------------------------- Sudo Command: /usr/sbin/reboot",
"ipa sudorule-add ad_users_reboot --------------------------------- Added Sudo Rule \"ad_users_reboot\" --------------------------------- Rule name: ad_users_reboot Enabled: True",
"ipa sudorule-add-allow-command ad_users_reboot --sudocmds '/usr/sbin/reboot' Rule name: ad_users_reboot Enabled: True Sudo Allow Commands: /usr/sbin/reboot ------------------------- Number of members added 1 -------------------------",
"ipa sudorule-add-host ad_users_reboot --hosts idmclient.idm.example.com Rule name: ad_users_reboot Enabled: True Hosts: idmclient.idm.example.com Sudo Allow Commands: /usr/sbin/reboot ------------------------- Number of members added 1 -------------------------",
"ipa sudorule-add-user ad_users_reboot --groups ad_users Rule name: ad_users_reboot Enabled: TRUE User Groups: ad_users Hosts: idmclient.idm.example.com Sudo Allow Commands: /usr/sbin/reboot ------------------------- Number of members added 1 -------------------------",
"ssh [email protected]@ipaclient Password:",
"[[email protected]@idmclient ~]USD sudo -l Matching Defaults entries for [email protected] on idmclient : !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep=\"COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS\", env_keep+=\"MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE\", env_keep+=\"LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES\", env_keep+=\"LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE\", env_keep+=\"LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY KRB5CCNAME\", secure_path=/sbin\\:/bin\\:/usr/sbin\\:/usr/bin User [email protected] may run the following commands on idmclient : (root) /usr/sbin/reboot",
"[[email protected]@idmclient ~]USD sudo /usr/sbin/reboot [sudo] password for [email protected]:",
"sudo /usr/sbin/reboot [sudo] password for idm_user:",
"kinit admin",
"ipa sudocmd-add /opt/third-party-app/bin/report ---------------------------------------------------- Added Sudo Command \"/opt/third-party-app/bin/report\" ---------------------------------------------------- Sudo Command: /opt/third-party-app/bin/report",
"ipa sudorule-add run_third-party-app_report -------------------------------------------- Added Sudo Rule \"run_third-party-app_report\" -------------------------------------------- Rule name: run_third-party-app_report Enabled: TRUE",
"ipa sudorule-add-runasuser run_third-party-app_report --users= thirdpartyapp Rule name: run_third-party-app_report Enabled: TRUE RunAs External User: thirdpartyapp ------------------------- Number of members added 1 -------------------------",
"ipa sudorule-add-allow-command run_third-party-app_report --sudocmds '/opt/third-party-app/bin/report' Rule name: run_third-party-app_report Enabled: TRUE Sudo Allow Commands: /opt/third-party-app/bin/report RunAs External User: thirdpartyapp ------------------------- Number of members added 1 -------------------------",
"ipa sudorule-add-host run_third-party-app_report --hosts idmclient.idm.example.com Rule name: run_third-party-app_report Enabled: TRUE Hosts: idmclient.idm.example.com Sudo Allow Commands: /opt/third-party-app/bin/report RunAs External User: thirdpartyapp ------------------------- Number of members added 1 -------------------------",
"ipa sudorule-add-user run_third-party-app_report --users idm_user Rule name: run_third-party-app_report Enabled: TRUE Users: idm_user Hosts: idmclient.idm.example.com Sudo Allow Commands: /opt/third-party-app/bin/report RunAs External User: thirdpartyapp ------------------------- Number of members added 1",
"[idm_user@idmclient ~]USD sudo -l Matching Defaults entries for [email protected] on idmclient: !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep=\"COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS\", env_keep+=\"MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE\", env_keep+=\"LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES\", env_keep+=\"LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE\", env_keep+=\"LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY KRB5CCNAME\", secure_path=/sbin\\:/bin\\:/usr/sbin\\:/usr/bin User [email protected] may run the following commands on idmclient: (thirdpartyapp) /opt/third-party-app/bin/report",
"[idm_user@idmclient ~]USD sudo -u thirdpartyapp /opt/third-party-app/bin/report [sudo] password for [email protected]: Executing report Report successful.",
"[idm_user@idmclient ~]USD sudo -l Matching Defaults entries for [email protected] on idmclient: !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep=\"COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS\", env_keep+=\"MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE\", env_keep+=\"LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES\", env_keep+=\"LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE\", env_keep+=\"LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY KRB5CCNAME\", secure_path=/sbin\\:/bin\\:/usr/sbin\\:/usr/bin User [email protected] may run the following commands on idmclient: (thirdpartyapp) /opt/third-party-app/bin/report",
"[idm_user@idmclient ~]USD sudo -u thirdpartyapp /opt/third-party-app/bin/report [sudo] password for [email protected]: Executing report Report successful.",
"[domain/ <domain_name> ] pam_gssapi_services = sudo, sudo-i",
"systemctl restart sssd",
"authselect current Profile ID: sssd",
"authselect enable-feature with-gssapi",
"authselect select sssd with-gssapi",
"#%PAM-1.0 auth sufficient pam_sss_gss.so auth include system-auth account include system-auth password include system-auth session include system-auth",
"ssh -l [email protected] localhost [email protected]'s password:",
"[idmuser@idmclient ~]USD klist Ticket cache: KCM:1366201107 Default principal: [email protected] Valid starting Expires Service principal 01/08/2021 09:11:48 01/08/2021 19:11:48 krbtgt/[email protected] renew until 01/15/2021 09:11:44",
"[idm_user@idmclient ~]USD kdestroy -A [idm_user@idmclient ~]USD kinit [email protected] Password for [email protected] :",
"[idm_user@idmclient ~]USD sudo /usr/sbin/reboot",
"[domain/ <domain_name> ] pam_gssapi_services = sudo, sudo-i pam_gssapi_indicators_map = sudo:pkinit, sudo-i:pkinit",
"systemctl restart sssd",
"authselect current Profile ID: sssd",
"authselect select sssd",
"authselect enable-feature with-gssapi",
"authselect with-smartcard-required",
"#%PAM-1.0 auth sufficient pam_sss_gss.so auth include system-auth account include system-auth password include system-auth session include system-auth",
"#%PAM-1.0 auth sufficient pam_sss_gss.so auth include sudo account include sudo password include sudo session optional pam_keyinit.so force revoke session include sudo",
"ssh -l [email protected] localhost PIN for smart_card",
"[idm_user@idmclient ~]USD klist Ticket cache: KEYRING:persistent:1358900015:krb_cache_TObtNMd Default principal: [email protected] Valid starting Expires Service principal 02/15/2021 16:29:48 02/16/2021 02:29:48 krbtgt/[email protected] renew until 02/22/2021 16:29:44",
"[idm_user@idmclient ~]USD sudo -l Matching Defaults entries for idmuser on idmclient : !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep=\"COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS\", env_keep+=\"MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE\", env_keep+=\"LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES\", env_keep+=\"LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE\", env_keep+=\"LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY KRB5CCNAME\", secure_path=/sbin\\:/bin\\:/usr/sbin\\:/usr/bin User idm_user may run the following commands on idmclient : (root) /usr/sbin/reboot",
"[idm_user@idmclient ~]USD sudo /usr/sbin/reboot",
"[pam] pam_gssapi_services = sudo , sudo-i pam_gssapi_indicators_map = sudo:otp pam_gssapi_check_upn = true",
"[domain/ idm.example.com ] pam_gssapi_services = sudo, sudo-i pam_gssapi_indicators_map = sudo:pkinit , sudo-i:otp pam_gssapi_check_upn = true [domain/ ad.example.com ] pam_gssapi_services = sudo pam_gssapi_check_upn = false",
"Server not found in Kerberos database",
"[idm-user@idm-client ~]USD cat /etc/krb5.conf [domain_realm] .example.com = EXAMPLE.COM example.com = EXAMPLE.COM server.example.com = EXAMPLE.COM",
"No Kerberos credentials available",
"[idm-user@idm-client ~]USD kinit [email protected] Password for [email protected] :",
"User with UPN [ <UPN> ] was not found. UPN [ <UPN> ] does not match target user [ <username> ].",
"[idm-user@idm-client ~]USD cat /etc/sssd/sssd.conf pam_gssapi_check_upn = false",
"cat /etc/pam.d/sudo #%PAM-1.0 auth sufficient pam_sss_gss.so debug auth include system-auth account include system-auth password include system-auth session include system-auth",
"cat /etc/pam.d/sudo-i #%PAM-1.0 auth sufficient pam_sss_gss.so debug auth include sudo account include sudo password include sudo session optional pam_keyinit.so force revoke session include sudo",
"[idm-user@idm-client ~]USD sudo ls -l /etc/sssd/sssd.conf pam_sss_gss: Initializing GSSAPI authentication with SSSD pam_sss_gss: Switching euid from 0 to 1366201107 pam_sss_gss: Trying to establish security context pam_sss_gss: SSSD User name: [email protected] pam_sss_gss: User domain: idm.example.com pam_sss_gss: User principal: pam_sss_gss: Target name: [email protected] pam_sss_gss: Using ccache: KCM: pam_sss_gss: Acquiring credentials, principal name will be derived pam_sss_gss: Unable to read credentials from [KCM:] [maj:0xd0000, min:0x96c73ac3] pam_sss_gss: GSSAPI: Unspecified GSS failure. Minor code may provide more information pam_sss_gss: GSSAPI: No credentials cache found pam_sss_gss: Switching euid from 1366200907 to 0 pam_sss_gss: System error [5]: Input/output error",
"[ipaservers] server.idm.example.com",
"--- - name: Playbook to manage sudo command hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure sudo command is present - ipasudocmd: ipaadmin_password: \"{{ ipaadmin_password }}\" name: /usr/sbin/reboot state: present",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory /ensure-reboot-sudocmd-is-present.yml",
"--- - name: Tests hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure a sudorule is present granting idm_user the permission to run /usr/sbin/reboot on idmclient - ipasudorule: ipaadmin_password: \"{{ ipaadmin_password }}\" name: idm_user_reboot description: A test sudo rule. allow_sudocmd: /usr/sbin/reboot host: idmclient.idm.example.com user: idm_user state: present",
"ansible-playbook -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory /ensure-sudorule-for-idmuser-on-idmclient-is-present.yml",
"sudo /usr/sbin/reboot [sudo] password for idm_user:"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_idm_users_groups_hosts_and_access_control_rules/granting-sudo-access-to-an-idm-user-on-an-idm-client_managing-users-groups-hosts |
Appendix A. Using your subscription | Appendix A. Using your subscription

Integration is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal.

Accessing your account

1. Go to access.redhat.com.
2. If you do not already have an account, create one.
3. Log in to your account.

Activating a subscription

1. Go to access.redhat.com.
2. Navigate to My Subscriptions.
3. Navigate to Activate a subscription and enter your 16-digit activation number.

Downloading zip and tar files

To access zip or tar files, use the customer portal to find the relevant files for download. If you are using RPM packages, this step is not required.

1. Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads.
2. Scroll down to INTEGRATION AND AUTOMATION.
3. Click Red Hat Integration to display the Red Hat Integration downloads page.
4. Click the Download link for your component.

Revised on 2024-03-22 13:11:14 UTC | null | https://docs.redhat.com/en/documentation/red_hat_integration/2023.q4/html/release_notes_for_red_hat_integration_2023.q4/using_your_subscription
Chapter 7. Adding additional storage to your instance | Chapter 7. Adding additional storage to your instance

Some cloud instances do not have enough storage in the default disk for the RHEL AI end-to-end workflow. You can add a directory that holds additional data.

7.1. Adding a data storage directory to your instance

By default, RHEL AI holds configuration data in the $HOME directory. You can change this default to a different directory for holding InstructLab data.

Prerequisites

* You have a Red Hat Enterprise Linux AI instance.
* You added an extra storage disk to your instance.

Procedure

1. Configure the ILAB_HOME environment variable by writing it to the $HOME/.bash_profile file: $ echo 'export ILAB_HOME=/mnt' >> $HOME/.bash_profile
2. Make that change effective by reloading the $HOME/.bash_profile file: $ source $HOME/.bash_profile
3. Create a containers directory: $ mkdir /mnt/.config/containers
4. Copy the storage.conf file into the containers directory: $ cp /etc/skel/.config/containers/storage.conf /mnt/.config/containers/ | [
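"df -h /mnt # hedged pre-check, not in the original procedure: confirm the extra storage disk is actually mounted at /mnt before pointing ILAB_HOME at it",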
"echo 'export ILAB_HOME=/mnt' >> USDHOME/.bash_profile",
"source USDHOME/.bash_profile",
"mkdir /mnt/.config/containers",
"cp /etc/skel/.config/containers/storage.conf /mnt/.config/containers/"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.4/html/installing/add_additional_storage |
Chapter 46. ProbeUploadService | Chapter 46. ProbeUploadService

46.1. GetExistingProbes

POST /v1/probeupload/getexisting

46.1.1. Description

46.1.2. Parameters

46.1.2.1. Query Parameters

Name | Description | Required | Default | Pattern
filesToCheck | | String | - | null

46.1.3. Return Type

V1GetExistingProbesResponse

46.1.4. Content Type

application/json

46.1.5. Responses

Table 46.1. HTTP Response Codes
Code | Message | Datatype
200 | A successful response. | V1GetExistingProbesResponse
0 | An unexpected error response. | GooglerpcStatus

46.1.6. Samples

46.1.7. Common object reference

46.1.7.1. GooglerpcStatus

Field Name | Required | Nullable | Type | Description | Format
code | | | Integer | | int32
message | | | String | |
details | | | List of ProtobufAny | |

46.1.7.2. ProtobufAny

Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message.

Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type.

Example 1: Pack and unpack a message in C++.

Example 2: Pack and unpack a message in Java.

The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z".

46.1.7.2.1. JSON representation

The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example:

If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]):

Field Name | Required | Nullable | Type | Description | Format
@type | | | String | A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one "/" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration). The name should be in a canonical form (e.g., leading "." is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http, https, or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http, https (or the empty scheme) might be used with implementation specific semantics. |

46.1.7.3. V1GetExistingProbesResponse

Field Name | Required | Nullable | Type | Description | Format
existingFiles | | | List of V1ProbeUploadManifestFile | |

Putting the endpoint and response types together, a hedged invocation sketch follows.
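The ROX_ENDPOINT address, the ROX_API_TOKEN variable, and the probe file path placeholder below are assumptions about a typical RHACS deployment; they are not part of this reference:

```bash
# Query which probe files already exist in Central (hedged sketch).
# ROX_ENDPOINT and ROX_API_TOKEN are assumed environment variables.
curl -sk -X POST \
  -H "Authorization: Bearer ${ROX_API_TOKEN}" \
  "https://${ROX_ENDPOINT}/v1/probeupload/getexisting?filesToCheck=<probe-file-path>"
```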
46.1.7.4. V1ProbeUploadManifestFile

Field Name | Required | Nullable | Type | Description | Format
name | | | String | |
size | | | String | | int64
crc32 | | | Long | | int64 | [
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/api_reference/probeuploadservice |
4.80. gpm | 4.80. gpm

4.80.1. RHBA-2011:1092 - gpm bug fix update

Updated gpm packages that fix one bug are now available for Red Hat Enterprise Linux 6. The gpm packages contain a program handling mouse services on a system console device.

Bug Fix

BZ# 684920: Prior to this update, it was not possible to build the gpm packages on the supported platforms if the emacs package was installed. This problem has been resolved with this update and no longer occurs.

All users of gpm are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/gpm
Python SDK Guide | Python SDK Guide Red Hat Virtualization 4.4 Using the Red Hat Virtualization Python SDK Red Hat Virtualization Documentation Team Red Hat Customer Content Services [email protected] Abstract This guide describes how to install and work with version 4 of the Red Hat Virtualization Python software development kit. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/python_sdk_guide/index |
Chapter 7. Performing operations with the Shared File Systems service (manila) | Chapter 7. Performing operations with the Shared File Systems service (manila)

Cloud users can create and manage shares from the available share types in the Shared File Systems service (manila).

7.1. Discovering share types

As a cloud user, you must specify a share type when you create a share.

Procedure

1. Discover the available share types. The command output lists the name and ID of the share type.

7.2. Creating NFS or native CephFS shares

Create an NFS or native CephFS share to read and write data. To create an NFS or native CephFS share, use a command similar to the manila create example in the command listing, and replace the following values:

* sharetype applies settings associated with the specified share type. Optional: if not supplied, the default share type is used.
* sharename is the name of the share. Optional: shares are not required to have a name, nor is the name guaranteed to be unique.
* proto is the share protocol you want to use. For CephFS with NFS, proto is nfs. For native CephFS, proto is cephfs. For NetApp and Dell EMC storage back ends, proto is nfs or cifs.
* GB is the size of the share in gigabytes.

For example, in Section 6.2, "Creating share types", the cloud administrator created a default share type with a CephFS back end and another share type named netapp with a NetApp back end.

Procedure

1. Create a 10 GB NFS share named share-01. This example does not specify the optional share type because it uses the available default share type created by the cloud administrator. It also uses the shared storage network configured by the cloud administrator.
2. Create a 15 GB native CephFS share named share-02.
3. Create a 20 GB NFS share named share-03 and specify a custom share type and share network.

7.3. Listing shares and exporting information

To verify that you successfully created the shares, complete the following steps.

Procedure

1. List the shares.
2. View the export locations of the share.
3. View the parameters for the share.

Note: You use the export location to mount the share in Section 7.8.2, "Mounting NFS or native CephFS".

7.4. Creating a snapshot of data on a shared file system

A snapshot is a read-only, point-in-time copy of data on a share. You can use a snapshot to recover data lost through accidental data deletion or file system corruption. Snapshots are more space efficient than backups, and they do not impact the performance of the Shared File Systems service (manila).

Prerequisites

* The snapshot_support parameter must equal true on the parent share. You can run the manila show command to verify.

Procedure

1. As a cloud user, create a snapshot of a share:
   * Replace <share> with the name or ID of the share for which you want to create a snapshot.
   * Optional: Replace <snapshot_name> with the name of the snapshot.
   Example output is included in the command listing.
2. Confirm that you created the snapshot:
   * Replace <share> with the name or ID of the share from which you created the snapshot.

A consolidated sketch of this flow follows.
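A hedged sketch of the snapshot check-and-create flow above; share-01 is reused from the earlier examples, and snapshot-01 is an assumed name:

```bash
# Verify the parent share supports snapshots, then snapshot it.
manila show share-01 | grep snapshot_support
manila snapshot-create --name snapshot-01 share-01   # snapshot-01 is an assumed name
manila snapshot-list --share-id share-01             # confirm it reaches 'available'
```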
7.4.1. Creating a share from a snapshot

You can create a share from a snapshot. If the parent share the snapshot was created from has a share type with driver_handles_share_servers set to true, the new share is created on the same share network as the parent.

Note: If the share type of the parent share has driver_handles_share_servers set to true, you cannot change the share network for the share you create from the snapshot.

Prerequisites

* The create_share_from_snapshot_support share attribute is set to true. For more information about share types, see Comparing common capabilities of share types.
* The status attribute of the snapshot is set to available.

Procedure

1. Retrieve the ID of the share snapshot that contains the data that you require for your new share.
2. Retrieve the size of the snapshot. A share created from a snapshot can be larger, but not smaller, than the snapshot.
3. Create a share from a snapshot:
   * Replace <share_protocol> with the protocol, such as NFS.
   * Replace <size> with the size of the share to be created, in GiB.
   * Replace <snapshot_id> with the ID of the snapshot.
   * Replace <name> with the name of the new share.
4. List the shares to confirm that the share was created successfully.
5. View the properties of the new share.

Verification

After you create a snapshot, confirm that the snapshot is available.

* List the snapshots to confirm that they are available.

7.4.2. Deleting a snapshot

When you create a snapshot of a share, you cannot delete the share until you delete all of the snapshots created from that share.

Procedure

1. Identify the snapshot you want to delete and retrieve its ID.
2. Delete the snapshot. Note: Repeat this step for each snapshot that you want to delete.
3. After you delete the snapshot, run the snapshot-list command to confirm that you deleted the snapshot.

7.5. Connecting to a shared network to access shares

When the driver_handles_share_servers parameter equals false, shares are exported to the shared provider network that the administrator made available. As an end user, you must connect your client, such as a Compute instance, to the shared provider network to access your shares.

In this example procedure, the shared provider network is called StorageNFS. StorageNFS is configured when director deploys the Shared File Systems service with the CephFS through NFS back end. Follow similar steps to connect to the network made available by your cloud administrator.

Note: In the example procedure, the IP address family version of the client is not important. The steps in this procedure use IPv4 addressing, but the steps are identical for IPv6.

Procedure

1. Create a security group for the StorageNFS port that allows packets to egress the port, but which does not allow ingress packets from unestablished connections.
2. Create a port on the StorageNFS network with security enforced by the no-ingress security group. Note: StorageNFSSubnet assigned IP address 172.17.5.160 to nfs-port0.
3. Add nfs-port0 to a Compute instance. In addition to its private and floating addresses, the Compute instance is assigned a port with the IP address 172.17.5.160 on the StorageNFS network that you can use to mount NFS shares when access is granted to that address for the share in question.

Note: You might need to adjust the networking configuration on the Compute instance and restart the services for the Compute instance to activate an interface with this address.

7.6. Configuring an IPv6 interface between the network and an instance

When the shared network to which shares are exported uses IPv6 addressing, you might experience an issue with DHCPv6 on the secondary interface. If this issue occurs, configure an IPv6 interface manually on the instance.

Prerequisites

* Connection to a shared network to access shares

Procedure

1. Log in to the instance.
2. Configure the IPv6 interface address.
3. Activate the interface.
4. Ping the IPv6 address in the export location of the share to test interface connectivity. Alternatively, verify that you can reach the NFS server through Telnet. (A one-shot sketch follows.)
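A hedged one-shot version of the manual IPv6 setup above. The address fd00:fd00:fd00:7000::c/64, the interface name eth1, and the export address fd00:fd00:fd00:7000::21 are the example values from this procedure; substitute your own:

```bash
# Manual IPv6 configuration on the instance's secondary interface.
sudo ip address add fd00:fd00:fd00:7000::c/64 dev eth1
sudo ip link set dev eth1 up

# Test connectivity to the NFS export address, falling back to telnet on 2049.
ping -6 -c 3 fd00:fd00:fd00:7000::21 || telnet fd00:fd00:fd00:7000::21 2049
```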
7.7. Granting share access for end-user clients

You must grant end-user clients access to the share so that users can read data from and write data to the share.

You grant a client compute instance access to an NFS share through the IP address of the instance. The user rules for CIFS shares and cephx rules for CephFS shares follow a similar pattern. With user and cephx access types, you can use the same clientidentifier across multiple clients, if required.

Before you can mount a share on a client, such as a compute instance, you must grant the client access to the share by using a command similar to the manila access-allow example in the command listing. Replace the following values:

* share: the share name or ID of the share created in Section 7.2, "Creating NFS or native CephFS shares".
* accesstype: the type of access to be requested on the share. Some types include:
  - user: use to authenticate by user or group name.
  - ip: use to authenticate an instance through its IP address.
  - cephx: use to authenticate by native CephFS client username.
  Note: The type of access depends on the protocol of the share. For CIFS, you can use user. For NFS shares, you must use ip. For native CephFS shares, you must use cephx.
* accesslevel: optional; the default is rw.
  - rw: read-write access to shares.
  - ro: read-only access to shares.
* clientidentifier: varies depending on accesstype. Use an IP address for ip accesstype, a CIFS user or group for user accesstype, and a username string for cephx accesstype.

7.7.1. Granting access to an NFS share

You provide access to NFS shares through IP addresses.

Note: In the example procedure, the IP address family version of the client is not important. The steps in this procedure use IPv4 addressing, but the steps are identical for IPv6.

Procedure

1. Retrieve the IP address of the client compute instance where you plan to mount the share. Make sure that you pick the IP address that corresponds to the network that can reach the shares. In this example, it is the IP address of the StorageNFS network. Note: Access to the share has its own ID (accessid).
2. Verify that the access configuration was successful.

7.7.2. Granting access to a native CephFS share

You provide access to native CephFS shares through Ceph client usernames. The Shared File Systems service (manila) prevents the use of pre-existing Ceph users, so you must create unique Ceph client usernames.

To mount a share, you need a Ceph client username and an access key. You can retrieve access keys by using the Shared File Systems service API. By default, access keys are visible to all users in a project namespace. You can provide the same user with access to different shares in the project namespace. Users can then access the shares by using the CephFS kernel client on the client machine.

Important: Use the native CephFS driver with trusted clients only. For information about native CephFS back-end security, see Native CephFS back-end security in Deploying Red Hat Ceph Storage and Red Hat OpenStack Platform together with director.

Procedure

1. Grant users access to the native CephFS share:
   * Replace <share-02> with either the share name or the share ID.
   * Replace <user-01> with the cephx user.
2. Collect the access key for the user.

A consolidated sketch of both grant flows follows.
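A hedged sketch of the two grant flows above, using the example names share-01/share-02, the example cephx user user-01, and the StorageNFS address 172.17.5.160 from earlier in this chapter:

```bash
# NFS: grant access by client IP, then confirm the rule is 'active'.
manila access-allow share-01 ip 172.17.5.160
manila access-list share-01

# Native CephFS: grant access by cephx user; the access key is shown
# in the access list once the rule is applied.
manila access-allow share-02 cephx user-01
manila access-list share-02
```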
7.7.3. Revoking access to a share

The owner of a share can revoke access to the share for any reason. Complete the following steps to revoke previously-granted access to a share.

Procedure

1. Revoke access to a share. Replace <share> with either the share name or the share ID. An example is included in the command listing.

Note: If you have an existing client that has read-write permissions, you must revoke their access to a share and add a read-only rule if you want the client to have read-only permissions.

7.8. Mounting shares on compute instances

After you grant share access to clients, the clients can mount and use the shares. Any type of client can access shares as long as there is network connectivity to the client.

The steps used to mount an NFS share on a virtual compute instance are similar to the steps to mount an NFS share on a bare-metal compute instance. For more information about how to mount shares on OpenShift containers, see Product Documentation for OpenShift Container Platform.

Note: Client packages for the different protocols must be installed on the Compute instance that mounts the shares. For example, for the Shared File Systems service with CephFS through NFS, the NFS client packages must support NFS 4.1.

7.8.1. Listing share export locations

Retrieve the export locations of shares so that you can mount a share.

Procedure

1. Retrieve the export locations of a share.
2. When multiple export locations exist, choose one for which the value of the preferred metadata field equals True. If no preferred locations exist, you can use any export location.

7.8.2. Mounting NFS or native CephFS

After you create NFS or native CephFS shares and grant share access to end-user clients, users can mount the shares on the client to enable access to data. Any type of client can access shares as long as there is network connectivity to the client.

Prerequisites

* To mount NFS shares, the nfs-utils package must be installed on the client machine.
* To mount native CephFS shares, the ceph-common package must be installed on the client machine. Users access native CephFS shares by using the CephFS kernel client on the client machine.

Procedure

1. Log in to the instance.
2. To mount an NFS share, refer to the mount -t nfs example in the command listing for sample syntax. Replace <172.17.5.13:/volumes/_nogroup/e840b4ae-6a04-49ee-9d6e-67d4999fbc01> with the export location of the share. Retrieve the export location as described in Section 7.8.1, "Listing share export locations".
3. To mount a native CephFS share, refer to the mount -t ceph example in the command listing for sample syntax.
   * Replace <192.168.1.7:6789,192.168.1.8:6789,192.168.1.9:6789:/volumes/_nogroup/4c55ad20-9c55-4a5e-9233-8ac64566b98c> with the export location of the share. Retrieve the export location as described in Section 7.8.1, "Listing share export locations".
   * Replace <user-01> with the cephx user who has access to the share.
   * Replace the secret value with the access key that you collected in Section 7.7.2, "Granting access to a native CephFS share".

Verification

* Verify that the mount command succeeded.

7.9. Deleting shares

The Shared File Systems service (manila) provides no protections to prevent you from deleting your data. The Shared File Systems service does not check whether clients are connected or workloads are running. When you delete a share, you cannot retrieve it.

Warning: Back up your data before you delete a share.

Prerequisites

* If you created snapshots from a share, you must delete all of the snapshots and replicas before you can delete the share. For more information, see Deleting a snapshot.

Procedure

1. Delete a share. Replace <share> with either the share name or the share ID.

A pre-delete safety sketch follows.
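A hedged pre-delete check, reusing the share-01 example: a share cannot be deleted while snapshots created from it remain, and deletion is irreversible.

```bash
# Confirm no snapshots still depend on the share before deleting it.
manila snapshot-list --share-id share-01
manila delete share-01   # run only after the snapshot list is empty
```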
7.10. Listing resource limits of the Shared File Systems service

As a cloud user, you can list the current resource limits. This can help you plan workloads and prepare for any action based on your resource consumption.

Procedure

1. List the resource limits and current resource consumption for the project.

7.11. Troubleshooting operation failures

In the event that Shared File Systems (manila) operations, such as create share or create share group, fail asynchronously, as an end user, you can run queries from the command line for more information about the errors.

7.11.1. Fixing create share or create share group failures

In this example, the goal of the end user is to create a share to host software libraries on several virtual machines. The example deliberately introduces two share creation failures to illustrate how to use the command line to retrieve user support messages.

Procedure

1. To create the share, you can use a share type that specifies some capabilities that you want the share to have. Cloud administrators can create share types. View the available share types. In this example, two share types are available.
2. To use a share type that specifies the driver_handles_share_servers=True capability, you must create a share network on which to export the share. Create a share network from a private project network.
3. Create the share.
4. View the status of the share. In this example, an error occurred during the share creation.
5. To view the user support message, run the message-list command. Use the --resource-id option to filter to the specific share you want to find out about. In the User Message column, notice that the Shared File Systems service failed to create the share because of a capabilities mismatch.
6. To view more message information, run the message-show command, followed by the ID of the message from the message-list command.
7. As the cloud user, you can check capabilities through the share type so you can review the share types available. The difference between the two share types is the value of driver_handles_share_servers.
8. Create a share with the other available share type. In this example, the second share creation attempt fails.
9. View the user support message. The service does not expect a share network for the share type that you used. Without consulting the administrator, you can discover that the administrator has not made available a storage back end that supports exporting shares directly on to your private neutron network.
10. Create the share without the share-network parameter.
11. Ensure that the share was created successfully.
12. Delete the shares and support messages.

7.11.2. Debugging share mounting failures

If you experience an issue when you mount shares, use these verification steps to identify the root cause.

Procedure

1. Verify the access control list of the share to ensure that the rule that corresponds to your client is correct and has been successfully applied. In a successful rule, the state attribute equals active.
2. If the share type parameter is configured to driver_handles_share_servers=False, copy the hostname or IP address from the export location and ping it to confirm connectivity to the NAS server. Note: The IP address is written in universal address format (uaddr), which adds two extra octets (8.1) to represent the NFS service port, 2049.

If these verification steps fail, there might be a network connectivity issue, or your shared file system back-end storage has failed. Collect the log files and contact Red Hat Support. | [
"manila type-list",
"manila create [--share-type <sharetype>] [--name <sharename>] proto GB",
"(user) USD manila create --name share-01 nfs 10",
"(user) USD manila create --name share-02 cephfs 15",
"(user) USD manila create --name share-03 --share-type netapp --share-network mynet nfs 20",
"(user) USD manila list +--------------------------------------+----------+-----+-----------+ | ID | Name | ... | Status +--------------------------------------+----------+-----+-----------+ | 8c3bedd8-bc82-4100-a65d-53ec51b5fe81 | share-01 | ... | available +--------------------------------------+----------+-----+-----------+",
"(user) USD manila share-export-location-list share-01 +------------------------------------------------------------------ | Path | 172.17.5.13:/volumes/_nogroup/e840b4ae-6a04-49ee-9d6e-67d4999fbc01 +------------------------------------------------------------------",
"(user) USD manila share-export-location-show <id>",
"manila show | grep snapshot_support",
"manila snapshot-create [--name <snapshot_name>] <share>",
"+-------------------+--------------------------------------+ | Property | Value | +-------------------+--------------------------------------+ | id | dbdcb91b-82ba-407e-a23d-44ffca4da04c | | share_id | ee7059aa-5887-4b87-b03e-d4f0c27ed735 | | share_size | 1 | | created_at | 2022-01-07T14:20:55.541084 | | status | creating | | name | snapshot_name | | description | None | | size | 1 | | share_proto | NFS | | provider_location | None | | user_id | 6d414c62237841dcbe63d3707c1cdd90 | | project_id | 041ff9e24eba469491d770ad8666682d | +-------------------+--------------------------------------+",
"manila snapshot-list --share-id <share>",
"manila snapshot-list",
"manila snapshot-show <snapshot-id>",
"manila create <share_protocol> <size> --snapshot-id <snapshot_id> --name <name>",
"manila list",
"manila show <name>",
"manila snapshot-list",
"manila snapshot-list",
"manila snapshot-delete <snapshot>",
"manila snapshot-list",
"(user) USD openstack security group create no-ingress -f yaml created_at: '2018-09-19T08:19:58Z' description: no-ingress id: 66f67c24-cd8b-45e2-b60f-9eaedc79e3c5 name: no-ingress project_id: 1e021e8b322a40968484e1af538b8b63 revision_number: 2 rules: 'created_at=''2018-09-19T08:19:58Z'', direction=''egress'', ethertype=''IPv4'', id=''6c7f643f-3715-4df5-9fef-0850fb6eaaf2'', updated_at=''2018-09-19T08:19:58Z'' created_at=''2018-09-19T08:19:58Z'', direction=''egress'', ethertype=''IPv6'', id=''a8ca1ac2-fbe5-40e9-ab67-3e55b7a8632a'', updated_at=''2018-09-19T08:19:58Z''' updated_at: '2018-09-19T08:19:58Z'",
"(user) USD openstack port create nfs-port0 --network StorageNFS --security-group no-ingress -f yaml admin_state_up: UP allowed_address_pairs: '' binding_host_id: null binding_profile: null binding_vif_details: null binding_vif_type: null binding_vnic_type: normal created_at: '2018-09-19T08:03:02Z' data_plane_status: null description: '' device_id: '' device_owner: '' dns_assignment: null dns_name: null extra_dhcp_opts: '' fixed_ips: ip_address='172.17.5.160', subnet_id='7bc188ae-aab3-425b-a894-863e4b664192' id: 7a91cbbc-8821-4d20-a24c-99c07178e5f7 ip_address: null mac_address: fa:16:3e:be:41:6f name: nfs-port0 network_id: cb2cbc5f-ea92-4c2d-beb8-d9b10e10efae option_name: null option_value: null port_security_enabled: true project_id: 1e021e8b322a40968484e1af538b8b63 qos_policy_id: null revision_number: 6 security_group_ids: 66f67c24-cd8b-45e2-b60f-9eaedc79e3c5 status: DOWN subnet_id: null tags: '' trunk_details: null updated_at: '2018-09-19T08:03:03Z'",
"(user) USD openstack server add port instance0 nfs-port0 (user) USD openstack server list -f yaml - Flavor: m1.micro ID: 0b878c11-e791-434b-ab63-274ecfc957e8 Image: manila-test Name: demo-instance0 Networks: demo-network=172.20.0.4, 10.0.0.53; StorageNFS=172.17.5.160 Status: ACTIVE",
"sudo ip address add fd00:fd00:fd00:7000::c/64 dev eth1",
"sudo ip link set dev eth1 up",
"ping -6 fd00:fd00:fd00:7000::21",
"sudo dnf install -y telnet telnet fd00:fd00:fd00:7000::21 2049",
"manila access-allow <share> <accesstype> --access-level <accesslevel> <clientidentifier>",
"(user) USD openstack server list -f yaml - Flavor: m1.micro ID: 0b878c11-e791-434b-ab63-274ecfc957e8 Image: manila-test Name: demo-instance0 Networks: demo-network=172.20.0.4, 10.0.0.53; StorageNFS=172.17.5.160 Status: ACTIVE (user) USD manila access-allow share-01 ip 172.17.5.160",
"+-----------------+---------------------------------------+ | Property | Value | +-----------------+---------------------------------------+ | access_key | None | share_id | db3bedd8-bc82-4100-a65d-53ec51b5cba3 | created_at | 2018-09-17T21:57:42.000000 | updated_at | None | access_type | ip | access_to | 172.17.5.160 | access_level | rw | state | queued_to_apply | id | 875c6251-c17e-4c45-8516-fe0928004fff +-----------------+---------------------------------------+",
"(user) USD manila access-list share-01 +--------------+-------------+--------------+--------------+--------+ | id | access_type | access_to | access_level | state | +--------------+-------------+--------------+--------------+--------+ | 875c6251-... | ip | 172.17.5.160 | rw | active | +--------------+------------+--------------+--------------+---------+",
"manila access-allow <share-02> cephx <user-01>",
"manila access-list <share-02>",
"manila access-deny <share> <accessid>",
"(user) USD manila access-list share-01 +--------------+-------------+--------------+--------------+--------+ | id | access_type | access_to | access_level | state | +--------------+-------------+--------------+--------------+--------+ | 875c6251-... | ip | 172.17.5.160 | rw | active | +--------------+-------------+--------------+--------------+--------+ (user) USD manila access-deny share-01 875c6251-c17e-4c45-8516-fe0928004fff (user) USD manila access-list share-01 +--------------+------------+--------------+--------------+--------+ | id | access_type| access_to | access_level | state | +--------------+------------+--------------+--------------+--------+ +--------------+------------+--------------+--------------+--------+",
"(user) USD manila share-export-location-list share-01",
"(user) USD openstack server ssh demo-instance0 --login user",
"mount -t nfs -v <172.17.5.13:/volumes/_nogroup/e840b4ae-6a04-49ee-9d6e-67d4999fbc01> /mnt",
"mount -t ceph \\ <192.168.1.7:6789,192.168.1.8:6789,192.168.1.9:6789:/volumes/_nogroup/4c55ad20-9c55-4a5e-9233-8ac64566b98c> -o name=<user-01>,secret='<AQA8+ANW/<4ZWNRAAOtWJMFPEihBA1unFImJczA==>'",
"df -k",
"manila delete <share>",
"manila absolute-limits +------------------------------+-------+ | Name | Value | +------------------------------+-------+ | maxTotalReplicaGigabytes | 1000 | | maxTotalShareGigabytes | 1000 | | maxTotalShareGroupSnapshots | 50 | | maxTotalShareGroups | 49 | | maxTotalShareNetworks | 10 | | maxTotalShareReplicas | 100 | | maxTotalShareSnapshots | 50 | | maxTotalShares | 50 | | maxTotalSnapshotGigabytes | 1000 | | totalReplicaGigabytesUsed | 22 | | totalShareGigabytesUsed | 25 | | totalShareGroupSnapshotsUsed | 0 | | totalShareGroupsUsed | 9 | | totalShareNetworksUsed | 2 | | totalShareReplicasUsed | 9 | | totalShareSnapshotsUsed | 4 | | totalSharesUsed | 12 | | totalSnapshotGigabytesUsed | 4 | +------------------------------+-------+",
"clouduser1@client:~USD manila type-list +--------------------------------------+-------------+------------+------------+--------------------------------------+--------------------------------------------+-------------+ | ID | Name | visibility | is_default | required_extra_specs | optional_extra_specs | Description | +--------------------------------------+-------------+------------+------------+--------------------------------------+--------------------------------------------+-------------+ | 1cf5d45a-61b3-44d1-8ec7-89a21f51a4d4 | dhss_false | public | YES | driver_handles_share_servers : False | create_share_from_snapshot_support : True | None | | | | | | | mount_snapshot_support : False | | | | | | | | revert_to_snapshot_support : False | | | | | | | | snapshot_support : True | | | 277c1089-127f-426e-9b12-711845991ea1 | dhss_true | public | - | driver_handles_share_servers : True | create_share_from_snapshot_support : True | None | | | | | | | mount_snapshot_support : False | | | | | | | | revert_to_snapshot_support : False | | | | | | | | snapshot_support : True | | +--------------------------------------+-------------+------------+------------+--------------------------------------+--------------------------------------------+-------------+",
"clouduser1@client:~USD openstack subnet list +--------------------------------------+---------------------+--------------------------------------+---------------------+ | ID | Name | Network | Subnet | +--------------------------------------+---------------------+--------------------------------------+---------------------+ | 78c6ac57-bba7-4922-ab81-16cde31c2d06 | private-subnet | 74d5cfb3-5dd0-43f7-b1b2-5b544cb16212 | 10.0.0.0/26 | | a344682c-718d-4825-a87a-3622b4d3a771 | ipv6-private-subnet | 74d5cfb3-5dd0-43f7-b1b2-5b544cb16212 | fd36:18fc:a8e9::/64 | +--------------------------------------+---------------------+--------------------------------------+---------------------+ clouduser1@client:~USD manila share-network-create --name mynet --neutron-net-id 74d5cfb3-5dd0-43f7-b1b2-5b544cb16212 --neutron-subnet-id 78c6ac57-bba7-4922-ab81-16cde31c2d06 +-------------------+--------------------------------------+ | Property | Value | +-------------------+--------------------------------------+ | network_type | None | | name | mynet | | segmentation_id | None | | created_at | 2018-10-09T21:32:22.485399 | | neutron_subnet_id | 78c6ac57-bba7-4922-ab81-16cde31c2d06 | | updated_at | None | | mtu | None | | gateway | None | | neutron_net_id | 74d5cfb3-5dd0-43f7-b1b2-5b544cb16212 | | ip_version | None | | cidr | None | | project_id | cadd7139bc3148b8973df097c0911016 | | id | 0b0fc320-d4b5-44a1-a1ae-800c56de550c | | description | None | +-------------------+--------------------------------------+ clouduser1@client:~USD manila share-network-list +--------------------------------------+-------+ | id | name | +--------------------------------------+-------+ | 6c7ef9ef-3591-48b6-b18a-71a03059edd5 | mynet | +--------------------------------------+-------+",
"clouduser1@client:~USD manila create nfs 1 --name software_share --share-network mynet --share-type dhss_true +---------------------------------------+--------------------------------------+ | Property | Value | +---------------------------------------+--------------------------------------+ | status | creating | | share_type_name | dhss_true | | description | None | | availability_zone | None | | share_network_id | 6c7ef9ef-3591-48b6-b18a-71a03059edd5 | | share_server_id | None | | share_group_id | None | | host | | | revert_to_snapshot_support | False | | access_rules_status | active | | snapshot_id | None | | create_share_from_snapshot_support | False | | is_public | False | | task_state | None | | snapshot_support | False | | id | 243f3a51-0624-4bdd-950e-7ed190b53b67 | | size | 1 | | source_share_group_snapshot_member_id | None | | user_id | 61aef4895b0b41619e67ae83fba6defe | | name | software_share | | share_type | 277c1089-127f-426e-9b12-711845991ea1 | | has_replicas | False | | replication_type | None | | created_at | 2018-10-09T21:12:21.000000 | | share_proto | NFS | | mount_snapshot_support | False | | project_id | cadd7139bc3148b8973df097c0911016 | | metadata | {} | +---------------------------------------+--------------------------------------+",
"clouduser1@client:~USD manila list +--------------------------------------+----------------+------+-------------+--------+-----------+-----------------+------+-------------------+ | ID | Name | Size | Share Proto | Status | Is Public | Share Type Name | Host | Availability Zone | +--------------------------------------+----------------+------+-------------+--------+-----------+-----------------+------+-------------------+ | 243f3a51-0624-4bdd-950e-7ed190b53b67 | software_share | 1 | NFS | error | False | dhss_true | | None | +--------------------------------------+----------------+------+-------------+--------+-----------+-----------------+------+-------------------+",
"clouduser1@client:~USD manila message-list +--------------------------------------+---------------+--------------------------------------+-----------+----------------------------------------------------------------------------------------------------------+-----------+----------------------------+ | ID | Resource Type | Resource ID | Action ID | User Message | Detail ID | Created At | +--------------------------------------+---------------+--------------------------------------+-----------+----------------------------------------------------------------------------------------------------------+-----------+----------------------------+ | 7d411c3c-46d9-433f-9e21-c04ca30b209c | SHARE | 243f3a51-0624-4bdd-950e-7ed190b53b67 | 001 | allocate host: No storage could be allocated for this share request, Capabilities filter didn't succeed. | 008 | 2018-10-09T21:12:21.000000 | +--------------------------------------+---------------+--------------------------------------+-----------+----------------------------------------------------------------------------------------------------------+-----------+----------------------------+",
"clouduser1@client:~USD manila message-show 7d411c3c-46d9-433f-9e21-c04ca30b209c +---------------+----------------------------------------------------------------------------------------------------------+ | Property | Value | +---------------+----------------------------------------------------------------------------------------------------------+ | request_id | req-0a875292-6c52-458b-87d4-1f945556feac | | detail_id | 008 | | expires_at | 2018-11-08T21:12:21.000000 | | resource_id | 243f3a51-0624-4bdd-950e-7ed190b53b67 | | user_message | allocate host: No storage could be allocated for this share request, Capabilities filter didn't succeed. | | created_at | 2018-10-09T21:12:21.000000 | | message_level | ERROR | | id | 7d411c3c-46d9-433f-9e21-c04ca30b209c | | resource_type | SHARE | | action_id | 001 | +---------------+----------------------------------------------------------------------------------------------------------+",
"clouduser1@client:~USD manila type-list +--------------------------------------+-------------+------------+------------+--------------------------------------+--------------------------------------------+-------------+ | ID | Name | visibility | is_default | required_extra_specs | optional_extra_specs | Description | +--------------------------------------+-------------+------------+------------+--------------------------------------+--------------------------------------------+-------------+ | 1cf5d45a-61b3-44d1-8ec7-89a21f51a4d4 | dhss_false | public | YES | driver_handles_share_servers : False | create_share_from_snapshot_support : True | None | | | | | | | mount_snapshot_support : False | | | | | | | | revert_to_snapshot_support : False | | | | | | | | snapshot_support : True | | | 277c1089-127f-426e-9b12-711845991ea1 | dhss_true | public | - | driver_handles_share_servers : True | create_share_from_snapshot_support : True | None | | | | | | | mount_snapshot_support : False | | | | | | | | revert_to_snapshot_support : False | | | | | | | | snapshot_support : True | | +--------------------------------------+-------------+------------+------------+--------------------------------------+--------------------------------------------+-------------+",
"clouduser1@client:~USD manila create nfs 1 --name software_share --share-network mynet --share-type dhss_false +---------------------------------------+--------------------------------------+ | Property | Value | +---------------------------------------+--------------------------------------+ | status | creating | | share_type_name | dhss_false | | description | None | | availability_zone | None | | share_network_id | 6c7ef9ef-3591-48b6-b18a-71a03059edd5 | | share_group_id | None | | revert_to_snapshot_support | False | | access_rules_status | active | | snapshot_id | None | | create_share_from_snapshot_support | True | | is_public | False | | task_state | None | | snapshot_support | True | | id | 2d03d480-7cba-4122-ac9d-edc59c8df698 | | size | 1 | | source_share_group_snapshot_member_id | None | | user_id | 5c7bdb6eb0504d54a619acf8375c08ce | | name | software_share | | share_type | 1cf5d45a-61b3-44d1-8ec7-89a21f51a4d4 | | has_replicas | False | | replication_type | None | | created_at | 2018-10-09T21:24:40.000000 | | share_proto | NFS | | mount_snapshot_support | False | | project_id | cadd7139bc3148b8973df097c0911016 | | metadata | {} | +---------------------------------------+--------------------------------------+",
"clouduser1@client:~USD manila list +--------------------------------------+----------------+------+-------------+--------+-----------+-----------------+------+-------------------+ | ID | Name | Size | Share Proto | Status | Is Public | Share Type Name | Host | Availability Zone | +--------------------------------------+----------------+------+-------------+--------+-----------+-----------------+------+-------------------+ | 2d03d480-7cba-4122-ac9d-edc59c8df698 | software_share | 1 | NFS | error | False | dhss_false | | nova | | 243f3a51-0624-4bdd-950e-7ed190b53b67 | software_share | 1 | NFS | error | False | dhss_true | | None | +--------------------------------------+----------------+------+-------------+--------+-----------+-----------------+------+-------------------+ clouduser1@client:~USD manila message-list +--------------------------------------+---------------+--------------------------------------+-----------+----------------------------------------------------------------------------------------------------------+-----------+----------------------------+ | ID | Resource Type | Resource ID | Action ID | User Message | Detail ID | Created At | +--------------------------------------+---------------+--------------------------------------+-----------+----------------------------------------------------------------------------------------------------------+-----------+----------------------------+ | ed7e02a2-0cdb-4ff9-b64f-e4d2ec1ef069 | SHARE | 2d03d480-7cba-4122-ac9d-edc59c8df698 | 002 | create: Driver does not expect share-network to be provided with current configuration. | 003 | 2018-10-09T21:24:40.000000 | | 7d411c3c-46d9-433f-9e21-c04ca30b209c | SHARE | 243f3a51-0624-4bdd-950e-7ed190b53b67 | 001 | allocate host: No storage could be allocated for this share request, Capabilities filter didn't succeed. | 008 | 2018-10-09T21:12:21.000000 | +--------------------------------------+---------------+--------------------------------------+-----------+----------------------------------------------------------------------------------------------------------+-----------+----------------------------+",
"clouduser1@client:~USD manila create nfs 1 --name software_share --share-type dhss_false +---------------------------------------+--------------------------------------+ | Property | Value | +---------------------------------------+--------------------------------------+ | status | creating | | share_type_name | dhss_false | | description | None | | availability_zone | None | | share_network_id | None | | share_group_id | None | | revert_to_snapshot_support | False | | access_rules_status | active | | snapshot_id | None | | create_share_from_snapshot_support | True | | is_public | False | | task_state | None | | snapshot_support | True | | id | 4d3d7fcf-5fb7-4209-90eb-9e064659f46d | | size | 1 | | source_share_group_snapshot_member_id | None | | user_id | 5c7bdb6eb0504d54a619acf8375c08ce | | name | software_share | | share_type | 1cf5d45a-61b3-44d1-8ec7-89a21f51a4d4 | | has_replicas | False | | replication_type | None | | created_at | 2018-10-09T21:25:40.000000 | | share_proto | NFS | | mount_snapshot_support | False | | project_id | cadd7139bc3148b8973df097c0911016 | | metadata | {} | +---------------------------------------+--------------------------------------+",
"clouduser1@client:~USD manila list +--------------------------------------+----------------+------+-------------+-----------+-----------+-----------------+------+-------------------+ | ID | Name | Size | Share Proto | Status | Is Public | Share Type Name | Host | Availability Zone | +--------------------------------------+----------------+------+-------------+-----------+-----------+-----------------+------+-------------------+ | 4d3d7fcf-5fb7-4209-90eb-9e064659f46d | software_share | 1 | NFS | available | False | dhss_false | | nova | | 2d03d480-7cba-4122-ac9d-edc59c8df698 | software_share | 1 | NFS | error | False | dhss_false | | nova | | 243f3a51-0624-4bdd-950e-7ed190b53b67 | software_share | 1 | NFS | error | False | dhss_true | | None | +--------------------------------------+----------------+------+-------------+-----------+-----------+-----------------+------+-------------------+",
"clouduser1@client:~USD manila message-list +--------------------------------------+---------------+--------------------------------------+-----------+----------------------------------------------------------------------------------------------------------+-----------+----------------------------+ | ID | Resource Type | Resource ID | Action ID | User Message | Detail ID | Created At | +--------------------------------------+---------------+--------------------------------------+-----------+----------------------------------------------------------------------------------------------------------+-----------+----------------------------+ | ed7e02a2-0cdb-4ff9-b64f-e4d2ec1ef069 | SHARE | 2d03d480-7cba-4122-ac9d-edc59c8df698 | 002 | create: Driver does not expect share-network to be provided with current configuration. | 003 | 2018-10-09T21:24:40.000000 | | 7d411c3c-46d9-433f-9e21-c04ca30b209c | SHARE | 243f3a51-0624-4bdd-950e-7ed190b53b67 | 001 | allocate host: No storage could be allocated for this share request, Capabilities filter didn't succeed. | 008 | 2018-10-09T21:12:21.000000 | +--------------------------------------+---------------+--------------------------------------+-----------+----------------------------------------------------------------------------------------------------------+-----------+----------------------------+ clouduser1@client:~USD manila delete 2d03d480-7cba-4122-ac9d-edc59c8df698 243f3a51-0624-4bdd-950e-7ed190b53b67 clouduser1@client:~USD manila message-delete ed7e02a2-0cdb-4ff9-b64f-e4d2ec1ef069 7d411c3c-46d9-433f-9e21-c04ca30b209c clouduser1@client:~USD manila message-list +----+---------------+-------------+-----------+--------------+-----------+------------+ | ID | Resource Type | Resource ID | Action ID | User Message | Detail ID | Created At | +----+---------------+-------------+-----------+--------------+-----------+------------+ +----+---------------+-------------+-----------+--------------+-----------+------------+",
"manila access-list share-01",
"ping -c 1 172.17.5.13 PING 172.17.5.13 (172.17.5.13) 56(84) bytes of data. 64 bytes from 172.17.5.13: icmp_seq=1 ttl=64 time=0.048 ms--- 172.17.5.13 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 7.851/7.851/7.851/0.000 ms If using the NFS protocol, you may verify that the NFS server is ready to respond to NFS rpcs on the proper port: rpcinfo -T tcp -a 172.17.5.13.8.1 100003 4 program 100003 version 4 ready and waiting"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/storage_guide/assembly_manila-performing-operations-with-the-shared-file-systems-service_assembly-swift |
Chapter 7. Installing a cluster on vSphere in a restricted network | Chapter 7. Installing a cluster on vSphere in a restricted network In OpenShift Container Platform 4.12, you can install a cluster on VMware vSphere infrastructure in a restricted network by creating an internal mirror of the installation release content. Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. 7.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You created a registry on your mirror host and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You provisioned persistent storage for your cluster. To deploy a private image registry, your storage must provide the ReadWriteMany access mode. The OpenShift Container Platform installer requires access to port 443 on the vCenter and ESXi hosts. You verified that port 443 is accessible. If you use a firewall, you confirmed with the administrator that port 443 is accessible. Control plane nodes must be able to reach vCenter and ESXi hosts on port 443 for the installation to succeed. If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. Note If you are configuring a proxy, be sure to also review this site list. 7.2. About installations in restricted networks In OpenShift Container Platform 4.12, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 7.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 7.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.12, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. 
If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 7.4. VMware vSphere infrastructure requirements You must install an OpenShift Container Platform cluster on one of the following versions of a VMware vSphere instance that meets the requirements for the components that you use: Version 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later Version 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later You can host the VMware vSphere infrastructure on-premise or on a VMware Cloud Verified provider that meets the requirements outlined in the following table: Table 7.1. Version requirements for vSphere virtual environments Virtual environment product Required version VMware virtual hardware 15 or later vSphere ESXi hosts 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later vCenter host 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later Important Installing a cluster on VMware vSphere versions 7.0 and 7.0 Update 1 is deprecated. These versions are still fully supported, but all vSphere 6.x versions are no longer supported. Version 4.12 of OpenShift Container Platform requires VMware virtual hardware version 15 or later. To update the hardware version for your vSphere virtual machines, see the "Updating hardware on nodes running in vSphere" article in the Updating clusters section. Table 7.2. Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; vSphere 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later with virtual hardware version 15 This hypervisor version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. For more information about supported hardware on the latest version of Red Hat Enterprise Linux (RHEL) that is compatible with RHCOS, see Hardware on the Red Hat Customer Portal. Storage with in-tree drivers vSphere 7.0 Update 2 or later; 8.0 Update 1 or later This plugin creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform. Optional: Networking (NSX-T) vSphere 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; vSphere 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later For more information about the compatibility of NSX and OpenShift Container Platform, see the Release Notes section of VMware's NSX container plugin documentation . Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. 7.5. 
Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Review the following details about the required network ports. Table 7.3. Ports used for all-machine to all-machine communications Protocol Port Description VRRP N/A Required for keepalived ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 virtual extensible LAN (VXLAN) 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 7.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 7.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports 7.6. VMware vSphere CSI Driver Operator requirements To install the vSphere CSI Driver Operator, the following requirements must be met: VMware vSphere version: 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later vCenter version: 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later Virtual machines of hardware version 15 or later No third-party vSphere CSI driver already installed in the cluster If a third-party vSphere CSI driver is present in the cluster, OpenShift Container Platform does not overwrite it. The presence of a third-party vSphere CSI driver prevents OpenShift Container Platform from updating to OpenShift Container Platform 4.13 or later. Note The VMware vSphere CSI Driver Operator is supported only on clusters deployed with platform: vsphere in the installation manifest. Additional resources To remove a third-party vSphere CSI driver, see Removing a third-party vSphere CSI Driver . To update the hardware version for your vSphere nodes, see Updating hardware on nodes running in vSphere . 7.7. vCenter requirements Before you install an OpenShift Container Platform cluster on your vCenter that uses infrastructure that the installer provisions, you must prepare your environment. Required vCenter account privileges To install an OpenShift Container Platform cluster in a vCenter, the installation program requires access to an account with privileges to read and create the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions. If you cannot use an account with global administrative privileges, you must create roles to grant the privileges necessary for OpenShift Container Platform cluster installation. While most of the privileges are always required, some are required only if you plan for the installation program to provision a folder to contain the OpenShift Container Platform cluster on your vCenter instance, which is the default behavior. You must create or amend vSphere roles for the specified objects to grant the required privileges. An additional role is required if the installation program is to create a vSphere virtual machine folder. Example 7.1. 
Roles and privileges required for installation in vSphere API vSphere object for role When required Required privileges in vSphere API vSphere vCenter Always Cns.Searchable InventoryService.Tagging.AttachTag InventoryService.Tagging.CreateCategory InventoryService.Tagging.CreateTag InventoryService.Tagging.DeleteCategory InventoryService.Tagging.DeleteTag InventoryService.Tagging.EditCategory InventoryService.Tagging.EditTag Sessions.ValidateSession StorageProfile.Update StorageProfile.View vSphere vCenter Cluster If VMs will be created in the cluster root Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere vCenter Resource Pool If an existing resource pool is provided Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere Datastore Always Datastore.AllocateSpace Datastore.Browse Datastore.FileManagement InventoryService.Tagging.ObjectAttachable vSphere Port Group Always Network.Assign Virtual Machine Folder Always InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.MarkAsTemplate VirtualMachine.Provisioning.DeployTemplate vSphere vCenter Datacenter If the installation program creates the virtual machine folder. For UPI, VirtualMachine.Inventory.Create and VirtualMachine.Inventory.Delete privileges are optional if your cluster does not use the Machine API. InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.DeployTemplate VirtualMachine.Provisioning.MarkAsTemplate Folder.Create Folder.Delete Example 7.2. 
Roles and privileges required for installation in vCenter graphical user interface (GUI) vSphere object for role When required Required privileges in vCenter GUI vSphere vCenter Always Cns.Searchable "vSphere Tagging"."Assign or Unassign vSphere Tag" "vSphere Tagging"."Create vSphere Tag Category" "vSphere Tagging"."Create vSphere Tag" "vSphere Tagging"."Delete vSphere Tag Category" "vSphere Tagging"."Delete vSphere Tag" "vSphere Tagging"."Edit vSphere Tag Category" "vSphere Tagging"."Edit vSphere Tag" Sessions."Validate session" "Profile-driven storage"."Profile-driven storage update" "Profile-driven storage"."Profile-driven storage view" vSphere vCenter Cluster If VMs will be created in the cluster root Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere vCenter Resource Pool If an existing resource pool is provided Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere Datastore Always Datastore."Allocate space" Datastore."Browse datastore" Datastore."Low level file operations" "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" vSphere Port Group Always Network."Assign network" Virtual Machine Folder Always "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Mark as template" "Virtual machine".Provisioning."Deploy template" vSphere vCenter Datacenter If the installation program creates the virtual machine folder. For UPI, VirtualMachine.Inventory.Create and VirtualMachine.Inventory.Delete privileges are optional if your cluster does not use the Machine API.
"vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Deploy template" "Virtual machine".Provisioning."Mark as template" Folder."Create folder" Folder."Delete folder" Additionally, the user requires some ReadOnly permissions, and some of the roles require permission to propogate the permissions to child objects. These settings vary depending on whether or not you install the cluster into an existing folder. Example 7.3. Required permissions and propagation settings vSphere object When required Propagate to children Permissions required vSphere vCenter Always False Listed required privileges vSphere vCenter Datacenter Existing folder False ReadOnly permission Installation program creates the folder True Listed required privileges vSphere vCenter Cluster Existing resource pool False ReadOnly permission VMs in cluster root True Listed required privileges vSphere vCenter Datastore Always False Listed required privileges vSphere Switch Always False ReadOnly permission vSphere Port Group Always False Listed required privileges vSphere vCenter Virtual Machine Folder Existing folder True Listed required privileges vSphere vCenter Resource Pool Existing resource pool True Listed required privileges For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation. Using OpenShift Container Platform with vMotion If you intend on using vMotion in your vSphere environment, consider the following before installing an OpenShift Container Platform cluster. OpenShift Container Platform generally supports compute-only vMotion, where generally implies that you meet all VMware best practices for vMotion. To help ensure the uptime of your compute and control plane nodes, ensure that you follow the VMware best practices for vMotion, and use VMware anti-affinity rules to improve the availability of OpenShift Container Platform during maintenance or hardware issues. 
For more information about vMotion and anti-affinity rules, see the VMware vSphere documentation for vMotion networking requirements and VM anti-affinity rules . Using Storage vMotion can cause issues and is not supported. If you are using vSphere volumes in your pods, migrating a VM across datastores, either manually or through Storage vMotion, causes invalid references within OpenShift Container Platform persistent volume (PV) objects that can result in data loss. OpenShift Container Platform does not support selective migration of VMDKs across datastores, using datastore clusters for VM provisioning or for dynamic or static provisioning of PVs, or using a datastore that is part of a datastore cluster for dynamic or static provisioning of PVs. Cluster resources When you deploy an OpenShift Container Platform cluster that uses installer-provisioned infrastructure, the installation program must be able to create several resources in your vCenter instance. A standard OpenShift Container Platform installation creates the following vCenter resources: 1 Folder 1 Tag category 1 Tag Virtual machines: 1 template 1 temporary bootstrap node 3 control plane nodes 3 compute machines Although these resources use 856 GB of storage, the bootstrap node is destroyed during the cluster installation process. A minimum of 800 GB of storage is required to use a standard cluster. If you deploy more compute machines, the OpenShift Container Platform cluster will use more storage. Cluster limits Available resources vary between clusters. The number of possible clusters within a vCenter is limited primarily by available storage space and any limitations on the number of required resources. Be sure to consider both limitations to the vCenter resources that the cluster creates and the resources that you require to deploy a cluster, such as IP addresses and networks. Networking requirements You can use Dynamic Host Configuration Protocol (DHCP) for the network and configure the DHCP server to set persistent IP addresses to machines in your cluster. In the DHCP lease, you must configure the DHCP to use the default gateway. Note You do not need to use the DHCP for the network if you want to provision nodes with static IP addresses. If you are installing to a restricted environment, the VM in your restricted network must have access to vCenter so that it can provision and manage nodes, persistent volume claims (PVCs), and other resources. Note Ensure that each OpenShift Container Platform node in the cluster has access to a Network Time Protocol (NTP) server that is discoverable by DHCP. Installation is possible without an NTP server. However, asynchronous server clocks can cause errors, which the NTP server prevents. Additionally, you must create the following networking resources before you install the OpenShift Container Platform cluster: Required IP Addresses An installer-provisioned vSphere installation requires two static IP addresses: The API address is used to access the cluster API. The Ingress address is used for cluster ingress traffic. You must provide these IP addresses to the installation program when you install the OpenShift Container Platform cluster. DNS records You must create DNS records for two static IP addresses in the appropriate DNS server for the vCenter instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. 
A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. Table 7.6. Required DNS records Component Record Description API VIP api.<cluster_name>.<base_domain>. This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Ingress VIP *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. 7.8. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
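For example, on a FIPS-enabled host you might instead generate an ECDSA key, following the same pattern as the earlier ssh-keygen command; the path and file name are placeholders: USD ssh-keygen -t ecdsa -N '' -f <path>/<file_name>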
Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 7.9. Adding vCenter root CA certificates to your system trust Because the installation program requires access to your vCenter's API, you must add your vCenter's trusted root CA certificates to your system trust before you install an OpenShift Container Platform cluster. Procedure From the vCenter home page, download the vCenter's root CA certificates. Click Download trusted root CA certificates in the vSphere Web Services SDK section. The <vCenter>/certs/download.zip file downloads. Extract the compressed file that contains the vCenter root CA certificates. The compressed file contains a certs directory with lin , mac , and win subdirectories that hold the certificates for Linux, macOS, and Windows, respectively. Add the files for your operating system to the system trust. For example, on a Fedora operating system, run the following command: # cp certs/lin/* /etc/pki/ca-trust/source/anchors Update your system trust. For example, on a Fedora operating system, run the following command: # update-ca-trust extract 7.10. Creating the RHCOS image for restricted network installations Download the Red Hat Enterprise Linux CoreOS (RHCOS) image to install OpenShift Container Platform on a restricted network VMware vSphere environment. Prerequisites Obtain the OpenShift Container Platform installation program. For a restricted network installation, the program is on your mirror registry host. Procedure Log in to the Red Hat Customer Portal's Product Downloads page . Under Version , select the most recent release of OpenShift Container Platform 4.12 for RHEL 8. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Download the Red Hat Enterprise Linux CoreOS (RHCOS) - vSphere image. Upload the image you downloaded to a location that is accessible from the bastion server. The image is now available for a restricted installation. Note the image name or location for use in OpenShift Container Platform deployment. 7.11. VMware vSphere region and zone enablement You can deploy an OpenShift Container Platform cluster to multiple vSphere datacenters that run in a single VMware vCenter. Each datacenter can run multiple clusters. This configuration reduces the risk of a hardware failure or network outage that can cause your cluster to fail. To enable regions and zones, you must define multiple failure domains for your OpenShift Container Platform cluster. Important VMware vSphere region and zone enablement is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope .
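The region and zone tags that the following paragraphs describe are ordinarily created before you write the configuration file. As a minimal sketch, assuming the community govc CLI (not part of the installation program) and the example names from Table 7.7 below: USD govc tags.category.create -d 'OpenShift region' openshift-region USD govc tags.category.create -d 'OpenShift zone' openshift-zone USD govc tags.create -c openshift-region us-east USD govc tags.create -c openshift-zone us-east-1 USD govc tags.attach -c openshift-region us-east /<datacenter> USD govc tags.attach -c openshift-zone us-east-1 /<datacenter>/host/<cluster>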
The default installation configuration deploys a cluster to a single vSphere datacenter. If you want to deploy a cluster to multiple vSphere datacenters, you must create an installation configuration file that enables the region and zone feature. The default install-config.yaml file includes vcenters and failureDomains fields, where you can specify multiple vSphere datacenters and clusters for your OpenShift Container Platform cluster. You can leave these fields blank if you want to install an OpenShift Container Platform cluster in a vSphere environment that consists of a single datacenter. The following list describes terms associated with defining zones and regions for your cluster: Failure domain: Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. Region: Specifies a vCenter datacenter. You define a region by using a tag from the openshift-region tag category. Zone: Specifies a vCenter cluster. You define a zone by using a tag from the openshift-zone tag category. Note If you plan to specify more than one failure domain in your install-config.yaml file, you must create tag categories, zone tags, and region tags in advance of creating the configuration file. You must create a vCenter tag for each vCenter datacenter, which represents a region. Additionally, you must create a vCenter tag for each cluster that runs in a datacenter, which represents a zone. After you create the tags, you must attach each tag to its respective datacenter or cluster. The following table outlines an example of the relationship among regions, zones, and tags for a configuration with multiple vSphere datacenters running in a single VMware vCenter. Table 7.7. Example of a configuration with multiple vSphere datacenters that run in a single VMware vCenter Datacenter (region) Cluster (zone) Tags us-east us-east-1 us-east-1a us-east-1b us-east-2 us-east-2a us-east-2b us-west us-west-1 us-west-1a us-west-1b us-west-2 us-west-2a us-west-2b 7.12. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on VMware vSphere. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. Have the imageContentSources values that were generated during mirror registry creation. Obtain the contents of the certificate for your mirror registry. Retrieve a Red Hat Enterprise Linux CoreOS (RHCOS) image and upload it to an accessible location. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory.
If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select vsphere as the platform to target. Specify the name of your vCenter instance. Specify the user name and password for the vCenter account that has the required permissions to create the cluster. The installation program connects to your vCenter instance. Select the data center in your vCenter instance to connect to. Select the default vCenter datastore to use. Select the vCenter cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool. Select the network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. Enter the virtual IP address that you configured for control plane API access. Enter the virtual IP address that you configured for cluster ingress. Enter the base domain. This base domain must be the same one that you used in the DNS records that you configured. Enter a descriptive name for your cluster. The cluster name you enter must match the cluster name you specified when configuring the DNS records. Paste the pull secret from the Red Hat OpenShift Cluster Manager . In the install-config.yaml file, set the value of platform.vsphere.clusterOSImage to the image location or name. For example: platform: vsphere: clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-vmware.x86_64.ova?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSources that you recorded during mirror registry creation. Make any other modifications to the install-config.yaml file that you require. 
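Taken together, the restricted-network additions from the preceding steps might resemble the following sketch; the mirror host name, repository name, image URL, digest, and certificate contents are placeholders for your own values:

platform:
  vsphere:
    clusterOSImage: http://mirror.example.com/images/rhcos-vmware.x86_64.ova?sha256=<digest>
pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}'
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  <certificate_contents>
  -----END CERTIFICATE-----
imageContentSources:
- mirrors:
  - <mirror_host_name>:5000/<repo_name>/release
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - <mirror_host_name>:5000/<repo_name>/release
  source: registry.redhat.io/ocp/release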
You can find more information about the available parameters in the Installation configuration parameters section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 7.12.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 7.12.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 7.8. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters and hyphens ( - ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 7.12.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 7.9. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . 
OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 7.12.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 7.10. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . 
Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . featureSet Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. 
String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . 7.12.1.4. Additional VMware vSphere configuration parameters Additional VMware vSphere configuration parameters are described in the following table. Note The platform.vsphere parameter prefixes each parameter listed in the table. Table 7.11. Additional VMware vSphere cluster parameters Parameter Description Values vCenter The fully-qualified hostname or IP address of the vCenter server. String username The user name to use to connect to the vCenter instance with. This user must have at least the roles and privileges that are required for static or dynamic persistent volume provisioning in vSphere. String password The password for the vCenter user name. String datacenter The name of the data center to use in the vCenter instance. String defaultDatastore The name of the default datastore to use for provisioning volumes. String folder Optional. The absolute path of an existing folder where the installation program creates the virtual machines. If you do not provide this value, the installation program creates a folder that is named with the infrastructure ID in the data center virtual machine folder. String, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . resourcePool Optional. The absolute path of an existing resource pool where the installation program creates the virtual machines. If you do not specify a value, the installation program installs the resources in the root of the cluster under /<datacenter_name>/host/<cluster_name>/Resources . String, for example, /<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>/<optional_nested_resource_pool_name> . network The network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. String cluster The vCenter cluster to install the OpenShift Container Platform cluster in. String apiVIPs The virtual IP (VIP) address that you configured for control plane API access. Note In OpenShift Container Platform 4.12 and later, the apiVIP configuration setting is deprecated. Instead, use a List format to enter a value in the apiVIPs configuration setting. An IP address, for example 128.0.0.1 . ingressVIPs The virtual IP (VIP) address that you configured for cluster ingress. Note In OpenShift Container Platform 4.12 and later, the ingressVIP configuration setting is deprecated. Instead, use a List format to enter a value in the ingressVIPs configuration setting. An IP address, for example 128.0.0.1 . diskType Optional. The disk provisioning method. This value defaults to the vSphere default storage policy if not set. Valid values are thin , thick , or eagerZeroedThick . 7.12.1.5. 
Optional VMware vSphere machine pool configuration parameters Optional VMware vSphere machine pool configuration parameters are described in the following table. Note The platform.vsphere parameter prefixes each parameter listed in the table. Table 7.12. Optional VMware vSphere machine pool parameters Parameter Description Values clusterOSImage The location from which the installation program downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with a SHA-256 checksum. For example, https://mirror.openshift.com/images/rhcos-<version>-vmware.<architecture>.ova . osDisk.diskSizeGB The size of the disk in gigabytes. Integer cpus The total number of virtual processor cores to assign a virtual machine. The value of platform.vsphere.cpus must be a multiple of platform.vsphere.coresPerSocket value. Integer coresPerSocket The number of cores per socket in a virtual machine. The number of virtual sockets on the virtual machine is platform.vsphere.cpus / platform.vsphere.coresPerSocket . The default value for control plane nodes and worker nodes is 4 and 2 , respectively. Integer memoryMB The size of a virtual machine's memory in megabytes. Integer 7.12.1.6. Region and zone enablement configuration parameters To use the region and zone enablement feature, you must specify region and zone enablement parameters in your installation file. Important Before you modify the install-config.yaml file to configure a region and zone enablement environment, read the "VMware vSphere region and zone enablement" and the "Configuring regions and zones for a VMware vCenter" sections. Note The platform.vsphere parameter prefixes each parameter listed in the table. Table 7.13. Region and zone enablement parameters Parameter Description Values failureDomains Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. String failureDomains.name The name of the failure domain. The machine pools use this name to reference the failure domain. String failureDomains.server Specifies the fully-qualified hostname or IP address of the VMware vCenter server, so that a client can access failure domain resources. You must apply the server role to the vSphere vCenter server location. String failureDomains.region You define a region by using a tag from the openshift-region tag category. The tag must be attached to the vCenter datacenter. String failureDomains.zone You define a zone by using a tag from the openshift-zone tag category. The tag must be attached to the vCenter datacenter. String failureDomains.topology.computeCluster This parameter defines the compute cluster associated with the failure domain. If you do not define this parameter in your configuration, the compute cluster takes the value of platform.vsphere.cluster and platform.vsphere.datacenter . String failureDomains.topology.folder The absolute path of an existing folder where the installation program creates the virtual machines. If you do not define this parameter in your configuration, the folder takes the value of platform.vsphere.folder . String failureDomains.topology.datacenter Defines the datacenter where OpenShift Container Platform virtual machines (VMs) operate. If you do not define this parameter in your configuration, the datacenter defaults to platform.vsphere.datacenter . 
String
failureDomains.topology.datastore Specifies the path to a vSphere datastore that stores virtual machine files for a failure domain. You must apply the datastore role to the vSphere vCenter datastore location. String
failureDomains.topology.networks Lists any network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. If you do not define this parameter in your configuration, the network takes the value of platform.vsphere.network . String
failureDomains.topology.resourcePool Optional: The absolute path of an existing resource pool where the installation program creates the virtual machines, for example, /<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>/<optional_nested_resource_pool_name> . If you do not specify a value, the installation program installs the resources in the root of the cluster under /<datacenter_name>/host/<cluster_name>/Resources . String
7.12.2. Sample install-config.yaml file for an installer-provisioned VMware vSphere cluster
You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters.
apiVersion: v1
baseDomain: example.com 1
compute: 2
- name: worker
  replicas: 3
  platform:
    vsphere: 3
      cpus: 2
      coresPerSocket: 2
      memoryMB: 8192
      osDisk:
        diskSizeGB: 120
controlPlane: 4
  name: master
  replicas: 3
  platform:
    vsphere: 5
      cpus: 4
      coresPerSocket: 2
      memoryMB: 16384
      osDisk:
        diskSizeGB: 120
metadata:
  name: cluster 6
platform:
  vsphere:
    vcenter: your.vcenter.server
    username: username
    password: password
    datacenter: datacenter
    defaultDatastore: datastore
    folder: folder
    resourcePool: resource_pool 7
    diskType: thin 8
    network: VM_Network
    cluster: vsphere_cluster_name 9
    apiVIPs:
    - api_vip
    ingressVIPs:
    - ingress_vip
    clusterOSImage: http://mirror.example.com/images/rhcos-47.83.202103221318-0-vmware.x86_64.ova 10
fips: false
pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 11
sshKey: 'ssh-ed25519 AAAA...'
additionalTrustBundle: | 12
  -----BEGIN CERTIFICATE-----
  ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
  -----END CERTIFICATE-----
imageContentSources: 13
- mirrors:
  - <mirror_host_name>:<mirror_port>/<repo_name>/release
  source: <source_image_1>
- mirrors:
  - <mirror_host_name>:<mirror_port>/<repo_name>/release-images
  source: <source_image_2>
1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name.
2 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used.
3 5 Optional: Provide additional configuration for the machine pool parameters for the compute and control plane machines.
6 The cluster name that you specified in your DNS records.
7 Optional: Provide an existing resource pool for machine creation. If you do not specify a value, the installation program uses the root resource pool of the vSphere cluster.
8 The vSphere disk provisioning method.
9 The vSphere cluster to install the OpenShift Container Platform cluster in.
10 The location of the Red Hat Enterprise Linux CoreOS (RHCOS) image that is accessible from the bastion server.
11 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry.
12 Provide the contents of the certificate file that you used for your mirror registry.
13 Provide the imageContentSources section from the output of the command to mirror the repository.
Note In OpenShift Container Platform 4.12 and later, the apiVIP and ingressVIP configuration settings are deprecated. Instead, use a list format to enter values in the apiVIPs and ingressVIPs configuration settings.
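Before you continue, you can optionally confirm that the customized install-config.yaml file is still syntactically valid YAML. The following check is a minimal sketch that assumes Python 3 and the PyYAML module are available on your installation host:
# Parse the file; any YAML syntax error is reported with a line number
$ python3 -c 'import yaml; yaml.safe_load(open("install-config.yaml")); print("install-config.yaml: valid YAML")'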
7.12.3. Configuring the cluster-wide proxy during installation
Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file.
Prerequisites
You have an existing install-config.yaml file.
You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.
Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ).
Procedure
Edit your install-config.yaml file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5
1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http .
2 A proxy URL to use for creating HTTPS connections outside the cluster.
3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. You must include vCenter's IP address and the IP range that you use for its machines.
4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle.
5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly .
Note The installation program does not support the proxy readinessEndpoints field.
Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:
$ ./openshift-install wait-for install-complete --log-level debug
Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec .
Note Only the Proxy object named cluster is supported, and no additional proxies can be created.
7.12.4. Configuring regions and zones for a VMware vCenter
You can modify the default installation configuration file to deploy an OpenShift Container Platform cluster to multiple vSphere datacenters that run in a single VMware vCenter.
Important VMware vSphere region and zone enablement is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope .
Important The example uses the govc command. The govc command is an open source command available from VMware. The govc command is not available from Red Hat. Red Hat Support does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website.
Prerequisites
You have an existing install-config.yaml installation configuration file.
Important You must specify at least one failure domain for your OpenShift Container Platform cluster, so that you can provision datacenter objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different datacenters, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your OpenShift Container Platform cluster.
Note You cannot change a failure domain after you installed an OpenShift Container Platform cluster on the VMware vSphere platform. You can add additional failure domains after cluster installation.
Procedure
Enter the following govc command-line tool commands to create the openshift-region and openshift-zone vCenter tag categories:
Important If you specify different names for the openshift-region and openshift-zone vCenter tag categories, the installation of the OpenShift Container Platform cluster fails.
$ govc tags.category.create -d "OpenShift region" openshift-region
$ govc tags.category.create -d "OpenShift zone" openshift-zone
To create a region tag for each vSphere datacenter where you want to deploy your cluster, enter the following command in your terminal:
$ govc tags.create -c <region_tag_category> <region_tag>
To create a zone tag for each vSphere cluster where you want to deploy your cluster, enter the following command:
$ govc tags.create -c <zone_tag_category> <zone_tag>
Attach region tags to each vCenter datacenter object by entering the following command:
$ govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1>
Attach the zone tags to each vCenter datacenter object by entering the following command:
$ govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1
Change to the directory that contains the installation program and initialize the cluster deployment according to your chosen installation requirements.
Sample install-config.yaml file with multiple datacenters defined in a single VMware vCenter
apiVersion: v1
baseDomain: example.com
featureSet: TechPreviewNoUpgrade 1
compute:
- name: worker
  replicas: 3
  vsphere:
    zones: 2
    - "<machine_pool_zone_1>"
    - "<machine_pool_zone_2>"
controlPlane:
  name: master
  replicas: 3
  vsphere:
    zones: 3
    - "<machine_pool_zone_1>"
    - "<machine_pool_zone_2>"
metadata:
  name: cluster
platform:
  vsphere:
    vcenter: <vcenter_server> 4
    username: <username> 5
    password: <password> 6
    datacenter: datacenter 7
    defaultDatastore: datastore 8
    folder: "/<datacenter_name>/vm/<folder_name>/<subfolder_name>" 9
    cluster: cluster 10
    resourcePool: "/<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>" 11
    diskType: thin
    failureDomains: 12
    - name: <machine_pool_zone_1> 13
      region: <region_tag_1> 14
      zone: <zone_tag_1> 15
      topology: 16
        datacenter: <datacenter1> 17
        computeCluster: "/<datacenter1>/host/<cluster1>" 18
        resourcePool: "/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>" 19
        networks: 20
        - <VM_Network1_name>
        datastore: "/<datacenter1>/datastore/<datastore1>" 21
    - name: <machine_pool_zone_2>
      region: <region_tag_2>
      zone: <zone_tag_2>
      topology:
        datacenter: <datacenter2>
        computeCluster: "/<datacenter2>/host/<cluster2>"
        networks:
        - <VM_Network2_name>
        datastore: "/<datacenter2>/datastore/<datastore2>"
        resourcePool: "/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>"
        folder: "/<datacenter2>/vm/<folder2>"
# ...
1 You must set TechPreviewNoUpgrade as the value for this parameter, so that you can use the VMware vSphere region and zone enablement feature.
2 3 An optional parameter for specifying a vCenter cluster. You define a zone by using a tag from the openshift-zone tag category. If you do not define this parameter, nodes will be distributed among all defined failure-domains.
4 5 6 7 8 9 10 11 The default vCenter topology. The installation program uses this topology information to deploy the bootstrap node. Additionally, the topology defines the default datastore for vSphere persistent volumes.
12 Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. If you do not define this parameter, the installation program uses the default vCenter topology.
13 Defines the name of the failure domain. Each failure domain is referenced in the zones parameter to scope a machine pool to the failure domain.
14 You define a region by using a tag from the openshift-region tag category. The tag must be attached to the vCenter datacenter.
15 You define a zone by using a tag from the openshift-zone tag category. The tag must be attached to the vCenter datacenter.
16 Specifies the vCenter resources associated with the failure domain.
17 An optional parameter for defining the vSphere datacenter that is associated with a failure domain. If you do not define this parameter, the installation program uses the default vCenter topology.
18 An optional parameter for stating the absolute file path for the compute cluster that is associated with the failure domain. If you do not define this parameter, the installation program uses the default vCenter topology.
19 An optional parameter for the installer-provisioned infrastructure. The parameter sets the absolute path of an existing resource pool where the installation program creates the virtual machines, for example, /<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>/<optional_nested_resource_pool_name> . If you do not specify a value, resources are installed in the root of the cluster /example_datacenter/host/example_cluster/Resources .
20 An optional parameter that lists any network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. If you do not define this parameter, the installation program uses the default vCenter topology.
21 An optional parameter for specifying a datastore to use for provisioning volumes. If you do not define this parameter, the installation program uses the default vCenter topology.
7.13. Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
Important You can run the create cluster command of the installation program only once, during initial installation.
Prerequisites
Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions.
Procedure
Change to the directory that contains the installation program and initialize the cluster deployment:
$ ./openshift-install create cluster --dir <installation_directory> \ 1
    --log-level=info 2
1 For <installation_directory> , specify the location of your customized ./install-config.yaml file.
2 To view different installation details, specify warn , debug , or error instead of info .
Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.
Verification
When the cluster deployment completes successfully:
The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user.
Credential information also outputs to <installation_directory>/.openshift_install.log .
Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
Example output
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "password"
INFO Time elapsed: 36m22s
Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
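If your terminal session is interrupted while the deployment is in progress, you do not need to run the create cluster command again. As a sketch, you can reattach to an in-progress deployment with the wait-for command of the installation program and follow the installation log; <installation_directory> is the same directory that you passed to create cluster:
# Wait for the running installation to complete
$ ./openshift-install wait-for install-complete --dir <installation_directory> --log-level=info
# Follow the installation log in another terminal
$ tail -f <installation_directory>/.openshift_install.log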
7.14. Installing the OpenShift CLI by downloading the binary
You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.
Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.12. Download and install the new version of oc .
Installing the OpenShift CLI on Linux
You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure.
Procedure
Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
Select the architecture from the Product Variant drop-down list.
Select the appropriate version from the Version drop-down list.
Click Download Now next to the OpenShift v4.12 Linux Client entry and save the file.
Unpack the archive:
$ tar xvf <file>
Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command:
$ echo $PATH
Verification
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
Installing the OpenShift CLI on Windows
You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure.
Procedure
Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
Select the appropriate version from the Version drop-down list.
Click Download Now next to the OpenShift v4.12 Windows Client entry and save the file.
Unzip the archive with a ZIP program.
Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command:
C:\> path
Verification
After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
Installing the OpenShift CLI on macOS
You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure.
Procedure
Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
Select the appropriate version from the Version drop-down list.
Click Download Now next to the OpenShift v4.12 macOS Client entry and save the file.
Note For macOS arm64, choose the OpenShift v4.12 macOS arm64 Client entry.
Unpack and unzip the archive.
Move the oc binary to a directory on your PATH . To check your PATH , open a terminal and execute the following command:
$ echo $PATH
Verification
After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
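For example, you can confirm which client version you installed before you log in to the cluster:
# Print only the client version; the server version requires a login
$ oc version --client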
7.15. Logging in to the cluster by using the CLI
You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites
You deployed an OpenShift Container Platform cluster.
You installed the oc CLI.
Procedure
Export the kubeadmin credentials:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
1 For <installation_directory> , specify the path to the directory that you stored the installation files in.
Verify that you can run oc commands successfully using the exported configuration:
$ oc whoami
Example output
system:admin
7.16. Disabling the default OperatorHub catalog sources
Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator.
Procedure
Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object:
$ oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'
Tip Alternatively, you can use the web console to manage catalog sources. From the Administration → Cluster Settings → Configuration → OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources.
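After you disable the default catalogs, you can verify the result. As a sketch, list the catalog sources in the openshift-marketplace namespace; in a restricted network, only the catalog sources that you created for your mirrored content should remain:
# Default sources such as redhat-operators should no longer be listed
$ oc get catalogsource -n openshift-marketplace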
7.17. Creating registry storage
After you install the cluster, you must create storage for the Registry Operator.
7.17.1. Image registry removed during installation
On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . When this has completed, you must configure storage.
7.17.2. Image registry storage configuration
The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available.
Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters.
Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades.
7.17.2.1. Configuring registry storage for VMware vSphere
As a cluster administrator, following installation you must configure your registry to use storage.
Prerequisites
Cluster administrator permissions.
A cluster on VMware vSphere.
Persistent storage provisioned for your cluster, such as Red Hat OpenShift Data Foundation.
Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required.
Must have "100Gi" capacity.
Important Testing shows issues with using the NFS server on RHEL as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components.
Procedure
To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource.
Note When you use shared storage, review your security settings to prevent outside access.
Verify that you do not have a registry pod:
$ oc get pod -n openshift-image-registry -l docker-registry=default
Example output
No resources found in openshift-image-registry namespace
Note If you do have a registry pod in your output, you do not need to continue with this procedure.
Check the registry configuration:
$ oc edit configs.imageregistry.operator.openshift.io
Example output
storage:
  pvc:
    claim: 1
1 Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica.
Check the clusteroperator status:
$ oc get clusteroperator image-registry
Example output
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE
image-registry 4.7 True False False 6h50m
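The following commands are a minimal sketch of the managementState change that is described in "Image registry removed during installation" and of the claim-based storage configuration from the preceding procedure; review that procedure before you apply them:
# Switch the Image Registry Operator from Removed to Managed
$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed"}}'
# Leave the claim blank so that the image-registry-storage PVC is created automatically
$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"pvc":{"claim":""}}}}'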
7.18. Telemetry access for OpenShift Container Platform
In OpenShift Container Platform 4.12, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console .
After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.
Additional resources
See About remote health monitoring for more information about the Telemetry service
7.19. Services for an external load balancer
You can configure an OpenShift Container Platform cluster to use an external load balancer in place of the default load balancer.
Important Configuring an external load balancer depends on your vendor's load balancer. The information and examples in this section are for guideline purposes only. Consult the vendor documentation for more specific information about the vendor's load balancer.
Red Hat supports the following services for an external load balancer:
Ingress Controller
OpenShift API
OpenShift MachineConfig API
You can choose whether you want to configure one or all of these services for an external load balancer. Configuring only the Ingress Controller service is a common configuration option. To better understand each service, view the following diagrams:
Figure 7.1. Example network workflow that shows an Ingress Controller operating in an OpenShift Container Platform environment
Figure 7.2. Example network workflow that shows an OpenShift API operating in an OpenShift Container Platform environment
Figure 7.3. Example network workflow that shows an OpenShift MachineConfig API operating in an OpenShift Container Platform environment
The following configuration options are supported for external load balancers:
Use a node selector to map the Ingress Controller to a specific set of nodes. You must assign a static IP address to each node in this set, or configure each node to receive the same IP address from the Dynamic Host Configuration Protocol (DHCP). Infrastructure nodes commonly receive this type of configuration.
Target all IP addresses on a subnet. This configuration can reduce maintenance overhead, because you can create and destroy nodes within those networks without reconfiguring the load balancer targets. If you deploy your ingress pods by using a machine set on a smaller network, such as a /27 or /28 , you can simplify your load balancer targets.
Tip You can list all IP addresses that exist in a network by checking the machine config pool's resources.
Before you configure an external load balancer for your OpenShift Container Platform cluster, consider the following information:
For a front-end IP address, you can use the same IP address for the Ingress Controller's load balancer and the API load balancer. Check the vendor's documentation for this capability.
For a back-end IP address, ensure that an IP address for an OpenShift Container Platform control plane node does not change during the lifetime of the external load balancer. You can achieve this by completing one of the following actions:
Assign a static IP address to each control plane node.
Configure each node to receive the same IP address from the DHCP every time the node requests a DHCP lease. Depending on the vendor, the DHCP lease might be in the form of an IP reservation or a static DHCP assignment.
Manually define each node that runs the Ingress Controller in the external load balancer for the Ingress Controller back-end service. For example, if the Ingress Controller moves to an undefined node, a connection outage can occur.
7.19.1. Configuring an external load balancer
You can configure an OpenShift Container Platform cluster to use an external load balancer in place of the default load balancer.
Important Before you configure an external load balancer, ensure that you read the "Services for an external load balancer" section. Read the following prerequisites that apply to the service that you want to configure for your external load balancer.
Note MetalLB, which runs on a cluster, functions as an external load balancer.
OpenShift API prerequisites
You defined a front-end IP address.
TCP ports 6443 and 22623 are exposed on the front-end IP address of your load balancer. Check the following items:
Port 6443 provides access to the OpenShift API service.
Port 22623 can provide ignition startup configurations to nodes.
The front-end IP address and port 6443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster.
The front-end IP address and port 22623 are reachable only by OpenShift Container Platform nodes.
The load balancer backend can communicate with OpenShift Container Platform control plane nodes on ports 6443 and 22623.
Ingress Controller prerequisites
You defined a front-end IP address.
TCP ports 443 and 80 are exposed on the front-end IP address of your load balancer.
The front-end IP address, port 80 and port 443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster.
The front-end IP address, port 80 and port 443 are reachable by all nodes that operate in your OpenShift Container Platform cluster.
The load balancer backend can communicate with OpenShift Container Platform nodes that run the Ingress Controller on ports 80, 443, and 1936.
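As a quick sanity check of these prerequisites, you can probe the exposed front-end ports before you continue. This sketch assumes that the nc (netcat) utility is available and that <load_balancer_ip_address> is your front-end IP address. Run it from an external host; port 22623 is intentionally omitted because it must be reachable only by cluster nodes:
# Each port should report a successful connection from an external host
$ for port in 6443 443 80; do nc -zv <load_balancer_ip_address> $port; done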
Prerequisite for health check URL specifications
You can configure most load balancers by setting health check URLs that determine if a service is available or unavailable. OpenShift Container Platform provides these health checks for the OpenShift API, Machine Configuration API, and Ingress Controller backend services.
The following examples demonstrate health check specifications for the previously listed backend services:
Example of a Kubernetes API health check specification
Path: HTTPS:6443/readyz
Healthy threshold: 2
Unhealthy threshold: 2
Timeout: 10
Interval: 10
Example of a Machine Config API health check specification
Path: HTTPS:22623/healthz
Healthy threshold: 2
Unhealthy threshold: 2
Timeout: 10
Interval: 10
Example of an Ingress Controller health check specification
Path: HTTP:1936/healthz/ready
Healthy threshold: 2
Unhealthy threshold: 2
Timeout: 5
Interval: 10
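You can also exercise these health check paths manually with curl before you configure the load balancer. The following probes are a sketch only; they target one control plane node and one node that runs the Ingress Controller directly, and the node IP addresses are placeholders:
# Kubernetes API readiness on a control plane node
$ curl -k https://<control_plane_node_ip>:6443/readyz
# Machine Config API health on a control plane node
$ curl -k https://<control_plane_node_ip>:22623/healthz
# Ingress Controller readiness on a node that runs the Ingress Controller
$ curl http://<ingress_node_ip>:1936/healthz/ready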
Procedure
Configure the HAProxy Ingress Controller, so that you can enable access to the cluster from your load balancer on ports 6443, 443, and 80:
Example HAProxy configuration
#...
listen my-cluster-api-6443
  bind 192.168.1.100:6443
  mode tcp
  balance roundrobin
  option httpchk
  http-check connect
  http-check send meth GET uri /readyz
  http-check expect status 200
  server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2
  server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2
  server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2
listen my-cluster-machine-config-api-22623
  bind 192.168.1.100:22623
  mode tcp
  balance roundrobin
  option httpchk
  http-check connect
  http-check send meth GET uri /healthz
  http-check expect status 200
  server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2
  server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2
  server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2
listen my-cluster-apps-443
  bind 192.168.1.100:443
  mode tcp
  balance roundrobin
  option httpchk
  http-check connect
  http-check send meth GET uri /healthz/ready
  http-check expect status 200
  server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2
  server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2
  server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2
listen my-cluster-apps-80
  bind 192.168.1.100:80
  mode tcp
  balance roundrobin
  option httpchk
  http-check connect
  http-check send meth GET uri /healthz/ready
  http-check expect status 200
  server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2
  server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2
  server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2
# ...
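If you use HAProxy as shown in this example, you can validate the configuration file and then reload the service after you add these sections. This sketch assumes that HAProxy runs under systemd and reads its configuration from /etc/haproxy/haproxy.cfg, which might differ on your system:
# Check the configuration file for syntax errors without restarting the service
$ haproxy -c -f /etc/haproxy/haproxy.cfg
# Apply the new configuration
$ systemctl reload haproxy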
Use the curl CLI command to verify that the external load balancer and its resources are operational:
Verify that the Kubernetes API server resource is accessible, by running the following command and observing the response:
$ curl https://<loadbalancer_ip_address>:6443/version --insecure
If the configuration is correct, you receive a JSON object in response:
{
  "major": "1",
  "minor": "11+",
  "gitVersion": "v1.11.0+ad103ed",
  "gitCommit": "ad103ed",
  "gitTreeState": "clean",
  "buildDate": "2019-01-09T06:44:10Z",
  "goVersion": "go1.10.3",
  "compiler": "gc",
  "platform": "linux/amd64"
}
Verify that the Machine config server resource is accessible, by running the following command and observing the output:
$ curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure
If the configuration is correct, the output from the command shows the following response:
HTTP/1.1 200 OK
Content-Length: 0
Verify that the Ingress Controller resource is accessible on port 80, by running the following command and observing the output:
$ curl -I -L -H "Host: console-openshift-console.apps.<cluster_name>.<base_domain>" http://<load_balancer_front_end_IP_address>
If the configuration is correct, the output from the command shows the following response:
HTTP/1.1 302 Found
content-length: 0
location: https://console-openshift-console.apps.ocp4.private.opequon.net/
cache-control: no-cache
Verify that the Ingress Controller resource is accessible on port 443, by running the following command and observing the output:
$ curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>
If the configuration is correct, the output from the command shows the following response:
HTTP/1.1 200 OK
referrer-policy: strict-origin-when-cross-origin
set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax
x-content-type-options: nosniff
x-dns-prefetch-control: off
x-frame-options: DENY
x-xss-protection: 1; mode=block
date: Wed, 04 Oct 2023 16:29:38 GMT
content-type: text/html; charset=utf-8
set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None
cache-control: private
Configure the DNS records for your cluster to target the front-end IP addresses of the external load balancer. You must update records to your DNS server for the cluster API and applications over the load balancer.
Examples of modified DNS records
<load_balancer_ip_address> A api.<cluster_name>.<base_domain>
A record pointing to Load Balancer Front End
<load_balancer_ip_address> A apps.<cluster_name>.<base_domain>
A record pointing to Load Balancer Front End
Important DNS propagation might take some time for each DNS record to become available. Ensure that each DNS record propagates before validating each record.
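As a sketch, you can confirm propagation with a DNS lookup tool such as dig before you run the verification commands. Both names should resolve to the front-end IP address of the load balancer:
# The API record
$ dig +short api.<cluster_name>.<base_domain>
# A route under the wildcard application record
$ dig +short console-openshift-console.apps.<cluster_name>.<base_domain>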
Use the curl CLI command to verify that the external load balancer and DNS record configuration are operational:
Verify that you can access the cluster API, by running the following command and observing the output:
$ curl https://api.<cluster_name>.<base_domain>:6443/version --insecure
If the configuration is correct, you receive a JSON object in response:
{
  "major": "1",
  "minor": "11+",
  "gitVersion": "v1.11.0+ad103ed",
  "gitCommit": "ad103ed",
  "gitTreeState": "clean",
  "buildDate": "2019-01-09T06:44:10Z",
  "goVersion": "go1.10.3",
  "compiler": "gc",
  "platform": "linux/amd64"
}
Verify that you can access the cluster machine configuration, by running the following command and observing the output:
$ curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure
If the configuration is correct, the output from the command shows the following response:
HTTP/1.1 200 OK
Content-Length: 0
Verify that you can access each cluster application on port 80, by running the following command and observing the output:
$ curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure
If the configuration is correct, the output from the command shows the following response:
HTTP/1.1 302 Found
content-length: 0
location: https://console-openshift-console.apps.<cluster-name>.<base domain>/
cache-control: no-cache
HTTP/1.1 200 OK
referrer-policy: strict-origin-when-cross-origin
set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure
x-content-type-options: nosniff
x-dns-prefetch-control: off
x-frame-options: DENY
x-xss-protection: 1; mode=block
date: Tue, 17 Nov 2020 08:42:10 GMT
content-type: text/html; charset=utf-8
set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None
cache-control: private
Verify that you can access each cluster application on port 443, by running the following command and observing the output:
$ curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure
If the configuration is correct, the output from the command shows the following response:
HTTP/1.1 200 OK
referrer-policy: strict-origin-when-cross-origin
set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax
x-content-type-options: nosniff
x-dns-prefetch-control: off
x-frame-options: DENY
x-xss-protection: 1; mode=block
date: Wed, 04 Oct 2023 16:29:38 GMT
content-type: text/html; charset=utf-8
set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None
cache-control: private
7.20. Next steps
Customize your cluster .
If necessary, you can opt out of remote health reporting .
If necessary, see Registering your disconnected cluster .
Set up your registry and configure registry storage .
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"certs ├── lin │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 ├── mac │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 └── win ├── 108f4d17.0.crt ├── 108f4d17.r1.crl ├── 7e757f6a.0.crt ├── 8e4f8471.0.crt └── 8e4f8471.r0.crl 3 directories, 15 files",
"cp certs/lin/* /etc/pki/ca-trust/source/anchors",
"update-ca-trust extract",
"./openshift-install create install-config --dir <installation_directory> 1",
"platform: vsphere: clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-vmware.x86_64.ova?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d",
"pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 name: worker replicas: 3 platform: vsphere: 3 cpus: 2 coresPerSocket: 2 memoryMB: 8192 osDisk: diskSizeGB: 120 controlPlane: 4 name: master replicas: 3 platform: vsphere: 5 cpus: 4 coresPerSocket: 2 memoryMB: 16384 osDisk: diskSizeGB: 120 metadata: name: cluster 6 platform: vsphere: vcenter: your.vcenter.server username: username password: password datacenter: datacenter defaultDatastore: datastore folder: folder resourcePool: resource_pool 7 diskType: thin 8 network: VM_Network cluster: vsphere_cluster_name 9 apiVIPs: - api_vip ingressVIPs: - ingress_vip clusterOSImage: http://mirror.example.com/images/rhcos-47.83.202103221318-0-vmware.x86_64.ova 10 fips: false pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 11 sshKey: 'ssh-ed25519 AAAA...' additionalTrustBundle: | 12 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 13 - mirrors: - <mirror_host_name>:<mirror_port>/<repo_name>/release source: <source_image_1> - mirrors: - <mirror_host_name>:<mirror_port>/<repo_name>/release-images source: <source_image_2>",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"govc tags.category.create -d \"OpenShift region\" openshift-region",
"govc tags.category.create -d \"OpenShift zone\" openshift-zone",
"govc tags.create -c <region_tag_category> <region_tag>",
"govc tags.create -c <zone_tag_category> <zone_tag>",
"govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1>",
"govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1",
"apiVersion: v1 baseDomain: example.com featureSet: TechPreviewNoUpgrade 1 compute: name: worker replicas: 3 vsphere: zones: 2 - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" controlPlane: name: master replicas: 3 vsphere: zones: 3 - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" metadata: name: cluster platform: vsphere: vcenter: <vcenter_server> 4 username: <username> 5 password: <password> 6 datacenter: datacenter 7 defaultDatastore: datastore 8 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" 9 cluster: cluster 10 resourcePool: \"/<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>\" 11 diskType: thin failureDomains: 12 - name: <machine_pool_zone_1> 13 region: <region_tag_1> 14 zone: <zone_tag_1> 15 topology: 16 datacenter: <datacenter1> 17 computeCluster: \"/<datacenter1>/host/<cluster1>\" 18 resourcePool: \"/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>\" 19 networks: 20 - <VM_Network1_name> datastore: \"/<datacenter1>/datastore/<datastore1>\" 21 - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> topology: datacenter: <datacenter2> computeCluster: \"/<datacenter2>/host/<cluster2>\" networks: - <VM_Network2_name> datastore: \"/<datacenter2>/datastore/<datastore2>\" resourcePool: \"/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>\" folder: \"/<datacenter2>/vm/<folder2>\"",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10",
"# listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2",
"curl https://<loadbalancer_ip_address>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl -I -L -H \"Host: console-openshift-console.apps.<cluster_name>.<base_domain>\" http://<load_balancer_front_end_IP_address>",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache",
"curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"<load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"curl https://api.<cluster_name>.<base_domain>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl http://console-openshift-console.apps.<cluster_name>.<base_domain -I -L --insecure",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_vsphere/installing-restricted-networks-installer-provisioned-vsphere |
Appendix B. Red Hat Trusted Profile Analyzer with other services values file template | Appendix B. Red Hat Trusted Profile Analyzer with other services values file template Red Hat's Trusted Profile Analyzer (RHTPA) with other services values file template for use by the RHTPA Helm chart. Template appDomain: USDAPP_DOMAIN_URL tracing: {} ingress: className: openshift-default storage: endpoint: S3_ENDPOINT_URL accessKey: valueFrom: secretKeyRef: name: s3-credentials key: user secretKey: valueFrom: secretKeyRef: name: s3-credentials key: password eventBus: type: kafka bootstrapServers: AMQ_ENDPOINT_URL :9092 config: securityProtocol: SASL_PLAINTEXT username: " USER_NAME " password: valueFrom: secretKeyRef: name: kafka-credentials key: client_password mechanism: SCRAM-SHA-512 oidc: issuerUrl: OIDC_ISSUER_URL clients: frontend: clientId: FRONTEND_CLIENT_ID walker: clientId: WALKER_CLIENT_ID clientSecret: valueFrom: secretKeyRef: name: oidc-walker key: client-secret bombastic: bucket: bombastic-default topics: failed: bombastic-failed-default indexed: bombastic-indexed-default stored: bombastic-stored-default vexination: bucket: vexination-default topics: failed: vexination-failed-default indexed: vexination-indexed-default stored: vexination-stored-default v11y: bucket: v11y-default topics: failed: v11y-failed-default indexed: v11y-indexed-default stored: v11y-stored-default guac: database: name: valueFrom: secretKeyRef: name: postgresql-credentials key: db.name host: valueFrom: secretKeyRef: name: postgresql-credentials key: db.host port: valueFrom: secretKeyRef: name: postgresql-credentials key: db.port username: valueFrom: secretKeyRef: name: postgresql-credentials key: db.user password: valueFrom: secretKeyRef: name: postgresql-credentials key: db.password initDatabase: name: valueFrom: secretKeyRef: name: postgresql-admin-credentials key: db.name host: valueFrom: secretKeyRef: name: postgresql-admin-credentials key: db.host port: valueFrom: secretKeyRef: name: postgresql-admin-credentials key: db.port username: valueFrom: secretKeyRef: name: postgresql-admin-credentials key: db.user password: valueFrom: secretKeyRef: name: postgresql-admin-credentials key: db.password | [
"appDomain: USDAPP_DOMAIN_URL tracing: {} ingress: className: openshift-default storage: endpoint: S3_ENDPOINT_URL accessKey: valueFrom: secretKeyRef: name: s3-credentials key: user secretKey: valueFrom: secretKeyRef: name: s3-credentials key: password eventBus: type: kafka bootstrapServers: AMQ_ENDPOINT_URL :9092 config: securityProtocol: SASL_PLAINTEXT username: \" USER_NAME \" password: valueFrom: secretKeyRef: name: kafka-credentials key: client_password mechanism: SCRAM-SHA-512 oidc: issuerUrl: OIDC_ISSUER_URL clients: frontend: clientId: FRONTEND_CLIENT_ID walker: clientId: WALKER_CLIENT_ID clientSecret: valueFrom: secretKeyRef: name: oidc-walker key: client-secret bombastic: bucket: bombastic-default topics: failed: bombastic-failed-default indexed: bombastic-indexed-default stored: bombastic-stored-default vexination: bucket: vexination-default topics: failed: vexination-failed-default indexed: vexination-indexed-default stored: vexination-stored-default v11y: bucket: v11y-default topics: failed: v11y-failed-default indexed: v11y-indexed-default stored: v11y-stored-default guac: database: name: valueFrom: secretKeyRef: name: postgresql-credentials key: db.name host: valueFrom: secretKeyRef: name: postgresql-credentials key: db.host port: valueFrom: secretKeyRef: name: postgresql-credentials key: db.port username: valueFrom: secretKeyRef: name: postgresql-credentials key: db.user password: valueFrom: secretKeyRef: name: postgresql-credentials key: db.password initDatabase: name: valueFrom: secretKeyRef: name: postgresql-admin-credentials key: db.name host: valueFrom: secretKeyRef: name: postgresql-admin-credentials key: db.host port: valueFrom: secretKeyRef: name: postgresql-admin-credentials key: db.port username: valueFrom: secretKeyRef: name: postgresql-admin-credentials key: db.user password: valueFrom: secretKeyRef: name: postgresql-admin-credentials key: db.password"
] | https://docs.redhat.com/en/documentation/red_hat_trusted_profile_analyzer/1/html/deployment_guide/rhtpa-with-other-services-values-file-template_deploy |
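A minimal sketch of how a values file based on this template is typically consumed. The release name (rhtpa), namespace, and chart reference below are illustrative placeholders rather than documented chart coordinates; substitute the chart source given in your RHTPA installation instructions. Save the template above as values.yaml, fill in the placeholders, then run: helm install rhtpa <chart_reference> --namespace trusted-profile-analyzer --create-namespace --values values.yaml . After the release is installed, helm status rhtpa -n trusted-profile-analyzer reports the deployed resources.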
Chapter 5. Installing a cluster on IBM Power Virtual Server into an existing VPC | Chapter 5. Installing a cluster on IBM Power Virtual Server into an existing VPC In OpenShift Container Platform version 4.16, you can install a cluster into an existing Virtual Private Cloud (VPC) on IBM Cloud(R). The installation program provisions the rest of the required infrastructure, which you can then further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 5.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an IBM Cloud(R) account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. You configured the ccoctl utility before you installed the cluster. For more information, see Configuring the Cloud Credential Operator utility . 5.2. About using a custom VPC In OpenShift Container Platform 4.16, you can deploy a cluster using an existing IBM(R) Virtual Private Cloud (VPC). Because the installation program cannot know what other components are in your existing subnets, it cannot choose subnet CIDRs and so forth. You must configure networking for the subnets to which you will install the cluster. 5.2.1. Requirements for using your VPC You must correctly configure the existing VPC and its subnets before you install the cluster. The installation program does not create a VPC or VPC subnet in this scenario. The installation program cannot: Subdivide network ranges for the cluster to use Set route tables for the subnets Set VPC options like DHCP Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. 5.2.2. VPC validation The VPC and all of the subnets must be in an existing resource group. The cluster is deployed to this resource group. As part of the installation, specify the following in the install-config.yaml file: The name of the resource group The name of the VPC The name of the VPC subnet To ensure that the subnets that you provide are suitable, the installation program confirms that all of the subnets you specify exist. Note Subnet IDs are not supported. 5.2.3. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: ICMP Ingress is allowed to the entire network. TCP port 22 Ingress (SSH) is allowed to the entire network. Control plane TCP 6443 Ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 Ingress (MCS) is allowed to the entire network. 5.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.16, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates.
Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 5.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. 
Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 5.5. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 5.6. Exporting the API key You must set the API key you created as a global variable; the installation program ingests the variable during startup to set the API key. Prerequisites You have created either a user API key or service ID API key for your IBM Cloud(R) account. Procedure Export your API key for your account as a global variable: USD export IBMCLOUD_API_KEY=<api_key> Important You must set the variable name exactly as specified; the installation program expects the variable name to be present during startup. 5.7. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on IBM Power(R) Virtual Server. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates.
When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Enter a descriptive name for your cluster. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for IBM Power(R) Virtual Server 5.7.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 5.1. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 5.7.2. 
Sample customized install-config.yaml file for IBM Power Virtual Server You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com compute: 1 2 - architecture: ppc64le hyperthreading: Enabled 3 name: worker platform: powervs: smtLevel: 8 4 replicas: 3 controlPlane: 5 6 architecture: ppc64le hyperthreading: Enabled 7 name: master platform: powervs: smtLevel: 8 8 replicas: 3 metadata: creationTimestamp: null name: example-cluster-existing-vpc networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/24 networkType: OVNKubernetes 10 serviceNetwork: - 172.30.0.0/16 platform: powervs: userID: ibm-user-id powervsResourceGroup: "ibmcloud-resource-group" region: powervs-region vpcRegion : vpc-region vpcName: name-of-existing-vpc 11 vpcSubnets: 12 - powervs-region-example-subnet-1 zone: powervs-zone serviceInstanceGUID: "powervs-region-service-instance-guid" credentialsMode: Manual publish: External 13 pullSecret: '{"auths": ...}' 14 fips: false sshKey: ssh-ed25519 AAAA... 15 1 5 If you do not provide these parameters and values, the installation program provides the default value. 2 6 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Both sections currently define a single machine pool. Only one control plane pool is used. 3 7 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. 4 8 The smtLevel specifies the level of SMT to set to the control plane and compute machines. The supported values are 1, 2, 4, 8, 'off' and 'on' . The default value is 8. The smtLevel 'off' sets SMT to off and smtlevel 'on' sets SMT to the default value 8 on the cluster nodes. Note When simultaneous multithreading (SMT), or hyperthreading is not enabled, one vCPU is equivalent to one physical core. When enabled, total vCPUs is computed as (Thread(s) per core * Core(s) per socket) * Socket(s). The smtLevel controls the threads per core. Lower SMT levels may require additional assigned cores when deploying the cluster nodes. You can do this by setting the 'processors' parameter in the install-config.yaml file to an appropriate value to meet the requirements for deploying OpenShift Container Platform successfully. 9 The machine CIDR must contain the subnets for the compute machines and control plane machines. 10 The cluster network plugin for installation. The supported value is OVNKubernetes . 11 Specify the name of an existing VPC. 12 Specify the name of the existing VPC subnet. The subnets must belong to the VPC that you specified. Specify a subnet for each availability zone in the region. 13 Specify how to publish the user-facing endpoints of your cluster. 14 Required. The installation program prompts you for this value. 
15 Provide the sshKey value that you use to access the machines in your cluster. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 5.7.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. 
Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 5.8. Manually creating IAM Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets for your cloud provider. You can use the Cloud Credential Operator (CCO) utility ( ccoctl ) to create the required IBM Cloud(R) resources. Prerequisites You have configured the ccoctl binary. You have an existing install-config.yaml file. Procedure Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled 1 This line is added to set the credentialsMode parameter to Manual . To generate the manifests, run the following command from the directory that contains the installation program: USD ./openshift-install create manifests --dir <installation_directory> From the directory that contains the installation program, set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object.
Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: "1.0" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer Create the service ID for each credential request, assign the policies defined, create an API key, and generate the secret: USD ccoctl ibmcloud create-service-id \ --credentials-requests-dir=<path_to_credential_requests_directory> \ 1 --name=<cluster_name> \ 2 --output-dir=<installation_directory> \ 3 --resource-group-name=<resource_group_name> 4 1 Specify the directory containing the files for the component CredentialsRequest objects. 2 Specify the name of the OpenShift Container Platform cluster. 3 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 4 Optional: Specify the name of the resource group used for scoping the access policies. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. If an incorrect resource group name is provided, the installation fails during the bootstrap phase. To find the correct resource group name, run the following command: USD grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml Verification Ensure that the appropriate secrets were generated in your cluster's manifests directory. 5.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . 
Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 5.10. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.16. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.16 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.16 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list.
Click Download Now next to the OpenShift v4.16 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.16 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 5.11. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources Accessing the web console 5.12. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.16, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 5.13. Next steps Customize your cluster Optional: Opt out of remote health reporting | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"export IBMCLOUD_API_KEY=<api_key>",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: example.com compute: 1 2 - architecture: ppc64le hyperthreading: Enabled 3 name: worker platform: powervs: smtLevel: 8 4 replicas: 3 controlPlane: 5 6 architecture: ppc64le hyperthreading: Enabled 7 name: master platform: powervs: smtLevel: 8 8 replicas: 3 metadata: creationTimestamp: null name: example-cluster-existing-vpc networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/24 networkType: OVNKubernetes 10 serviceNetwork: - 172.30.0.0/16 platform: powervs: userID: ibm-user-id powervsResourceGroup: \"ibmcloud-resource-group\" region: powervs-region vpcRegion : vpc-region vpcName: name-of-existing-vpc 11 vpcSubnets: 12 - powervs-region-example-subnet-1 zone: powervs-zone serviceInstanceGUID: \"powervs-region-service-instance-guid\" credentialsMode: Manual publish: External 13 pullSecret: '{\"auths\": ...}' 14 fips: false sshKey: ssh-ed25519 AAAA... 15",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled",
"./openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer",
"ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4",
"grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_ibm_power_virtual_server/installing-ibm-powervs-vpc |
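Beyond oc whoami, a quick post-install sanity check is to confirm that all nodes are ready and that every cluster Operator has finished rolling out. These are generic OpenShift verification commands, not specific to IBM Power Virtual Server: USD oc get nodes followed by USD oc get clusteroperators . Each Operator should eventually report AVAILABLE as True with PROGRESSING and DEGRADED as False; an Operator stuck in a degraded state is usually the first place to look when an installation completes but workloads misbehave.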
Chapter 5. Troubleshooting notification failures | Chapter 5. Troubleshooting notification failures The notifications service event log enables Notifications administrators to see when notifications are not working properly. The event log provides a list of all triggered events on the Red Hat Hybrid Cloud Console account, and actions taken (as configured in the associated behavior group) for the past 14 days. In the Action taken column, each event shows the notification method highlighted in green or red to indicate the status of the message transmission. The filterable event log is a useful troubleshooting tool to see a failed notification event and identify potential issues with endpoints. The event log can answer questions related to receipt of emails. By showing the email action for an event as green, the event log enables a Notifications administrator to confirm that emails were sent successfully. Even when notifications are configured properly, individual users on the Hybrid Cloud Console account must configure their user preferences to receive emails. Prerequisites You are logged in to the Hybrid Cloud Console as a user with Notifications administrator or Organization Administrator permissions. Procedure In the Hybrid Cloud Console, navigate to Settings > Notifications > Event Log . Filter the events list by event, application, application bundle, action type, or action status. Select the time frame to show events from today, yesterday, the last seven days, the last 14 days (default), or set a custom range within the last 14 days. Sort the Date and time column in ascending or descending order. Navigate to Settings > Notifications > Configure Events , and verify or change settings by event. Ask users to check their user preferences for receiving email notifications. Even when notifications are configured properly, individual users on the Hybrid Cloud Console account must configure their user preferences to receive emails. Additional resources For more information about network and firewall configuration, see Firewall Configuration for accessing Red Hat Insights / Hybrid Cloud Console Integrations & Notifications . To configure your personal preferences for receiving notifications, see Configuring user preferences for email notifications . | null | https://docs.redhat.com/en/documentation/red_hat_hybrid_cloud_console/1-latest/html/configuring_notifications_on_the_red_hat_hybrid_cloud_console_with_fedramp/proc-troubleshoot_notifications |
Chapter 24. Managing routing endpoints | Chapter 24. Managing routing endpoints The JMX Navigator view lets you add or delete routing endpoints. Important These changes are not persistent across routing context restarts. 24.1. Adding a routing endpoint Overview When testing a new scenario, you might want to add a new endpoint to a routing context. Procedure To add an endpoint to a routing context: In the JMX Navigator view, under the routing context node, select the Endpoints child to which you want to add an endpoint. Right-click the selected node to open the context menu, and then select Create Endpoint . In the Create Endpoint dialog, enter a URL that defines the new endpoint, for example, file://target/messages/validOrders . Click OK . Right-click the routing context node, and select Refresh . The new destination appears in the JMX Navigator view under the Endpoints node, in a folder that corresponds to the type of endpoint it is, for example, file . Related topics Section 24.2, "Deleting a routing endpoint" 24.2. Deleting a routing endpoint Overview When testing failover scenarios or other scenarios that involve handling failures, it is helpful to be able to remove an endpoint from a routing context. Procedure To delete a routing endpoint: In the JMX Navigator view, select the endpoint you want delete. Right-click the selected endpoint to open the context menu, and then select Delete Endpoint . The tooling deletes the endpoint. To remove the deleted endpoint from the view, right-click the Endpoints node, and select Refresh . The endpoint disappears from the JMX Navigator view. Note To remove the endpoint's node from the Project Explorer view without rerunning the project, you need to explicitly delete it by right-clicking the node and selecting Delete . To remove it from view, refresh the project display. Related topics Section 24.1, "Adding a routing endpoint" | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/tooling_user_guide/RiderManageEndpoints |
Chapter 44. Kafka Sink | Chapter 44. Kafka Sink Send data to Kafka topics. The Kamelet recognizes the following headers when they are set: key / ce-key : as message key partition-key / ce-partitionkey : as message partition key Both headers are optional. 44.1. Configuration Options The following table summarizes the configuration options available for the kafka-sink Kamelet: Property Name Description Type Default Example bootstrapServers * Brokers Comma separated list of Kafka Broker URLs string password * Password Password to authenticate to Kafka string topic * Topic Names Comma separated list of Kafka topic names string user * Username Username to authenticate to Kafka string saslMechanism SASL Mechanism The Simple Authentication and Security Layer (SASL) Mechanism used. string "PLAIN" securityProtocol Security Protocol Protocol used to communicate with brokers. SASL_PLAINTEXT, PLAINTEXT, SASL_SSL and SSL are supported string "SASL_SSL" Note Fields marked with an asterisk (*) are mandatory. 44.2. Dependencies At runtime, the kafka-sink Kamelet relies upon the presence of the following dependencies: camel:kafka camel:kamelet 44.3. Usage This section describes how you can use the kafka-sink . 44.3.1. Knative Sink You can use the kafka-sink Kamelet as a Knative sink by binding it to a Knative object. kafka-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: kafka-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: kafka-sink properties: bootstrapServers: "The Brokers" password: "The Password" topic: "The Topic Names" user: "The Username" 44.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 44.3.1.2. Procedure for using the cluster CLI Save the kafka-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f kafka-sink-binding.yaml 44.3.1.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind channel:mychannel kafka-sink -p "sink.bootstrapServers=The Brokers" -p "sink.password=The Password" -p "sink.topic=The Topic Names" -p "sink.user=The Username" This command creates the KameletBinding in the current namespace on the cluster. 44.3.2. Kafka Sink You can use the kafka-sink Kamelet as a Kafka sink by binding it to a Kafka topic. kafka-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: kafka-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: kafka-sink properties: bootstrapServers: "The Brokers" password: "The Password" topic: "The Topic Names" user: "The Username" 44.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 44.3.2.2. Procedure for using the cluster CLI Save the kafka-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f kafka-sink-binding.yaml 44.3.2.3.
Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic kafka-sink -p "sink.bootstrapServers=The Brokers" -p "sink.password=The Password" -p "sink.topic=The Topic Names" -p "sink.user=The Username" This command creates the KameletBinding in the current namespace on the cluster. 44.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/kafka-sink.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: kafka-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: kafka-sink properties: bootstrapServers: \"The Brokers\" password: \"The Password\" topic: \"The Topic Names\" user: \"The Username\"",
"apply -f kafka-sink-binding.yaml",
"kamel bind channel:mychannel kafka-sink -p \"sink.bootstrapServers=The Brokers\" -p \"sink.password=The Password\" -p \"sink.topic=The Topic Names\" -p \"sink.user=The Username\"",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: kafka-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: kafka-sink properties: bootstrapServers: \"The Brokers\" password: \"The Password\" topic: \"The Topic Names\" user: \"The Username\"",
"apply -f kafka-sink-binding.yaml",
"kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic kafka-sink -p \"sink.bootstrapServers=The Brokers\" -p \"sink.password=The Password\" -p \"sink.topic=The Topic Names\" -p \"sink.user=The Username\""
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/kamelets_reference/kafka-sink |
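After creating the binding with either procedure, you can check that it was accepted and is running before sending data. A minimal sketch, assuming the generated integration inherits the binding's name used in the examples above:

oc get kameletbinding kafka-sink-binding
oc get integration kafka-sink-binding
kamel logs kafka-sink-binding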
Appendix B. iSCSI Disks | Appendix B. iSCSI Disks Internet Small Computer System Interface (iSCSI) is a protocol that allows computers to communicate with storage devices by SCSI requests and responses carried over TCP/IP. Because iSCSI is based on the standard SCSI protocols, it uses some terminology from SCSI. The device on the SCSI bus to which requests get sent, and which answers these requests, is known as the target and the device issuing requests is known as the initiator . In other words, an iSCSI disk is a target and the iSCSI software equivalent of a SCSI controller or SCSI Host Bus Adapter (HBA) is called an initiator. This appendix only covers Linux as an iSCSI initiator: how Linux uses iSCSI disks, but not how Linux hosts iSCSI disks. Linux has a software iSCSI initiator in the kernel that takes the place and form of a SCSI HBA driver and therefore allows Linux to use iSCSI disks. However, as iSCSI is a fully network-based protocol, iSCSI initiator support requires more than just the ability to send SCSI packets over the network. Before Linux can use an iSCSI target, Linux must find the target on the network and make a connection to it. In some cases, Linux must send authentication information to gain access to the target. Linux must also detect any failure of the network connection and must establish a new connection, including logging in again if necessary. The discovery, connection, and logging in are handled in user space by the iscsiadm utility, while errors are handled, also in user space, by the iscsid utility. Both iscsiadm and iscsid are part of the iscsi-initiator-utils package under Red Hat Enterprise Linux. B.1. iSCSI Disks in Anaconda The Anaconda installation program can discover and log in to iSCSI disks in two ways: When Anaconda starts, it checks if the BIOS or add-on boot ROMs of the system support iSCSI Boot Firmware Table (iBFT), a BIOS extension for systems which can boot from iSCSI. If the BIOS supports iBFT, Anaconda will read the iSCSI target information for the configured boot disk from the BIOS and log in to this target, making it available as an installation target. Important To connect automatically to an iSCSI target, a network device for accessing the target needs to be activated. The recommended way to do so is to use the ip=ibft boot option. You can discover and add iSCSI targets manually in the graphical user interface in anaconda . From the main menu, the Installation Summary screen, click the Installation Destination option. Then click Add a disk in the Specialized & Network Disks section of the screen. A tabbed list of available storage devices appears. In the lower right corner, click the Add iSCSI Target button and proceed with the discovery process. See Section 8.15.1, "The Storage Devices Selection Screen" for more information. Important Restriction: The /boot partition cannot be placed on iSCSI targets that have been manually added using this method - an iSCSI target containing a /boot partition must be configured for use with iBFT. However, in instances where the installed system is expected to boot from iSCSI with iBFT configuration provided by a method other than firmware iBFT, for example using iPXE, the /boot partition restriction can be disabled using the inst.nonibftiscsiboot installer boot option. While Anaconda uses iscsiadm to find and log into iSCSI targets, iscsiadm automatically stores any information about these targets in the iscsiadm iSCSI database.
Anaconda then copies this database to the installed system and marks any iSCSI targets not used for / so that the system will automatically log in to them when it starts. If / is placed on an iSCSI target, initrd will log into this target and Anaconda does not include this target in startup scripts to avoid multiple attempts to log into the same target. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/appe-iscsi-disks
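The discovery and login steps that Anaconda performs can also be run by hand with iscsiadm, which is useful for verifying that a target is reachable before starting an installation. A sketch, with a hypothetical portal address and target IQN:

iscsiadm -m discovery -t sendtargets -p 192.168.1.10:3260
iscsiadm -m node -T iqn.2010-09.com.example:storage.target1 -p 192.168.1.10:3260 --login

Records created by these commands are kept in the same iscsiadm database (under /var/lib/iscsi/) that Anaconda copies to the installed system.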
5.7. Storage Management Day-to-Day | 5.7. Storage Management Day-to-Day System administrators must pay attention to storage in the course of their day-to-day routine. There are various issues that should be kept in mind: Monitoring free space Disk quota issues File-related issues Directory-related issues Backup-related issues Performance-related issues Adding/removing storage The following sections discuss each of these issues in more detail. 5.7.1. Monitoring Free Space Making sure there is sufficient free space available should be at the top of every system administrator's daily task list. The reason why regular, frequent free space checking is so important is because free space is so dynamic; there can be more than enough space one moment, and almost none the next. In general, there are three reasons for insufficient free space: Excessive usage by a user Excessive usage by an application Normal growth in usage These reasons are explored in more detail in the following sections. 5.7.1.1. Excessive Usage by a User Different people have different levels of neatness. Some people would be horrified to see a speck of dust on a table, while others would not think twice about having a collection of last year's pizza boxes stacked by the sofa. It is the same with storage: Some people are very frugal in their storage usage and never leave any unneeded files hanging around. Some people never seem to find the time to get rid of files that are no longer needed. Many times where a user is responsible for using large amounts of storage, it is the second type of person that is found to be responsible. 5.7.1.1.1. Handling a User's Excessive Usage This is one area in which a system administrator needs to summon all the diplomacy and social skills they can muster. Quite often discussions over disk space become emotional, as people view enforcement of disk usage restrictions as making their job more difficult (or impossible), that the restrictions are unreasonably small, or that they just do not have the time to clean up their files. The best system administrators take many factors into account in such a situation. Are the restrictions equitable and reasonable for the type of work being done by this person? Does the person seem to be using their disk space appropriately? Can you help the person reduce their disk usage in some way (by creating a backup CD-ROM of all emails over one year old, for example)? Your job during the conversation is to attempt to discover if this is, in fact, the case while making sure that someone that has no real need for that much storage cleans up their act. In any case, the thing to do is to keep the conversation on a professional, factual level. Try to address the user's issues in a polite manner ("I understand you are very busy, but everyone else in your department has the same responsibility to not waste storage, and their average utilization is less than half of yours.") while moving the conversation toward the matter at hand. Be sure to offer assistance if a lack of knowledge/experience seems to be the problem. Approaching the situation in a sensitive but firm manner is often better than using your authority as system administrator to force a certain outcome. For example, you might find that sometimes a compromise between you and the user is necessary.
This compromise can take one of three forms: Provide temporary space Make archival backups Give up You might find that the user can reduce their usage if they have some amount of temporary space that they can use without restriction. People that often take advantage of this situation find that it allows them to work without worrying about space until they get to a logical stopping point, at which time they can perform some housekeeping, and determine what files in temporary storage are really needed or not. Warning If you offer this situation to a user, do not fall into the trap of allowing this temporary space to become permanent space. Make it very clear that the space being offered is temporary, and that no guarantees can be made as to data retention; no backups of any data in temporary space are ever made. In fact, many administrators often underscore this fact by automatically deleting any files in temporary storage that are older than a certain age (a week, for example). Other times, the user may have many files that are so obviously old that it is unlikely continuous access to them is needed. Make sure you determine that this is, in fact, the case. Sometimes individual users are responsible for maintaining an archive of old data; in these instances, you should make a point of assisting them in that task by providing multiple backups that are treated no differently from your data center's archival backups. However, there are times when the data is of dubious value. In these instances you might find it best to offer to make a special backup for them. You then back up the old data, and give the user the backup media, explaining that they are responsible for its safekeeping, and if they ever need access to any of the data, to ask you (or your organization's operations staff -- whatever is appropriate for your organization) to restore it. There are a few things to keep in mind so that this does not backfire on you. First and foremost is to not include files that are likely to need restoring; do not select files that are too new. Next, make sure that you are able to perform a restoration if one ever is requested. This means that the backup media should be of a type that you are reasonably sure will be used in your data center for the foreseeable future. Note Your choice of backup media should also take into consideration those technologies that can enable the user to handle data restoration themselves. For example, even though backing up several gigabytes onto CD-R media is more work than issuing a single command and spinning it off to a 20GB tape cartridge, consider that the user will then be able to access the data on CD-R whenever they want -- without ever involving you. 5.7.1.2. Excessive Usage by an Application Sometimes an application is responsible for excessive usage. The reasons for this can vary, but can include: Enhancements in the application's functionality require more storage An increase in the number of users using the application The application fails to clean up after itself, leaving no-longer-needed temporary files on disk The application is broken, and the bug is causing it to use more storage than it should Your task is to determine which of the reasons from this list apply to your situation. Being aware of the status of the applications used in your data center should help you eliminate several of these reasons, as should your awareness of your users' processing habits. What remains to be done is often a bit of detective work into where the storage has gone.
This should narrow down the field substantially. At this point you must then take the appropriate steps, be it the addition of storage to support an increasingly-popular application, contacting the application's developers to discuss its file handling characteristics, or writing scripts to clean up after the application. 5.7.1.3. Normal Growth in Usage Most organizations experience some level of growth over the long term. Because of this, it is normal to expect storage utilization to increase at a similar pace. In nearly all circumstances, ongoing monitoring can reveal the average rate of storage utilization at your organization; this rate can then be used to determine the time at which additional storage should be procured before your free space actually runs out. If you are in the position of unexpectedly running out of free space due to normal growth, you have not been doing your job. However, sometimes large additional demands on your systems' storage can come up unexpectedly. Your organization may have merged with another, necessitating rapid changes in the IT infrastructure (and therefore, storage). A new high-priority project may have literally sprung up overnight. Changes to an existing application may have resulted in greatly increased storage needs. No matter what the reason, there are times when you will be taken by surprise. To plan for these instances, try to configure your storage architecture for maximum flexibility. Keeping spare storage on-hand (if possible) can alleviate the impact of such unplanned events. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s1-storage-dtd |
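A few shell commands cover most of the routine monitoring and cleanup described above. A sketch; the /scratch path and the seven-day retention window are placeholders, not a recommendation for every site:

df -h                                     # free space per file system
du -sh /home/* | sort -rh | head          # largest home directories first
find /scratch -type f -mtime +7 -delete   # purge temporary files older than a week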
21.4. kvm_stat | 21.4. kvm_stat The kvm_stat command is a python script which retrieves runtime statistics from the kvm kernel module. The kvm_stat command can be used to diagnose guest behavior visible to kvm, in particular performance-related issues with guests. Currently, the reported statistics are for the entire system; the behavior of all running guests is reported. To run this script you need to install the qemu-kvm-tools package. The kvm_stat command requires that the kvm kernel module is loaded and debugfs is mounted. If either of these features is not enabled, the command will output the required steps to enable debugfs or the kvm module. For example: Mount debugfs if required: kvm_stat Output The kvm_stat command outputs statistics for all guests and the host. The output is updated until the command is terminated (using Ctrl + c or the q key). Explanation of variables: efer_reload The number of Extended Feature Enable Register (EFER) reloads. exits The count of all VMEXIT calls. fpu_reload The number of times a VMENTRY reloaded the FPU state. The fpu_reload is incremented when a guest is using the Floating Point Unit (FPU). halt_exits Number of guest exits due to halt calls. This type of exit is usually seen when a guest is idle. halt_wakeup Number of wakeups from a halt. host_state_reload Count of full reloads of the host state (currently tallies MSR setup and guest MSR reads). hypercalls Number of guest hypervisor service calls. insn_emulation Number of guest instructions emulated by the host. insn_emulation_fail Number of failed insn_emulation attempts. io_exits Number of guest exits from I/O port accesses. irq_exits Number of guest exits due to external interrupts. irq_injections Number of interrupts sent to guests. irq_window Number of guest exits from an outstanding interrupt window. largepages Number of large pages currently in use. mmio_exits Number of guest exits due to memory mapped I/O (MMIO) accesses. mmu_cache_miss Number of KVM MMU shadow pages created. mmu_flooded Detection count of excessive write operations to an MMU page. This counts detected write operations, not individual write operations. mmu_pde_zapped Number of page directory entry (PDE) destruction operations. mmu_pte_updated Number of page table entry (PTE) update operations. mmu_pte_write Number of guest page table entry (PTE) write operations. mmu_recycled Number of shadow pages that can be reclaimed. mmu_shadow_zapped Number of invalidated shadow pages. mmu_unsync Number of non-synchronized pages which are not yet unlinked. nmi_injections Number of Non-maskable Interrupt (NMI) injections to the guest. nmi_window Number of guest exits from (outstanding) Non-maskable Interrupt (NMI) windows. pf_fixed Number of fixed (non-paging) page table entry (PTE) maps. pf_guest Number of page faults injected into guests. remote_tlb_flush Number of remote (sibling CPU) Translation Lookaside Buffer (TLB) flush requests. request_irq Number of guest interrupt window request exits. signal_exits Number of guest exits due to pending signals from the host. tlb_flush Number of tlb_flush operations performed by the hypervisor. Note The output information from the kvm_stat command is exported by the KVM hypervisor as pseudo files located in the /sys/kernel/debug/kvm/ directory.
"kvm_stat Please mount debugfs ('mount -t debugfs debugfs /sys/kernel/debug') and ensure the kvm modules are loaded",
"mount -t debugfs debugfs /sys/kernel/debug",
"kvm_stat kvm statistics efer_reload 94 0 exits 4003074 31272 fpu_reload 1313881 10796 halt_exits 14050 259 halt_wakeup 4496 203 host_state_reload 1638354 24893 hypercalls 0 0 insn_emulation 1093850 1909 insn_emulation_fail 0 0 invlpg 75569 0 io_exits 1596984 24509 irq_exits 21013 363 irq_injections 48039 1222 irq_window 24656 870 largepages 0 0 mmio_exits 11873 0 mmu_cache_miss 42565 8 mmu_flooded 14752 0 mmu_pde_zapped 58730 0 mmu_pte_updated 6 0 mmu_pte_write 138795 0 mmu_recycled 0 0 mmu_shadow_zapped 40358 0 mmu_unsync 793 0 nmi_injections 0 0 nmi_window 0 0 pf_fixed 697731 3150 pf_guest 279349 0 remote_tlb_flush 5 0 request_irq 0 0 signal_exits 1 0 tlb_flush 200190 0"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-kvm_stat-script |
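Because the statistics are exposed as pseudo files under /sys/kernel/debug/kvm/, they can also be read directly, which is convenient for scripting. A sketch; the exact set of counter files varies with the kernel version:

ls /sys/kernel/debug/kvm/
cat /sys/kernel/debug/kvm/exits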
Chapter 22. Scheduler [config.openshift.io/v1] | Chapter 22. Scheduler [config.openshift.io/v1] Description Scheduler holds cluster-wide config information to run the Kubernetes Scheduler and influence its placement decisions. The canonical name for this config is cluster . Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 22.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 22.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description defaultNodeSelector string defaultNodeSelector helps set the cluster-wide default node selector to restrict pod placement to specific nodes. This is applied to the pods created in all namespaces and creates an intersection with any existing nodeSelectors already set on a pod, additionally constraining that pod's selector. For example, defaultNodeSelector: "type=user-node,region=east" would set nodeSelector field in pod spec to "type=user-node,region=east" to all pods created in all namespaces. Namespaces having project-wide node selectors won't be impacted even if this field is set. This adds an annotation section to the namespace. For example, if a new namespace is created with node-selector='type=user-node,region=east', the annotation openshift.io/node-selector: type=user-node,region=east gets added to the project. When the openshift.io/node-selector annotation is set on the project the value is used in preference to the value we are setting for defaultNodeSelector field. For instance, openshift.io/node-selector: "type=user-node,region=west" means that the default of "type=user-node,region=east" set in defaultNodeSelector would not be applied. mastersSchedulable boolean MastersSchedulable allows masters nodes to be schedulable. When this flag is turned on, all the master nodes in the cluster will be made schedulable, so that workload pods can run on them. The default value for this field is false, meaning none of the master nodes are schedulable. Important Note: Once the workload pods start running on the master nodes, extreme care must be taken to ensure that cluster-critical control plane components are not impacted. Please turn on this field after doing due diligence. policy object DEPRECATED: the scheduler Policy API has been deprecated and will be removed in a future release. policy is a reference to a ConfigMap containing scheduler policy which has user specified predicates and priorities. If this ConfigMap is not available scheduler will default to use DefaultAlgorithmProvider. 
The namespace for this configmap is openshift-config. profile string profile sets which scheduling profile should be set in order to configure scheduling decisions for new pods. Valid values are "LowNodeUtilization", "HighNodeUtilization", "NoScoring" Defaults to "LowNodeUtilization" 22.1.2. .spec.policy Description DEPRECATED: the scheduler Policy API has been deprecated and will be removed in a future release. policy is a reference to a ConfigMap containing scheduler policy which has user specified predicates and priorities. If this ConfigMap is not available scheduler will default to use DefaultAlgorithmProvider. The namespace for this configmap is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 22.1.3. .status Description status holds observed values from the cluster. They may not be overridden. Type object 22.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/schedulers DELETE : delete collection of Scheduler GET : list objects of kind Scheduler POST : create a Scheduler /apis/config.openshift.io/v1/schedulers/{name} DELETE : delete a Scheduler GET : read the specified Scheduler PATCH : partially update the specified Scheduler PUT : replace the specified Scheduler /apis/config.openshift.io/v1/schedulers/{name}/status GET : read status of the specified Scheduler PATCH : partially update status of the specified Scheduler PUT : replace status of the specified Scheduler 22.2.1. /apis/config.openshift.io/v1/schedulers Table 22.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Scheduler Table 22.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything.
labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 22.3. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Scheduler Table 22.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token.
If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 22.5. HTTP responses HTTP code Response body 200 - OK SchedulerList schema 401 - Unauthorized Empty HTTP method POST Description create a Scheduler Table 22.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request.
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 22.7. Body parameters Parameter Type Description body Scheduler schema Table 22.8. HTTP responses HTTP code Response body 200 - OK Scheduler schema 201 - Created Scheduler schema 202 - Accepted Scheduler schema 401 - Unauthorized Empty 22.2.2. /apis/config.openshift.io/v1/schedulers/{name} Table 22.9. Global path parameters Parameter Type Description name string name of the Scheduler Table 22.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a Scheduler Table 22.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be a non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. Zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 22.12.
Body parameters Parameter Type Description body DeleteOptions schema Table 22.13. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Scheduler Table 22.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 22.15. HTTP responses HTTP code Response body 200 - OK Scheduler schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Scheduler Table 22.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 22.17. Body parameters Parameter Type Description body Patch schema Table 22.18. HTTP responses HTTP code Response body 200 - OK Scheduler schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Scheduler Table 22.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled.
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 22.20. Body parameters Parameter Type Description body Scheduler schema Table 22.21. HTTP responses HTTP code Response body 200 - OK Scheduler schema 201 - Created Scheduler schema 401 - Unauthorized Empty 22.2.3. /apis/config.openshift.io/v1/schedulers/{name}/status Table 22.22. Global path parameters Parameter Type Description name string name of the Scheduler Table 22.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified Scheduler Table 22.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 22.25. HTTP responses HTTP code Response body 200 - OK Scheduler schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Scheduler Table 22.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled.
- Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 22.27. Body parameters Parameter Type Description body Patch schema Table 22.28. HTTP responses HTTP code Response body 200 - OK Scheduler schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Scheduler Table 22.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 22.30. Body parameters Parameter Type Description body Scheduler schema Table 22.31. HTTP responses HTTP code Response body 200 - OK Scheduler schema 201 - Created Scheduler schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/config_apis/scheduler-config-openshift-io-v1
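As a practical illustration of the endpoints above, the cluster-scoped object is named cluster and can be modified with a merge patch. This sketch assumes cluster-admin privileges and simply flips the mastersSchedulable field described earlier:

oc patch scheduler cluster --type merge -p '{"spec":{"mastersSchedulable":true}}'
oc get scheduler cluster -o yaml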
20.16.9.7. PCI passthrough | 20.16.9.7. PCI passthrough A PCI network device (specified by the source element) is directly assigned to the guest virtual machine using generic device passthrough, after first optionally setting the device's MAC address to the configured value, and associating the device with an 802.1Qbh capable switch using an optionally specified virtualport element (see the examples of virtualport given above for type='direct' network devices). Note that - due to limitations in standard single-port PCI ethernet card driver design - only SR-IOV (Single Root I/O Virtualization) virtual function (VF) devices can be assigned in this manner; to assign a standard single-port PCI or PCIe ethernet card to a guest virtual machine, use the traditional hostdev device definition. Note that this "intelligent passthrough" of network devices is very similar to the functionality of a standard hostdev device, the difference being that this method allows specifying a MAC address and virtualport for the passed-through device. If these capabilities are not required, if you have a standard single-port PCI, PCIe, or USB network card that does not support SR-IOV (and hence would anyway lose the configured MAC address during reset after being assigned to the guest virtual machine domain), or if you are using a version of libvirt older than 0.9.11, you should use standard hostdev to assign the device to the guest virtual machine instead of <interface type='hostdev'/>. ... <devices> <interface type='hostdev'> <driver name='vfio'/> <source> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/> </source> <mac address='52:54:00:6d:90:02'/> <virtualport type='802.1Qbh'> <parameters profileid='finance'/> </virtualport> </interface> </devices> ... Figure 20.44. Devices - network interfaces - PCI passthrough | [
"<devices> <interface type='hostdev'> <driver name='vfio'/> <source> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/> </source> <mac address='52:54:00:6d:90:02'> <virtualport type='802.1Qbh'> <parameters profileid='finance'/> </virtualport> </interface> </devices>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sub-section-libvirt-dom-xml-devices-network-interfaces-pci-passthrough |
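To assign a VF this way, you first need its PCI address for the source element. A sketch of the typical host-side steps; the guest name and XML file name are placeholders, and the XML file would contain an interface definition like the one in the figure above:

lspci -nn | grep -i ethernet     # locate the VF's bus:slot.function
virsh nodedev-list --cap pci     # cross-check the device as seen by libvirt
virsh attach-device rhel6guest vf-interface.xml --config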
Chapter 6. September 2024 | Chapter 6. September 2024 6.1. Product-wide updates 6.1.1. Published blogs and resources Blog: Managing image mode for RHEL with Red Hat Insights by Shane McDowell (September 17, 2024) Blog: InterSystems IRIS operations made easy with Red Hat Insights by Jaylin Zhou (September 20, 2024) Partnership: IBM X-Force Threat Intelligence Index 2024 6.2. Red Hat Insights for Red Hat Enterprise Linux 6.2.1. Advisor New recommendations Red Hat Insights' advisor service now detects and recommends solutions for critical issues. Here is a list of newly released solutions: Leapp fails to upgrade RHEL 7 systems to RHEL 8 when the openssl11-libs package is installed from EPEL repository Kdump cannot save vmcore via remote target when the accelerated networking NIC is enabled on Azure Hyper-V systems The performance of the Satellite server degrades when there are too many host facts stored in the PostgreSQL database The GFS2 filesystem failed to stop because the default 60-second stop operation timeout is too short Tasks accessing the NFS filesystem hang due to a known issue in the kernel The system experiences decreased security due to an important security vulnerability in CUPS NFS clients slow down when NFS4 server is running with delegation enabled due to a known bug in the running kernel The Leapp upgrade fails when an entry in /etc/fstab is invalid on RHEL 7 The yum fails to install or update the pam package when /var/run is not a soft link or is not owned by root The host-metering client enters a failed state during client startup due to a corrupted write-ahead log The new kernel installation fails and initramfs does not get generated due to small /boot partition size 6.2.2. Drift Drift service discontinued As of September 30, 2024, the drift service, provided in Red Hat Insights for Red Hat Enterprise Linux, has been removed from the product. You can no longer access the drift service from the Hybrid Cloud Console or use the associated API endpoints. For more information about the discontinuation of the drift service, contact Red Hat customer service. 6.2.3. Insights image builder Image builder package recommendations powered by RHEL Lightspeed Image builder now analyzes the packages you have selected and recommends additional, relevant packages. Image builder is available in the Red Hat Insights preview environment. 6.2.4. Inventory Export your inventory as CSV and JSON files You can export your registered systems from inventory using our new export service. Create a request and download your inventory in either CSV or JSON formats. This feature is accessible through both the Red Hat Insights inventory UI and the Export service API, and adheres to the Role Based Access Control (RBAC) permissions you have configured. The export process runs asynchronously in the background. For more details on how to use this feature, visit our inventory product documentation or try it for yourself using preview mode: Viewing and managing system inventory Previewing Hybrid Cloud Console features 6.2.5. Malware detection service Review and set status for malware detection signature matches You can review and set the status for malware detection signature matches at both the system and signature levels. You can also remove irrelevant matches and information from your environment before viewing malware detection results. A new Total matches column is available. You can use this to view the number of matches on a system and the history of those matches.
Red Hat Insights retains matches indefinitely, providing you with a robust historical record. 6.2.6. Tasks Live connection status You might have experienced issues when executing task jobs, due to an inactive remote host configuration (RHC) connection. A live connection status is now provided so that you know to fix a connection before executing a job. 6.2.7. Vulnerability Migration of security data source from OVAL to CSAF/VEX Our Red Hat Product Security team is now publishing CSAF data with VEX files. For more information, see the following: CSAF VEX documents now generally available Security Data The vulnerability service is the first to migrate to CSAF and VEX across both the internal and external user base. The migration to CSAF and VEX continues to improve the accuracy of the vulnerability service and the performance of backend processing. Red Hat does not publish OVAL data files for future major RHEL releases, for example, version 10 and later. 6.3. Insights for OpenShift Container Platform Observability intelligence A development preview of incident detection is now available for OpenShift Container Platform. This feature will help you perform root cause analysis. It identifies incidents and initiates debugging. You can see a history of incidents, easily identify critical ones, and reduce the number of signals received while debugging cluster issues. Signals are system messages describing application and operating system activity. For more information about installation and features, see the following: How incident detection simplifies OpenShift observability | null | https://docs.redhat.com/en/documentation/red_hat_insights_overview/1-latest/html/release_notes/september-2024
Automatically installing RHEL | Automatically installing RHEL Red Hat Enterprise Linux 9 Deploying RHEL on one or more systems from a predefined configuration Red Hat Customer Content Services | [
"dmesg|tail",
"su -",
"dmesg|tail [288954.686557] usb 2-1.8: New USB device strings: Mfr=0, Product=1, SerialNumber=2 [288954.686559] usb 2-1.8: Product: USB Storage [288954.686562] usb 2-1.8: SerialNumber: 000000009225 [288954.712590] usb-storage 2-1.8:1.0: USB Mass Storage device detected [288954.712687] scsi host6: usb-storage 2-1.8:1.0 [288954.712809] usbcore: registered new interface driver usb-storage [288954.716682] usbcore: registered new interface driver uas [288955.717140] scsi 6:0:0:0: Direct-Access Generic STORAGE DEVICE 9228 PQ: 0 ANSI: 0 [288955.717745] sd 6:0:0:0: Attached scsi generic sg4 type 0 [288961.876382] sd 6:0:0:0: sdd Attached SCSI removable disk",
"dd if=/image_directory/image.iso of=/dev/device",
"dd if=/home/testuser/Downloads/rhel-9-x86_64-boot.iso of=/dev/sdd",
"diskutil list /dev/disk0 #: TYPE NAME SIZE IDENTIFIER 0: GUID_partition_scheme *500.3 GB disk0 1: EFI EFI 209.7 MB disk0s1 2: Apple_CoreStorage 400.0 GB disk0s2 3: Apple_Boot Recovery HD 650.0 MB disk0s3 4: Apple_CoreStorage 98.8 GB disk0s4 5: Apple_Boot Recovery HD 650.0 MB disk0s5 /dev/disk1 #: TYPE NAME SIZE IDENTIFIER 0: Apple_HFS YosemiteHD *399.6 GB disk1 Logical Volume on disk0s1 8A142795-8036-48DF-9FC5-84506DFBB7B2 Unlocked Encrypted /dev/disk2 #: TYPE NAME SIZE IDENTIFIER 0: FDisk_partition_scheme *8.1 GB disk2 1: Windows_NTFS SanDisk USB 8.1 GB disk2s1",
"diskutil unmountDisk /dev/disknumber Unmount of all volumes on disknumber was successful",
"sudo dd if= /Users/user_name/Downloads/rhel-9-x86_64-boot.iso of= /dev/rdisk2 bs= 512K status= progress",
"dnf install nfs-utils",
"/ exported_directory / clients",
"/rhel9-install *",
"systemctl start nfs-server.service",
"systemctl reload nfs-server.service",
"mkdir /mnt/rhel9-install/",
"mount -o loop,ro -t iso9660 /image_directory/image.iso /mnt/rhel9-install/",
"cp -r /mnt/rhel9-install/ /var/www/html/",
"systemctl start httpd.service",
"systemctl enable firewalld",
"systemctl start firewalld",
"firewall-cmd --add-port min_port - max_port /tcp --permanent firewall-cmd --add-service ftp --permanent",
"firewall-cmd --reload",
"mkdir /mnt/rhel9-install",
"mount -o loop,ro -t iso9660 /image-directory/image.iso /mnt/rhel9-install",
"mkdir /var/ftp/rhel9-install cp -r /mnt/rhel9-install/ /var/ftp/",
"restorecon -r /var/ftp/rhel9-install find /var/ftp/rhel9-install -type f -exec chmod 444 {} \\; find /var/ftp/rhel9-install -type d -exec chmod 755 {} \\;",
"systemctl start vsftpd.service",
"systemctl restart vsftpd.service",
"systemctl enable vsftpd",
"dnf install dhcp-server",
"option architecture-type code 93 = unsigned integer 16; subnet 192.168.124.0 netmask 255.255.255.0 { option routers 192.168.124.1 ; option domain-name-servers 192.168.124.1 ; range 192.168.124.100 192.168.124.200 ; class \"pxeclients\" { match if substring (option vendor-class-identifier, 0, 9) = \"PXEClient\"; next-server 192.168.124.2 ; if option architecture-type = 00:07 { filename \"redhat/EFI/BOOT/BOOTX64.EFI\"; } else { filename \"pxelinux/pxelinux.0\"; } } class \"httpclients\" { match if substring (option vendor-class-identifier, 0, 10) = \"HTTPClient\"; option vendor-class-identifier \"HTTPClient\"; filename \"http:// 192.168.124.2 /redhat/EFI/BOOT/BOOTX64.EFI\"; } }",
"systemctl enable --now dhcpd",
"dnf install dhcp-server",
"option dhcp6.bootfile-url code 59 = string; option dhcp6.vendor-class code 16 = {integer 32, integer 16, string}; subnet6 fd33:eb1b:9b36::/64 { range6 fd33:eb1b:9b36::64 fd33:eb1b:9b36::c8 ; class \"PXEClient\" { match substring (option dhcp6.vendor-class, 6, 9); } subclass \"PXEClient\" \"PXEClient\" { option dhcp6.bootfile-url \"tftp:// [fd33:eb1b:9b36::2] /redhat/EFI/BOOT/BOOTX64.EFI\"; } class \"HTTPClient\" { match substring (option dhcp6.vendor-class, 6, 10); } subclass \"HTTPClient\" \"HTTPClient\" { option dhcp6.bootfile-url \"http:// [fd33:eb1b:9b36::2] /redhat/EFI/BOOT/BOOTX64.EFI\"; option dhcp6.vendor-class 0 10 \"HTTPClient\"; } }",
"systemctl enable --now dhcpd6",
"IPv6_rpfilter=no",
"dnf install httpd",
"mkdir -p /var/www/html/redhat/",
"mkdir -p /var/www/html/redhat/iso/",
"mount -o loop,ro -t iso9660 path-to-RHEL-DVD.iso /var/www/html/redhat/iso",
"cp -r /var/www/html/redhat/iso/images /var/www/html/redhat/ cp -r /var/www/html/redhat/iso/EFI /var/www/html/redhat/",
"chmod 644 /var/www/html/redhat/EFI/BOOT/grub.cfg",
"set default=\"1\" function load_video { insmod efi_gop insmod efi_uga insmod video_bochs insmod video_cirrus insmod all_video } load_video set gfxpayload=keep insmod gzio insmod part_gpt insmod ext2 set timeout=60 # END /etc/grub.d/00_header # search --no-floppy --set=root -l ' RHEL-9-3-0-BaseOS-x86_64 ' # BEGIN /etc/grub.d/10_linux # menuentry 'Install Red Hat Enterprise Linux 9.3 ' --class fedora --class gnu-linux --class gnu --class os { linuxefi ../../images/pxeboot/vmlinuz inst.repo=http:// 192.168.124.2 /redhat/iso quiet initrdefi ../../images/pxeboot/initrd.img } menuentry 'Test this media & install Red Hat Enterprise Linux 9.3 ' --class fedora --class gnu-linux --class gnu --class os { linuxefi ../../images/pxeboot/vmlinuz inst.repo=http:// 192.168.124.2 /redhat/iso quiet initrdefi ../../images/pxeboot/initrd.img } submenu 'Troubleshooting -->' { menuentry 'Install Red Hat Enterprise Linux 9.3 in text mode' --class fedora --class gnu-linux --class gnu --class os { linuxefi ../../images/pxeboot/vmlinuz inst.repo=http:// 192.168.124.2 /redhat/iso inst.text quiet initrdefi ../../images/pxeboot/initrd.img } menuentry 'Rescue a Red Hat Enterprise Linux system' --class fedora --class gnu-linux --class gnu --class os { linuxefi ../../images/pxeboot/vmlinuz inst.repo=http:// 192.168.124.2 /redhat/iso inst.rescue quiet initrdefi ../../images/pxeboot/initrd.img } }",
"chmod 755 /var/www/html/redhat/EFI/BOOT/BOOTX64.EFI",
"firewall-cmd --zone public --add-port={80/tcp,67/udp,68/udp,546/udp,547/udp}",
"firewall-cmd --reload",
"systemctl enable --now httpd",
"chmod -cR u=rwX,g=rX,o=rX /var/www/html",
"restorecon -FvvR /var/www/html",
"dnf install dhcp-server",
"option architecture-type code 93 = unsigned integer 16; subnet 192.168.124.0 netmask 255.255.255.0 { option routers 192.168.124.1 ; option domain-name-servers 192.168.124.1 ; range 192.168.124.100 192.168.124.200 ; class \"pxeclients\" { match if substring (option vendor-class-identifier, 0, 9) = \"PXEClient\"; next-server 192.168.124.2 ; if option architecture-type = 00:07 { filename \"redhat/EFI/BOOT/BOOTX64.EFI\"; } else { filename \"pxelinux/pxelinux.0\"; } } class \"httpclients\" { match if substring (option vendor-class-identifier, 0, 10) = \"HTTPClient\"; option vendor-class-identifier \"HTTPClient\"; filename \"http:// 192.168.124.2 /redhat/EFI/BOOT/BOOTX64.EFI\"; } }",
"systemctl enable --now dhcpd",
"dnf install dhcp-server",
"option dhcp6.bootfile-url code 59 = string; option dhcp6.vendor-class code 16 = {integer 32, integer 16, string}; subnet6 fd33:eb1b:9b36::/64 { range6 fd33:eb1b:9b36::64 fd33:eb1b:9b36::c8 ; class \"PXEClient\" { match substring (option dhcp6.vendor-class, 6, 9); } subclass \"PXEClient\" \"PXEClient\" { option dhcp6.bootfile-url \"tftp:// [fd33:eb1b:9b36::2] /redhat/EFI/BOOT/BOOTX64.EFI\"; } class \"HTTPClient\" { match substring (option dhcp6.vendor-class, 6, 10); } subclass \"HTTPClient\" \"HTTPClient\" { option dhcp6.bootfile-url \"http:// [fd33:eb1b:9b36::2] /redhat/EFI/BOOT/BOOTX64.EFI\"; option dhcp6.vendor-class 0 10 \"HTTPClient\"; } }",
"systemctl enable --now dhcpd6",
"IPv6_rpfilter=no",
"dnf install tftp-server",
"firewall-cmd --add-service=tftp",
"mount -t iso9660 /path_to_image/name_of_image.iso /mount_point -o loop,ro",
"cp -pr /mount_point/AppStream/Packages/syslinux-tftpboot-version-architecture.rpm /my_local_directory",
"umount /mount_point",
"rpm2cpio syslinux-tftpboot-version-architecture.rpm | cpio -dimv",
"mkdir /var/lib/tftpboot/pxelinux",
"cp /my_local_directory/tftpboot/* /var/lib/tftpboot/pxelinux",
"mkdir /var/lib/tftpboot/pxelinux/pxelinux.cfg",
"default vesamenu.c32 prompt 1 timeout 600 display boot.msg label linux menu label ^Install system menu default kernel images/RHEL-9/vmlinuz append initrd=images/RHEL-9/initrd.img ip=dhcp inst.repo=http:// 192.168.124.2 /RHEL-9/x86_64/iso-contents-root/ label vesa menu label Install system with ^basic video driver kernel images/RHEL-9/vmlinuz append initrd=images/RHEL-9/initrd.img ip=dhcp inst.xdriver=vesa nomodeset inst.repo=http:// 192.168.124.2 /RHEL-9/x86_64/iso-contents-root/ label rescue menu label ^Rescue installed system kernel images/RHEL-9/vmlinuz append initrd=images/RHEL-9/initrd.img inst.rescue inst.repo=http:///192.168.124.2/RHEL-8/x86_64/iso-contents-root/ label local menu label Boot from ^local drive localboot 0xffff",
"mkdir -p /var/lib/tftpboot/pxelinux/images/RHEL-9/ cp /path_to_x86_64_images/pxeboot/{vmlinuz,initrd.img} /var/lib/tftpboot/pxelinux/images/RHEL-9/",
"systemctl enable --now tftp.socket",
"dnf install tftp-server",
"firewall-cmd --add-service=tftp",
"mount -t iso9660 /path_to_image/name_of_image.iso /mount_point -o loop,ro",
"mkdir /var/lib/tftpboot/redhat cp -r /mount_point/EFI /var/lib/tftpboot/redhat/ umount /mount_point",
"chmod -R 755 /var/lib/tftpboot/redhat/",
"set timeout=60 menuentry 'RHEL 9' { linux images/RHEL-9/vmlinuz ip=dhcp inst.repo=http:// 192.168.124.2 /RHEL-9/x86_64/iso-contents-root/ initrd images/RHEL-9/initrd.img }",
"mkdir -p /var/lib/tftpboot/images/RHEL-9/ cp /path_to_x86_64_images/pxeboot/{vmlinuz,initrd.img}/var/lib/tftpboot/images/RHEL-9/",
"systemctl enable --now tftp.socket",
"dnf install tftp-server dhcp-server",
"firewall-cmd --add-service=tftp",
"grub2-mknetdir --net-directory=/var/lib/tftpboot Netboot directory for powerpc-ieee1275 created. Configure your DHCP server to point to /boot/grub2/powerpc-ieee1275/core.elf",
"dnf install grub2-ppc64le-modules",
"set default=0 set timeout=5 echo -e \"\\nWelcome to the Red Hat Enterprise Linux 9 installer!\\n\\n\" menuentry 'Red Hat Enterprise Linux 9' { linux grub2-ppc64/vmlinuz ro ip=dhcp inst.repo=http:// 192.168.124.2 /RHEL-9/x86_64/iso-contents-root/ initrd grub2-ppc64/initrd.img }",
"mount -t iso9660 /path_to_image/name_of_iso/ /mount_point -o loop,ro",
"cp /mount_point/ppc/ppc64/{initrd.img,vmlinuz} /var/lib/tftpboot/grub2-ppc64/",
"subnet 192.168.0.1 netmask 255.255.255.0 { allow bootp; option routers 192.168.0.5; group { #BOOTP POWER clients filename \"boot/grub2/powerpc-ieee1275/core.elf\"; host client1 { hardware ethernet 01:23:45:67:89:ab; fixed-address 192.168.0.112; } } }",
"systemctl enable --now dhcpd",
"systemctl enable --now tftp.socket",
"mokutil --import /usr/share/doc/kernel-keys/USD(uname -r)/kernel-signing-ca.cer",
"mokutil --reset",
"rd.znet=qeth,0.0.0600,0.0.0601,0.0.0602,layer2=1,portno= <number>",
"rd.dasd=0.0.0200 rd.dasd=0.0.0202(ro),0.0.0203(ro:failfast),0.0.0205-0.0.0207",
"rd.zfcp=0.0.4000,0x5005076300C213e9,0x5022000000000000 rd.zfcp=0.0.4000",
"ro ramdisk_size=40000 cio_ignore=all,!condev inst.repo=http://example.com/path/to/repository rd.znet=qeth,0.0.0600,0.0.0601,0.0.0602,layer2=1,portno=0,portname=foo ip=192.168.17.115::192.168.17.254:24:foobar.systemz.example.com:enc600:none nameserver=192.168.17.1 rd.dasd=0.0.0200 rd.dasd=0.0.0202 rd.zfcp=0.0.4000,0x5005076300c213e9,0x5022000000000000 rd.zfcp=0.0.5000,0x5005076300dab3e9,0x5022000000000000 inst.ks=http://example.com/path/to/kickstart",
"images/kernel.img 0x00000000 images/initrd.img 0x02000000 images/genericdvd.prm 0x00010480 images/initrd.addrsize 0x00010408",
"qeth: SUBCHANNELS=\" read_device_bus_id , write_device_bus_id , data_device_bus_id \" lcs or ctc: SUBCHANNELS=\" read_device_bus_id , write_device_bus_id \"",
"SUBCHANNELS=\"0.0.f5f0,0.0.f5f1,0.0.f5f2\"",
"DNS=\"10.1.2.3:10.3.2.1\"",
"SEARCHDNS=\"subdomain.domain:domain\"",
"DASD=\"eb1c,0.0.a000-0.0.a003,eb10-eb14(diag),0.0.ab1c(ro:diag)\"",
"FCP_ n =\" device_bus_ID [ WWPN FCP_LUN ]\"",
"FCP_1=\"0.0.fc00 0x50050763050b073d 0x4020400100000000\" FCP_2=\"0.0.4000\"",
"inst.stage2=http://hostname/path_to_install_tree/ inst.stage2=http://hostname/path_to_install_tree/ inst.stage2=http://hostname/path_to_install_tree/",
"ro ramdisk_size=40000 cio_ignore=all,!condev CMSDASD=\"191\" CMSCONFFILE=\"redhat.conf\" inst.vnc inst.repo=http://example.com/path/to/dvd-contents",
"NETTYPE=\"qeth\" SUBCHANNELS=\"0.0.0600,0.0.0601,0.0.0602\" PORTNAME=\"FOOBAR\" PORTNO=\"0\" LAYER2=\"1\" MACADDR=\"02:00:be:3a:01:f3\" HOSTNAME=\"foobar.systemz.example.com\" IPADDR=\"192.168.17.115\" NETMASK=\"255.255.255.0\" GATEWAY=\"192.168.17.254\" DNS=\"192.168.17.1\" SEARCHDNS=\"systemz.example.com:example.com\" DASD=\"200-203\"",
"logon user here",
"cp ipl cms",
"query disk",
"cp query virtual storage",
"cp query virtual osa",
"cp query virtual dasd",
"cp query virtual fcp",
"dnf install pykickstart",
"ksvalidator -v RHEL9 /path/to/kickstart.ks",
"cat /root/anaconda-ks.cfg",
"dnf install pykickstart",
"ksvalidator -v RHEL9 /path/to/kickstart.ks",
"dnf install pykickstart",
"ksvalidator -v RHEL9 /path/to/kickstart.ks",
"dnf install nfs-utils",
"/ exported_directory / clients",
"/rhel9-install *",
"systemctl start nfs-server.service",
"systemctl reload nfs-server.service",
"dnf install httpd",
"dnf install httpd mod_ssl",
"systemctl start httpd.service",
"dnf install vsftpd",
"systemctl enable firewalld systemctl start firewalld",
"firewall-cmd --add-port min_port - max_port /tcp --permanent firewall-cmd --add-service ftp --permanent firewall-cmd --reload",
"restorecon -r /var/ftp/ your-kickstart-file.ks chmod 444 /var/ftp/ your-kickstart-file.ks",
"systemctl start vsftpd.service",
"systemctl restart vsftpd.service",
"systemctl enable vsftpd",
"lsblk -l -p -o name,rm,ro,hotplug,size,type,mountpoint,uuid",
"umount /dev/xyz",
"lsblk -l -p",
"e2label /dev/xyz OEMDRV",
"xfs_admin -L OEMDRV /dev/xyz",
"umount /dev/xyz",
"append initrd=initrd.img inst.ks=http://10.32.5.1/mnt/archive/RHEL-9/9.x/x86_64/kickstarts/ks.cfg",
"kernel vmlinuz inst.ks=http://10.32.5.1/mnt/archive/RHEL-9/9.x/x86_64/kickstarts/ks.cfg",
"cp link tcpmaint 592 592 acc 592 fm",
"ftp <host> (secure",
"cd / location/of/install-tree /images/ ascii get generic.prm (repl get redhat.exec (repl locsite fix 80 binary get kernel.img (repl get initrd.img (repl quit",
"VMUSER FILELIST A0 V 169 Trunc=169 Size=6 Line=1 Col=1 Alt=0 Cmd Filename Filetype Fm Format Lrecl Records Blocks Date Time REDHAT EXEC B1 V 22 1 1 4/15/10 9:30:40 GENERIC PRM B1 V 44 1 1 4/15/10 9:30:32 INITRD IMG B1 F 80 118545 2316 4/15/10 9:30:25 KERNEL IMG B1 F 80 74541 912 4/15/10 9:30:17",
"redhat",
"cp ipl DASD_device_number loadparm boot_entry_number",
"cp ipl eb1c loadparm 0",
"cp set loaddev portname WWPN lun LUN bootprog boot_entry_number",
"cp set loaddev portname 50050763 050b073d lun 40204011 00000000 bootprog 0",
"query loaddev",
"cp ipl FCP_device",
"cp ipl fc00",
"subscription-manager register --activationkey= <activation_key_name> --org= <organization_ID>",
"The system has been registered with id: 62edc0f8-855b-4184-b1b8-72a9dc793b96",
"subscription-manager syspurpose role --set \"VALUE\"",
"subscription-manager syspurpose role --set \"Red Hat Enterprise Linux Server\"",
"subscription-manager syspurpose role --list",
"subscription-manager syspurpose role --unset",
"subscription-manager syspurpose service-level --set \"VALUE\"",
"subscription-manager syspurpose service-level --set \"Standard\"",
"subscription-manager syspurpose service-level --list",
"subscription-manager syspurpose service-level --unset",
"subscription-manager syspurpose usage --set \"VALUE\"",
"subscription-manager syspurpose usage --set \"Production\"",
"subscription-manager syspurpose usage --list",
"subscription-manager syspurpose usage --unset",
"subscription-manager syspurpose --show",
"man subscription-manager",
"subscription-manager status +-------------------------------------------+ System Status Details +-------------------------------------------+ Overall Status: Current System Purpose Status: Matched",
"subscription-manager status +-------------------------------------------+ System Status Details +-------------------------------------------+ Overall Status: Disabled Content Access Mode is set to Simple Content Access. This host has access to content, regardless of subscription status. System Purpose Status: Disabled",
"CP ATTACH EB1C TO *",
"CP LINK RHEL7X 4B2E 4B2E MR DASD 4B2E LINKED R/W",
"cio_ignore -r device_number",
"cio_ignore -r 4b2e",
"chccwdev -e device_number",
"chccwdev -e 4b2e",
"cd /root # dasdfmt -b 4096 -d cdl -p /dev/disk/by-path/ccw-0.0.4b2e Drive Geometry: 10017 Cylinders * 15 Heads = 150255 Tracks I am going to format the device /dev/disk/by-path/ccw-0.0.4b2e in the following way: Device number of device : 0x4b2e Labelling device : yes Disk label : VOL1 Disk identifier : 0X4B2E Extent start (trk no) : 0 Extent end (trk no) : 150254 Compatible Disk Layout : yes Blocksize : 4096 --->> ATTENTION! <<--- All data of that device will be lost. Type \"yes\" to continue, no will leave the disk untouched: yes cyl 97 of 3338 |#----------------------------------------------| 2%",
"Rereading the partition table Exiting",
"fdasd -a /dev/disk/by-path/ccw-0.0.4b2e reading volume label ..: VOL1 reading vtoc ..........: ok auto-creating one partition for the whole disk writing volume label writing VTOC rereading partition table",
"machine_id=USD(cat /etc/machine-id) kernel_version=USD(uname -r) ls /boot/loader/entries/USDmachine_id-USDkernel_version.conf",
"title Red Hat Enterprise Linux (4.18.0-80.el8.s390x) 8.0 (Ootpa) version 4.18.0-80.el8.s390x linux /boot/vmlinuz-4.18.0-80.el8.s390x initrd /boot/initramfs-4.18.0-80.el8.s390x.img options root=/dev/mapper/vg_devel1-lv_root crashkernel=auto rd.dasd=0.0.0200 rd.dasd=0.0.0207 rd.lvm.lv=vg_devel1/lv_root rd.lvm.lv=vg_devel1/lv_swap cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0 id rhel-20181027190514-4.18.0-80.el8.s390x grub_users USDgrub_users grub_arg --unrestricted grub_class kernel",
"title Red Hat Enterprise Linux (4.18.0-80.el8.s390x) 8.0 (Ootpa) version 4.18.0-80.el8.s390x linux /boot/vmlinuz-4.18.0-80.el8.s390x initrd /boot/initramfs-4.18.0-80.el8.s390x.img options root=/dev/mapper/vg_devel1-lv_root crashkernel=auto rd.dasd=0.0.0200 rd.dasd=0.0.0207 rd.dasd=0.0.202b rd.lvm.lv=vg_devel1/lv_root rd.lvm.lv=vg_devel1/lv_swap cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0 id rhel-20181027190514-4.18.0-80.el8.s390x grub_users USDgrub_users grub_arg --unrestricted grub_class kernel",
"zipl -V Using config file '/etc/zipl.conf' Using BLS config file '/boot/loader/entries/4ab74e52867b4f998e73e06cf23fd761-4.18.0-80.el8.s390x.conf' Target device information Device..........................: 5e:00 Partition.......................: 5e:01 Device name.....................: dasda Device driver name..............: dasd DASD device number..............: 0201 Type............................: disk partition Disk layout.....................: ECKD/compatible disk layout Geometry - heads................: 15 Geometry - sectors..............: 12 Geometry - cylinders............: 13356 Geometry - start................: 24 File system block size..........: 4096 Physical block size.............: 4096 Device size in physical blocks..: 262152 Building bootmap in '/boot' Building menu 'zipl-automatic-menu' Adding #1: IPL section '4.18.0-80.el8.s390x' (default) initial ramdisk...: /boot/initramfs-4.18.0-80.el8.s390x.img kernel image......: /boot/vmlinuz-4.18.0-80.el8.s390x kernel parmline...: 'root=/dev/mapper/vg_devel1-lv_root crashkernel=auto rd.dasd=0.0.0200 rd.dasd=0.0.0207 rd.dasd=0.0.202b rd.lvm.lv=vg_devel1/lv_root rd.lvm.lv=vg_devel1/lv_swap cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0' component address: kernel image....: 0x00010000-0x0049afff parmline........: 0x0049b000-0x0049bfff initial ramdisk.: 0x004a0000-0x01a26fff internal loader.: 0x0000a000-0x0000cfff Preparing boot menu Interactive prompt......: enabled Menu timeout............: 5 seconds Default configuration...: '4.18.0-80.el8.s390x' Preparing boot device: dasda (0201). Syncing disks Done.",
"0.0.0207 0.0.0200 use_diag=1 readonly=1",
"cio_ignore -r device_number",
"cio_ignore -r 021a",
"echo add > /sys/bus/ccw/devices/ dasd-bus-ID /uevent",
"echo add > /sys/bus/ccw/devices/0.0.021a/uevent",
"machine_id=USD(cat /etc/machine-id) kernel_version=USD(uname -r) ls /boot/loader/entries/USDmachine_id-USDkernel_version.conf",
"title Red Hat Enterprise Linux (5.14.0-55.el9.s390x) 9.0 (Plow) version 5.14.0-55.el9.s390x linux /boot/vmlinuz-5.14.0-55.el9.s390x initrd /boot/initramfs-5.14.0-55.el9.s390x.img options root=/dev/mapper/vg_devel1-lv_root crashkernel=auto rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a000000000 rd.zfcp=0.0.fcd0,0x5105074308c2aee9,0x401040a000000000 rd.lvm.lv=vg_devel1/lv_root rd.lvm.lv=vg_devel1/lv_swap cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0 id rhel-20181027190514-5.14.0-55.el9.s390x grub_users USDgrub_users grub_arg --unrestricted grub_class kernel",
"title Red Hat Enterprise Linux (5.14.0-55.el9.s390x) 9.0 (Plow) version 5.14.0-55.el9.s390x linux /boot/vmlinuz-5.14.0-55.el9.s390x initrd /boot/initramfs-5.14.0-55.el9.s390x.img options root=/dev/mapper/vg_devel1-lv_root crashkernel=auto rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a000000000 rd.zfcp=0.0.fcd0,0x5105074308c2aee9,0x401040a000000000 rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a300000000 rd.zfcp=0.0.fcd0,0x5105074308c2aee9,0x401040a300000000 rd.lvm.lv=vg_devel1/lv_root rd.lvm.lv=vg_devel1/lv_swap cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0 id rhel-20181027190514-5.14.0-55.el9.s390x grub_users USDgrub_users grub_arg --unrestricted grub_class kernel",
"zipl -V Using config file '/etc/zipl.conf' Using BLS config file '/boot/loader/entries/4ab74e52867b4f998e73e06cf23fd761-5.14.0-55.el9.s390x.conf' Run /lib/s390-tools/zipl_helper.device-mapper /boot Target device information Device..........................: fd:00 Partition.......................: fd:01 Device name.....................: dm-0 Device driver name..............: device-mapper Type............................: disk partition Disk layout.....................: SCSI disk layout Geometry - start................: 2048 File system block size..........: 4096 Physical block size.............: 512 Device size in physical blocks..: 10074112 Building bootmap in '/boot/' Building menu 'zipl-automatic-menu' Adding #1: IPL section '5.14.0-55.el9.s390x' (default) kernel image......: /boot/vmlinuz-5.14.0-55.el9.s390x kernel parmline...: 'root=/dev/mapper/vg_devel1-lv_root crashkernel=auto rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a000000000 rd.zfcp=0.0.fcd0,0x5105074308c2aee9,0x401040a000000000 rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a300000000 rd.zfcp=0.0.fcd0,0x5105074308c2aee9,0x401040a300000000 rd.lvm.lv=vg_devel1/lv_root rd.lvm.lv=vg_devel1/lv_swap cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0' initial ramdisk...: /boot/initramfs-5.14.0-55.el9.s390x.img component address: kernel image....: 0x00010000-0x007a21ff parmline........: 0x00001000-0x000011ff initial ramdisk.: 0x02000000-0x028f63ff internal loader.: 0x0000a000-0x0000a3ff Preparing boot device: dm-0. Detected SCSI PCBIOS disk layout. Writing SCSI master boot record. Syncing disks Done.",
"0.0.fc00 0x5105074308c212e9 0x401040a000000000 0.0.fc00 0x5105074308c212e9 0x401040a100000000 0.0.fc00 0x5105074308c212e9 0x401040a300000000 0.0.fcd0 0x5105074308c2aee9 0x401040a000000000 0.0.fcd0 0x5105074308c2aee9 0x401040a100000000 0.0.fcd0 0x5105074308c2aee9 0x401040a300000000 0.0.4000 0.0.5000",
"zfcp_cio_free",
"zfcpconf.sh",
"lsmod | grep qeth qeth_l3 69632 0 qeth_l2 49152 1 qeth 131072 2 qeth_l3,qeth_l2 qdio 65536 3 qeth,qeth_l3,qeth_l2 ccwgroup 20480 1 qeth",
"modprobe qeth",
"cio_ignore -r read_device_bus_id,write_device_bus_id,data_device_bus_id",
"cio_ignore -r 0.0.f500,0.0.f501,0.0.f502",
"znetconf -u Scanning for network devices Device IDs Type Card Type CHPID Drv. ------------------------------------------------------------ 0.0.f500,0.0.f501,0.0.f502 1731/01 OSA (QDIO) 00 qeth 0.0.f503,0.0.f504,0.0.f505 1731/01 OSA (QDIO) 01 qeth 0.0.0400,0.0.0401,0.0.0402 1731/05 HiperSockets 02 qeth",
"znetconf -a f500 Scanning for network devices Successfully configured device 0.0.f500 (encf500)",
"znetconf -a f500 -o portname=myname Scanning for network devices Successfully configured device 0.0.f500 (encf500)",
"echo read_device_bus_id,write_device_bus_id,data_device_bus_id > /sys/bus/ccwgroup/drivers/qeth/group",
"echo 0.0.f500,0.0.f501,0.0.f502 > /sys/bus/ccwgroup/drivers/qeth/group",
"ls /sys/bus/ccwgroup/drivers/qeth/0.0.f500",
"echo 1 > /sys/bus/ccwgroup/drivers/qeth/0.0.f500/online",
"cat /sys/bus/ccwgroup/drivers/qeth/0.0.f500/online 1",
"cat /sys/bus/ccwgroup/drivers/qeth/0.0.f500/if_name encf500",
"lsqeth encf500 Device name : encf500 ------------------------------------------------- card_type : OSD_1000 cdev0 : 0.0.f500 cdev1 : 0.0.f501 cdev2 : 0.0.f502 chpid : 76 online : 1 portname : OSAPORT portno : 0 state : UP (LAN ONLINE) priority_queueing : always queue 0 buffer_count : 16 layer2 : 1 isolation : none",
"cd /etc/NetworkManager/system-connections/ cp enc9a0.nmconnection enc600.nmconnection",
"lsqeth -p devices CHPID interface cardtype port chksum prio-q'ing rtr4 rtr6 lay'2 cnt -------------------------- ----- ---------------- -------------- ---- ------ ---------- ---- ---- ----- ----- 0.0.09a0/0.0.09a1/0.0.09a2 x00 enc9a0 Virt.NIC QDIO 0 sw always_q_2 n/a n/a 1 64 0.0.0600/0.0.0601/0.0.0602 x00 enc600 Virt.NIC QDIO 0 sw always_q_2 n/a n/a 1 64",
"[connection] type=ethernet interface-name=enc600 [ipv4] address1=10.12.20.136/24,10.12.20.1 dns=10.12.20.53; method=manual [ethernet] mac-address=00:53:00:8f:fa:66",
"chown root:root /etc/NetworkManager/system-connections/enc600.nmconnection",
"nmcli connection reload",
"nmcli connection show enc600",
"cio_ignore -r read_device_bus_id,write_device_bus_id,data_device_bus_id",
"cio_ignore -r 0.0.0600,0.0.0601,0.0.0602",
"echo add > /sys/bus/ccw/devices/read-channel/uevent",
"echo add > /sys/bus/ccw/devices/0.0.0600/uevent",
"lsqeth",
"[ipv4] address1=10.12.20.136/24,10.12.20.1 [ipv6] address1=2001:db8:1::1,2001:db8:1::fffe",
"nmcli connection up enc600",
"ip addr show enc600 3: enc600: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 3c:97:0e:51:38:17 brd ff:ff:ff:ff:ff:ff 10.12.20.136/24 brd 10.12.20.1 scope global dynamic enc600 valid_lft 81487sec preferred_lft 81487sec inet6 1574:12:5:1185:3e97:eff:fe51:3817/64 scope global noprefixroute dynamic valid_lft 2591994sec preferred_lft 604794sec inet6 fe45::a455:eff:d078:3847/64 scope link valid_lft forever preferred_lft forever",
"ip route default via 10.12.20.136 dev enc600 proto dhcp src",
"ping -c 1 10.12.20.136 PING 10.12.20.136 (10.12.20.136) 56(84) bytes of data. 64 bytes from 10.12.20.136: icmp_seq=0 ttl=63 time=8.07 ms",
"machine_id=USD(cat /etc/machine-id) kernel_version=USD(uname -r) ls /boot/loader/entries/USDmachine_id-USDkernel_version.conf",
"root=10.16.105.196:/nfs/nfs_root cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0,portname=OSAPORT ip=10.16.105.197:10.16.105.196:10.16.111.254:255.255.248.0:nfs‐server.subdomain.domain:enc9a0:none rd_NO_LUKS rd_NO_LVM rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=us",
"dnf -y install kernel-64k",
"k=USD(echo /boot/vmlinuz*64k) grubby --set-default=USDk --update-kernel=USDk --args=\"crashkernel=2G-:640M\"",
"efibootmgr BootCurrent: 0000 Timeout: 5 seconds BootOrder: 0003,0004,0001,0000,0002,0005 Boot0000\\* Red Hat Enterprise Linux",
"efibootmgr -o 0000,0001,0002,0003,0004,0005",
"reboot",
"dnf erase kernel",
"getconf PAGESIZE 65536",
"free total used free shared buff/cache available Mem: 35756352 3677184 34774848 25792 237120 32079168 Swap: 6504384 0 6504384",
"subscription-manager unregister",
"%packages @^Infrastructure Server %end",
"%packages @X Window System @Desktop @Sound and Video %end",
"%packages sqlite curl aspell docbook* %end",
"%packages @module:stream/profile %end",
"%packages -@Graphical Administration Tools -autofs -ipa*compat %end",
"%packages --multilib --ignoremissing",
"%packages --multilib --default %end",
"%packages @Graphical Administration Tools --optional %end",
"%packages kernel-64k -kmod-kvdo -vdo -kernel %end",
"getconf PAGESIZE 65536",
"free total used free shared buff/cache available Mem: 35756352 3677184 34774848 25792 237120 32079168 Swap: 6504384 0 6504384",
"%pre --interpreter=/usr/libexec/platform-python -- Python script omitted -- %end",
"%pre --log=/tmp/ks-pre.log",
"%pre-install --interpreter=/usr/libexec/platform-python -- Python script omitted -- %end",
"%pre-install --log=/mnt/sysroot/root/ks-pre.log",
"%post --interpreter=/usr/libexec/platform-python -- Python script omitted -- %end",
"%post --interpreter=/usr/libexec/platform-python",
"%post --nochroot cp /etc/resolv.conf /mnt/sysroot/etc/resolv.conf %end",
"%post --log=/root/ks-post.log",
"%post --nochroot --log=/mnt/sysroot/root/ks-post.log",
"Start of the %post section with logging into /root/ks-post.log %post --log=/root/ks-post.log Mount an NFS share mkdir /mnt/temp mount -o nolock 10.10.0.2:/usr/new-machines /mnt/temp openvt -s -w -- /mnt/temp/runme umount /mnt/temp End of the %post section %end",
"%onerror --interpreter=/usr/libexec/platform-python",
"%addon com_redhat_kdump --enable --reserve-mb=auto %end",
"cdrom",
"cmdline",
"driverdisk [ partition |--source= url |--biospart= biospart ]",
"driverdisk --source=ftp://path/to/dd.img driverdisk --source=http://path/to/dd.img driverdisk --source=nfs:host:/path/to/dd.img",
"driverdisk LABEL = DD :/e1000.rpm",
"eula [--agreed]",
"firstboot OPTIONS",
"graphical [--non-interactive]",
"halt",
"harddrive OPTIONS",
"harddrive --partition=hdb2 --dir=/tmp/install-tree",
"liveimg --url= SOURCE [ OPTIONS ]",
"liveimg --url=file:///images/install/squashfs.img --checksum=03825f567f17705100de3308a20354b4d81ac9d8bed4bb4692b2381045e56197 --noverifyssl",
"logging OPTIONS",
"mediacheck",
"nfs OPTIONS",
"nfs --server=nfsserver.example.com --dir=/tmp/install-tree",
"ostreesetup --osname= OSNAME [--remote= REMOTE ] --url= URL --ref= REF [--nogpg]",
"ostreecontainer [--stateroot STATEROOT] --url URL [--transport TRANSPORT] [--remote REMOTE] [--no-signature-verification]",
"poweroff",
"reboot OPTIONS",
"shutdown",
"sshpw --username= name [ OPTIONS ] password",
"python3 -c 'import crypt,getpass;pw=getpass.getpass();print(crypt.crypt(pw) if (pw==getpass.getpass(\"Confirm: \")) else exit())'",
"sshpw --username= example_username example_password --plaintext sshpw --username=root example_password --lock",
"sshpw --username=root example_password --lock",
"text [--non-interactive]",
"url --url= FROM [ OPTIONS ]",
"url --url=http:// server / path",
"url --url=ftp:// username : password @ server / path",
"vnc [--host= host_name ] [--port= port ] [--password= password ]",
"hmc",
"%include path/to/file",
"%ksappend path/to/file",
"authconfig [ OPTIONS ]",
"authselect [ OPTIONS ]",
"firewall --enabled|--disabled [ incoming ] [ OPTIONS ]",
"group --name= name [--gid= gid ]",
"keyboard --vckeymap|--xlayouts OPTIONS",
"keyboard --xlayouts=us,'cz (qwerty)' --switch=grp:alt_shift_toggle",
"lang language [--addsupport= language,... ]",
"lang en_US --addsupport=cs_CZ,de_DE,en_UK",
"lang en_US",
"module --name= NAME [--stream= STREAM ]",
"repo --name= repoid [--baseurl= url |--mirrorlist= url |--metalink= url ] [ OPTIONS ]",
"rootpw [--iscrypted|--plaintext] [--lock] password",
"python -c 'import crypt,getpass;pw=getpass.getpass();print(crypt.crypt(pw) if (pw==getpass.getpass(\"Confirm: \")) else exit())'",
"%post echo \"PermitRootLogin yes\" > /etc/ssh/sshd_config.d/01-permitrootlogin.conf %end",
"selinux [--disabled|--enforcing|--permissive]",
"services [--disabled= list ] [--enabled= list ]",
"services --disabled=auditd,cups,smartd,nfslock",
"services --disabled=auditd, cups, smartd, nfslock",
"skipx",
"sshkey --username= user \"ssh_key\"",
"syspurpose [ OPTIONS ]",
"syspurpose --role=\"Red Hat Enterprise Linux Server\"",
"timezone timezone [ OPTIONS ]",
"timesource [--ntp-server NTP_SERVER | --ntp-pool NTP_POOL | --ntp-disable] [--nts]",
"timezone Europe timesource --ntp-server 0.rhel.pool.ntp.org timesource --ntp-server 1.rhel.pool.ntp.org timesource --ntp-server 2.rhel.pool.ntp.org",
"user --name= username [ OPTIONS ]",
"python -c 'import crypt,getpass;pw=getpass.getpass();print(crypt.crypt(pw) if (pw==getpass.getpass(\"Confirm: \")) else exit())'",
"xconfig [--startxonboot]",
"network OPTIONS",
"network --bootproto=dhcp",
"network --bootproto=bootp",
"network --bootproto=ibft",
"network --bootproto=static --ip=10.0.2.15 --netmask=255.255.255.0 --gateway=10.0.2.254 --nameserver=10.0.2.1",
"network --bootproto=static --ip=10.0.2.15 --netmask=255.255.255.0 --gateway=10.0.2.254 --nameserver=192.168.2.1,192.168.3.1",
"network --bootproto=dhcp --device=em1",
"network --device ens3 --ipv4-dns-search domain1.example.com,domain2.example.com",
"network --device=bond0 --bondslaves=em1,em2",
"network --bondopts=mode=active-backup,balance-rr;primary=eth1",
"network --device=em1 --vlanid=171 --interfacename=vlan171",
"network --teamslaves=\"p3p1'{\\\"prio\\\": -10, \\\"sticky\\\": true}',p3p2'{\\\"prio\\\": 100}'\"",
"network --device team0 --activate --bootproto static --ip=10.34.102.222 --netmask=255.255.255.0 --gateway=10.34.102.254 --nameserver=10.34.39.2 --teamslaves=\"p3p1'{\\\"prio\\\": -10, \\\"sticky\\\": true}',p3p2'{\\\"prio\\\": 100}'\" --teamconfig=\"{\\\"runner\\\": {\\\"name\\\": \\\"activebackup\\\"}}\"",
"network --device=bridge0 --bridgeslaves=em1",
"realm join [ OPTIONS ] domain",
"ignoredisk --drives= drive1,drive2 ,... | --only-use= drive",
"ignoredisk --only-use=sda",
"ignoredisk --only-use=disk/by-id/dm-uuid-mpath-2416CD96995134CA5D787F00A5AA11017",
"ignoredisk --only-use==/dev/disk/by-id/dm-uuid-mpath-",
"bootloader --location=mbr",
"ignoredisk --drives=disk/by-id/dm-uuid-mpath-2416CD96995134CA5D787F00A5AA11017",
"part / --fstype=xfs --onpart=sda1",
"part / --fstype=xfs --onpart=/dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:0-part1 part / --fstype=xfs --onpart=/dev/disk/by-id/ata-ST3160815AS_6RA0C882-part1",
"clearpart OPTIONS",
"clearpart --drives=hda,hdb --all",
"clearpart --drives=disk/by-id/scsi-58095BEC5510947BE8C0360F604351918",
"clearpart --drives=disk/by-id/dm-uuid-mpath-2416CD96995134CA5D787F00A5AA11017",
"clearpart --initlabel --drives=names_of_disks",
"clearpart --initlabel --drives=dasda,dasdb,dasdc",
"clearpart --list=sda2,sda3,sdb1",
"part / --fstype=xfs --onpart=sda1",
"part / --fstype=xfs --onpart=/dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:0-part1 part / --fstype=xfs --onpart=/dev/disk/by-id/ata-ST3160815AS_6RA0C882-part1",
"zerombr",
"bootloader [ OPTIONS ]",
"bootloader --location=mbr --append=\"hdd=ide-scsi ide=nodma\"",
"%packages -plymouth %end",
"bootloader --driveorder=sda,hda",
"bootloader --iscrypted --password=grub.pbkdf2.sha512.10000.5520C6C9832F3AC3D149AC0B24BE69E2D4FB0DBEEDBD29CA1D30A044DE2645C4C7A291E585D4DC43F8A4D82479F8B95CA4BA4381F8550510B75E8E0BB2938990.C688B6F0EF935701FF9BD1A8EC7FE5BD2333799C98F28420C5CC8F1A2A233DE22C83705BB614EA17F3FDFDF4AC2161CEA3384E56EB38A2E39102F5334C47405E",
"part / --fstype=xfs --onpart=sda1",
"part / --fstype=xfs --onpart=/dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:0-part1 part / --fstype=xfs --onpart=/dev/disk/by-id/ata-ST3160815AS_6RA0C882-part1",
"autopart OPTIONS",
"reqpart [--add-boot]",
"part|partition mntpoint [ OPTIONS ]",
"swap --recommended",
"swap --hibernation",
"partition /home --onpart=hda1",
"partition pv.1 --onpart=hda2",
"partition pv.1 --onpart=hdb",
"part / --fstype=xfs --grow --asprimary --size=8192 --ondisk=disk/by-id/dm-uuid-mpath-2416CD96995134CA5D787F00A5AA11017",
"part /opt/foo1 --size=512 --fstype=ext4 --mkfsoptions=\"-O ^has_journal,^flex_bg,^metadata_csum\" part /opt/foo2 --size=512 --fstype=xfs --mkfsoptions=\"-m bigtime=0,finobt=0\"",
"part / --fstype=xfs --onpart=sda1",
"part / --fstype=xfs --onpart=/dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:0-part1 part / --fstype=xfs --onpart=/dev/disk/by-id/ata-ST3160815AS_6RA0C882-part1",
"raid mntpoint --level= level --device= device-name partitions*",
"part /opt/foo1 --size=512 --fstype=ext4 --mkfsoptions=\"-O ^has_journal,^flex_bg,^metadata_csum\" part /opt/foo2 --size=512 --fstype=xfs --mkfsoptions=\"-m bigtime=0,finobt=0\"",
"part raid.01 --size=6000 --ondisk=sda part raid.02 --size=6000 --ondisk=sdb part raid.03 --size=6000 --ondisk=sdc part swap --size=512 --ondisk=sda part swap --size=512 --ondisk=sdb part swap --size=512 --ondisk=sdc part raid.11 --size=1 --grow --ondisk=sda part raid.12 --size=1 --grow --ondisk=sdb part raid.13 --size=1 --grow --ondisk=sdc raid / --level=1 --device=rhel8-root --label=rhel8-root raid.01 raid.02 raid.03 raid /home --level=5 --device=rhel8-home --label=rhel8-home raid.11 raid.12 raid.13",
"volgroup name [ OPTIONS ] [ partition *]",
"volgroup rhel00 --useexisting --noformat",
"part pv.01 --size 10000 volgroup my_volgrp pv.01 logvol / --vgname=my_volgrp --size=2000 --name=root",
"logvol mntpoint --vgname= name --name= name [ OPTIONS ]",
"swap --recommended",
"swap --hibernation",
"part /opt/foo1 --size=512 --fstype=ext4 --mkfsoptions=\"-O ^has_journal,^flex_bg,^metadata_csum\" part /opt/foo2 --size=512 --fstype=xfs --mkfsoptions=\"-m bigtime=0,finobt=0\"",
"part pv.01 --size 3000 volgroup myvg pv.01 logvol / --vgname=myvg --size=2000 --name=rootvol",
"part pv.01 --size 1 --grow volgroup myvg pv.01 logvol / --vgname=myvg --name=rootvol --percent=90",
"snapshot vg_name/lv_name --name= snapshot_name --when= pre-install|post-install",
"mount [ OPTIONS ] device mountpoint",
"fcoe --nic= name [ OPTIONS ]",
"iscsi --ipaddr= address [ OPTIONS ]",
"iscsiname iqname",
"nvdimm action [ OPTIONS ]",
"nvdimm reconfigure [--namespace= NAMESPACE ] [--mode= MODE ] [--sectorsize= SECTORSIZE ]",
"nvdimm reconfigure --namespace=namespace0.0 --mode=sector --sectorsize=512",
"nvdimm reconfigure --namespace=namespace0.0 --mode=sector --sectorsize=512",
"nvdimm use [--namespace= NAMESPACE |--blockdevs= DEVICES ]",
"nvdimm use --namespace=namespace0.0",
"nvdimm use --blockdevs=pmem0s,pmem1s nvdimm use --blockdevs=pmem*",
"zfcp --devnum= devnum [--wwpn= wwpn --fcplun= lun ]",
"zfcp --devnum=0.0.4000 --wwpn=0x5005076300C213e9 --fcplun=0x5022000000000000 zfcp --devnum=0.0.4000",
"%addon com_redhat_kdump [ OPTIONS ] %end",
"%addon com_redhat_kdump --enable --reserve-mb=128 %end",
"%addon com_redhat_oscap key = value %end",
"%addon com_redhat_oscap content-type = scap-security-guide profile = xccdf_org.ssgproject.content_profile_pci-dss %end",
"%addon com_redhat_oscap content-type = datastream content-url = http://www.example.com/scap/testing_ds.xml datastream-id = scap_example.com_datastream_testing xccdf-id = scap_example.com_cref_xccdf.xml profile = xccdf_example.com_profile_my_profile fingerprint = 240f2f18222faa98856c3b4fc50c4195 %end",
"pwpolicy name [--minlen= length ] [--minquality= quality ] [--strict|--notstrict] [--emptyok|--notempty] [--changesok|--nochanges]",
"rescue [--nomount|--romount]",
"rescue [--nomount|--romount]",
"inst.stage2=https://hostname/path_to_install_image/ inst.noverifyssl",
"inst.repo=https://hostname/path_to_install_repository/ inst.noverifyssl",
"inst.stage2.all inst.stage2=http://hostname1/path_to_install_tree/ inst.stage2=http://hostname2/path_to_install_tree/ inst.stage2=http://hostname3/path_to_install_tree/",
"[PROTOCOL://][USERNAME[:PASSWORD]@]HOST[:PORT]",
"inst.nosave=Input_ks,logs",
"ifname=eth0:01:23:45:67:89:ab",
"vlan=vlan5:enp0s1",
"bond=bond0:enp0s1,enp0s2:mode=active-backup,tx_queues=32,downdelay=5000",
"team=team0:enp0s1,enp0s2",
"bridge=bridge0:enp0s1,enp0s2",
"modprobe.blacklist=ahci,firewire_ohci",
"modprobe.blacklist=virtio_blk"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html-single/automatically_installing_rhel/index |
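As an editor's illustration of the Kickstart validation workflow shown in the commands above (dnf install pykickstart, then ksvalidator), the following is a minimal Kickstart file sketch. It is an assumption-based example, not a file from this guide: the time zone, partitioning choices, and environment group are placeholders to replace for your own systems.

# minimal-ks.cfg -- illustrative sketch only
lang en_US.UTF-8
keyboard --vckeymap=us
timezone America/New_York
# lock the root account; add rootpw/user entries for real deployments
rootpw --lock
zerombr
clearpart --all --initlabel
autopart --type=lvm
reboot

%packages
@^minimal-environment
%end

Check the file against the RHEL 9 command set before use: ksvalidator -v RHEL9 minimal-ks.cfg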
Chapter 33. System Recovery | Chapter 33. System Recovery Red Hat Enterprise Linux 6 offers three system recovery modes, rescue mode, single-user mode, and emergency mode, that can be used to repair malfunctioning systems. This chapter describes how to boot into each system recovery mode and gives guidance to resolve certain problems that can only be solved with the help of system recovery modes. These are the usual reasons why you may need to boot to one of the system recovery modes: You are unable to boot normally into Red Hat Enterprise Linux (runlevel 3 or 5). You need to resolve hardware or software problems that cannot be resolved while the system is running normally, or you want to access some important files from your hard drive. You forgot the root password. Some of the problems behind these situations are further discussed in Section 33.4, "Resolving Problems in System Recovery Modes". 33.1. Rescue Mode Rescue mode provides the ability to boot a small Red Hat Enterprise Linux environment entirely from external media, such as a CD-ROM or USB drive, instead of the system's hard drive. It contains command-line utilities for repairing a wide variety of issues. In this mode, you can mount file systems as read-only or not mount them at all, blacklist or add drivers provided on a driver disc, install or upgrade system packages, or manage partitions. To boot into rescue mode, follow this procedure: Procedure 33.1. Booting into Rescue Mode Boot the system from either minimal boot media, or a full installation DVD or USB drive, and wait for the boot menu to appear. For details about booting the system from the chosen media, see the respective chapters in the Installation Guide. From the boot menu, append the rescue keyword as a kernel parameter to the boot command line. If your system requires a third-party driver provided on a driver disc to boot, append the additional option dd to the boot command line to load that driver: For more information about using a driver disc at boot time, see the respective chapters in the Installation Guide. If a driver that is part of the Red Hat Enterprise Linux 6 distribution prevents the system from booting, blacklist that driver by appending the rdblacklist option to the boot command line: Answer a few basic questions and select the location of a valid rescue image when you are prompted. Select the relevant type from Local CD-ROM, Hard Drive, NFS image, FTP, or HTTP. The selected location must contain a valid installation tree, and the installation tree must be for the same version of Red Hat Enterprise Linux as the disk from which you booted. For more information about how to set up an installation tree on a hard drive, NFS server, FTP server, or HTTP server, see the respective chapters in the Installation Guide. If you select a rescue image that does not require a network connection, you are asked whether or not you want to establish a network connection. A network connection is useful if you need to back up files to a different computer or install some RPM packages from a shared network location. The following message is displayed: If you select Continue, the system attempts to mount your root partition under the /mnt/sysimage/ directory. The root partition typically contains several file systems, such as /home/, /boot/, and /var/, which are automatically mounted to the correct locations. If mounting the partition fails, you will be notified.
If you select Read-Only, the system attempts to mount your file systems under the directory /mnt/sysimage/, but in read-only mode. If you select Skip, your file systems will not be mounted. Choose Skip if you think your file system is corrupted. Once you have your system in rescue mode, the following prompt appears on virtual console (VC) 1 and VC 2. Use the Ctrl-Alt-F1 key combination to access VC 1 and Ctrl-Alt-F2 to access VC 2: If you selected Continue to mount your partitions automatically and they were mounted successfully, you are in single-user mode. Even if your file system is mounted, the default root partition while in rescue mode is a temporary root partition, not the root partition of the file system used during normal user mode (runlevel 3 or 5). If you selected to mount your file system and it mounted successfully, you can change the root partition of the rescue mode environment to the root partition of your file system by executing the following command: This is useful if you need to run commands, such as rpm, that require your root partition to be mounted as /. To exit the chroot environment, type exit to return to the prompt. If you selected Skip, you can still try to mount a partition or an LVM2 logical volume manually inside rescue mode by creating a directory and typing the following command: where /directory is a directory that you have created and /dev/mapper/VolGroup00-LogVol02 is the LVM2 logical volume you want to mount. If the partition is of ext2 or ext3 type, replace ext4 with ext2 or ext3 respectively. If you do not know the names of all physical partitions, use the following command to list them: If you do not know the names of all LVM2 physical volumes, volume groups, or logical volumes, use the pvdisplay, vgdisplay, or lvdisplay commands, respectively. From the prompt, you can run many useful commands, such as: ssh, scp, and ping if the network is started; dump and restore for users with tape drives; parted and fdisk for managing partitions; rpm for installing or upgrading software; vi for editing text files | [
"rescue dd",
"rescue rdblacklist= driver_name",
"The rescue environment will now attempt to find your Linux installation and mount it under the directory /mnt/sysimage. You can then make any changes required to your system. If you want to proceed with this step choose 'Continue'. You can also choose to mount your file systems read-only instead of read-write by choosing 'Read-only'. If for some reason this process fails you can choose 'Skip' and this step will be skipped and you will go directly to a command shell.",
"sh-3.00b#",
"sh-3.00b# chroot /mnt/sysimage",
"sh-3.00b# mkdir / directory sh-3.00b# mount -t ext4 /dev/mapper/VolGroup00-LogVol02 /directory",
"sh-3.00b# fdisk -l"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/ch-system_recovery |
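One task the chapter lists as a reason for rescue mode, a forgotten root password, combines the commands above into a short session. This is a sketch that assumes you chose Continue so that the installed system is mounted under /mnt/sysimage; the prompt strings follow the chapter's examples:

sh-3.00b# chroot /mnt/sysimage
sh-3.00b# passwd root
sh-3.00b# exit
sh-3.00b# exit

The first exit leaves the chroot environment and the second leaves the rescue shell; passwd prompts twice for the new password. Reboot without the rescue media afterwards.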
Chapter 1. Introduction to Automation execution environments | Chapter 1. Introduction to Automation execution environments Using Ansible content that depends on non-default dependencies can be complicated because the packages must be installed on each node, interact with other software installed on the host system, and be kept in sync. Automation execution environments help simplify this process and can easily be created with Ansible Builder. 1.1. About automation execution environments All automation in Red Hat Ansible Automation Platform runs on container images called automation execution environments. Automation execution environments create a common language for communicating automation dependencies, and offer a standard way to build and distribute the automation environment. An automation execution environment should contain the following: Ansible Core 2.15 or later Python 3.8-3.11 Ansible Runner Ansible content collections and their dependencies System dependencies 1.1.1. Why use automation execution environments? With automation execution environments, Red Hat Ansible Automation Platform has transitioned to a distributed architecture by separating the control plane from the execution plane. Keeping automation execution independent of the control plane results in faster development cycles and improves scalability, reliability, and portability across environments. Red Hat Ansible Automation Platform also includes access to Ansible content tools, making it easy to build and manage automation execution environments. In addition to speed, portability, and flexibility, automation execution environments provide the following benefits: They ensure that automation runs consistently across multiple platforms and make it possible to incorporate system-level dependencies and collection-based content. They give Red Hat Ansible Automation Platform administrators the ability to provide and manage automation environments to meet the needs of different teams. They allow automation to be easily scaled and shared between teams by providing a standard way of building and distributing the automation environment. They enable automation teams to define, build, and update their automation environments themselves. Automation execution environments provide a common language to communicate automation dependencies. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/creating_and_consuming_execution_environments/assembly-intro-to-builder |
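Because the chapter points to Ansible Builder for creating execution environments, a minimal definition file can make the dependency layers concrete. The following is a hedged sketch using the ansible-builder version 3 schema; the base image tag, collection, and package names are assumed examples, not values prescribed by this guide:

# execution-environment.yml -- illustrative sketch
version: 3
images:
  base_image:
    name: registry.redhat.io/ansible-automation-platform-24/ee-minimal-rhel8:latest  # assumed base image
dependencies:
  galaxy:
    collections:
      - community.postgresql      # assumed collection example
  python:
    - pytz                        # assumed Python dependency
  system:
    - libpq-devel [platform:rpm]  # assumed system dependency

A build then looks like: ansible-builder build --tag my_ee:latest --file execution-environment.yml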
Chapter 27. Creating guided decision tables | Chapter 27. Creating guided decision tables You can use guided decision tables to define rule attributes, metadata, conditions, and actions in a tabular format that can be added to your business rules project. Procedure In Business Central, go to Menu Design Projects and click the project name. Click Add Asset Guided Decision Table . Enter an informative Guided Decision Table name and select the appropriate Package . The package that you specify must be the same package where the required data objects have been assigned or will be assigned. Select Use Wizard to finish setting up the table in the wizard, or leave this option unselected to finish creating the table and specify remaining configurations in the guided decision tables designer. Select the hit policy that you want your rows of rules in the table to conform to. For details, see Chapter 28, Hit policies for guided decision tables . Specify whether you want the Extended entry or Limited entry table. For details, see Section 28.1.1, "Types of guided decision tables" . Click Ok to complete the setup. If you have selected Use Wizard , the Guided Decision Table wizard is displayed. If you did not select the Use Wizard option, this prompt does not appear and you are taken directly to the table designer. For example, the following wizard setup is for a guided decision table in a loan application decision service: Figure 27.1. Create guided decision table If you are using the wizard, add any available imports, fact patterns, constraints, and actions, and select whether table columns should expand. Click Finish to close the wizard and view the table designer. Figure 27.2. Guided Decision Table wizard In the guided decision tables designer, you can add or edit columns and rows, and make other final adjustments. | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/guided-decision-tables-create-proc |
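For orientation, each populated row of a guided decision table compiles to an ordinary DRL rule behind the scenes. The sketch below shows the rough shape for the loan application example from the wizard; the fact type and field names (LoanApplication, amount, creditScore) are hypothetical stand-ins for your own data objects:

rule "Row 1 loan-approval-table"
when
    app : LoanApplication( amount <= 10000, creditScore >= 700 )
then
    app.setApproved( true );
end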
5.6. Configure CXF for a Web Service Data Source: SSL Support (HTTPS) | 5.6. Configure CXF for a Web Service Data Source: SSL Support (HTTPS) To use HTTPS, you can configure the CXF configuration file as shown below: For more information about http-conduit based configuration, see http://cxf.apache.org/docs/client-http-transport-including-ssl-support.html. You can also configure authentication for services such as HTTPBasic and Kerberos; a minimal HTTPBasic sketch follows after this section. | [
"<beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:sec=\"http://cxf.apache.org/configuration/security\" xmlns:http-conf=\"http://cxf.apache.org/transports/http/configuration\" xmlns:jaxws=\"http://java.sun.com/xml/ns/jaxws\" xsi:schemaLocation=\"http://cxf.apache.org/transports/http/configuration http://cxf.apache.org/schemas/configuration/http-conf.xsd http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd http://cxf.apache.org/configuration/security http://cxf.apache.org/schemas/configuration/security.xsd\"> <http-conf:conduit name=\"*.http-conduit\"> <http-conf:client ConnectionTimeout=\"120000\" ReceiveTimeout=\"240000\"/> <http-conf:tlsClientParameters secureSocketProtocol=\"SSL\"> <sec:trustManagers> <sec:keyStore type=\"JKS\" password=\"changeit\" file=\"/path/to/truststore.jks\"/> </sec:trustManagers> </http-conf:tlsClientParameters> </http-conf:conduit> </beans>"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/administration_and_configuration_guide/configure_cxf_for_a_web_service_data_source_ssl_support_https |
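As a sketch of the HTTPBasic case mentioned above, the same http-conduit element can carry an authorization block alongside (or instead of) the TLS parameters. The user name and password here are placeholders, and the snippet assumes the same http-conf: and sec: namespace declarations as the configuration file shown earlier:

<http-conf:conduit name="*.http-conduit">
    <http-conf:authorization>
        <sec:UserName>myuser</sec:UserName>
        <sec:Password>mypassword</sec:Password>
        <sec:AuthorizationType>Basic</sec:AuthorizationType>
    </http-conf:authorization>
</http-conf:conduit>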
4.78. gnome-terminal | 4.78. gnome-terminal 4.78.1. RHBA-2011:1172 - gnome-terminal bug fix update An updated gnome-terminal package that fixes one bug is now available for Red Hat Enterprise Linux 6. The gnome-terminal package contains a terminal emulator for GNOME. It supports translucent backgrounds, opening multiple terminals in a single window (tabs), and clickable URLs. Bug Fix BZ# 655132 Previously, the regular expression used to find URLs in the text was missing a colon character. As a consequence, URLs containing a colon were not interpreted correctly. With this update, a colon character has been added to the regular expression so that such URLs are now properly interpreted. All gnome-terminal users are advised to upgrade to this updated package, which fixes this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/gnome-terminal
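To see why the missing colon mattered, compare a simplified pattern with and without ':' in its character class. These expressions are illustrations only, not the actual pattern used by gnome-terminal:

$ echo 'see https://example.com:8443/path' | grep -oE 'https?://[[:alnum:]./_-]+'
https://example.com
$ echo 'see https://example.com:8443/path' | grep -oE 'https?://[[:alnum:]./:_-]+'
https://example.com:8443/path

Without the colon in the class, the match stops at the port separator, which is the same truncation the fixed regular expression avoided.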
Chapter 16. Enabling Red Hat build of Keycloak Metrics | Chapter 16. Enabling Red Hat build of Keycloak Metrics Red Hat build of Keycloak has built-in support for metrics. This chapter describes how to enable and configure server metrics. 16.1. Enabling Metrics It is possible to enable metrics using the build-time option metrics-enabled: bin/kc.[sh|bat] start --metrics-enabled=true 16.2. Querying Metrics Red Hat build of Keycloak exposes metrics at the following endpoint: /metrics The response from the endpoint uses an application/openmetrics-text content type and it is based on the Prometheus (OpenMetrics) text format. The snippet below is an example of a response: 16.3. Available Metrics The table below summarizes the available metrics groups: Metric Description System A set of system-level metrics related to CPU and memory usage. JVM A set of metrics from the Java Virtual Machine (JVM) related to GC and heap. Database A set of metrics from the database connection pool, if using a database. Cache A set of metrics from Infinispan caches. See Configuring distributed caches for more details. 16.4. Relevant options Value metrics-enabled 🛠 If the server should expose metrics. If enabled, metrics are available at the /metrics endpoint. CLI: --metrics-enabled Env: KC_METRICS_ENABLED true, false (default) | [
"bin/kc.[sh|bat] start --metrics-enabled=true",
"HELP base_gc_total Displays the total number of collections that have occurred. This attribute lists -1 if the collection count is undefined for this collector. TYPE base_gc_total counter base_gc_total{name=\"G1 Young Generation\",} 14.0 HELP jvm_memory_usage_after_gc_percent The percentage of long-lived heap pool used after the last GC event, in the range [0..1] TYPE jvm_memory_usage_after_gc_percent gauge jvm_memory_usage_after_gc_percent{area=\"heap\",pool=\"long-lived\",} 0.0 HELP jvm_threads_peak_threads The peak live thread count since the Java virtual machine started or peak was reset TYPE jvm_threads_peak_threads gauge jvm_threads_peak_threads 113.0 HELP agroal_active_count Number of active connections. These connections are in use and not available to be acquired. TYPE agroal_active_count gauge agroal_active_count{datasource=\"default\",} 0.0 HELP base_memory_maxHeap_bytes Displays the maximum amount of memory, in bytes, that can be used for memory management. TYPE base_memory_maxHeap_bytes gauge base_memory_maxHeap_bytes 1.6781410304E10 HELP process_start_time_seconds Start time of the process since unix epoch. TYPE process_start_time_seconds gauge process_start_time_seconds 1.675188449054E9 HELP system_load_average_1m The sum of the number of runnable entities queued to available processors and the number of runnable entities running on the available processors averaged over a period of time TYPE system_load_average_1m gauge system_load_average_1m 4.005859375"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/server_guide/configuration-metrics- |
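Once metrics are enabled, a quick smoke test and a Prometheus scrape job are typical next steps. The host names and port below are assumptions based on the default HTTP listener, not values mandated by this guide, and they presume HTTP access to the server is permitted in your configuration:

curl -s http://localhost:8080/metrics | head

# prometheus.yml fragment -- illustrative sketch
scrape_configs:
  - job_name: 'keycloak'
    metrics_path: '/metrics'
    static_configs:
      - targets: ['keycloak.example.com:8080']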
Chapter 8. Configuring virtual GPUs for instances | Chapter 8. Configuring virtual GPUs for instances To support GPU-based rendering on your instances, you can define and manage virtual GPU (vGPU) resources according to your available physical GPU devices and your hypervisor type. You can use this configuration to divide the rendering workloads between all your physical GPU devices more effectively, and to have more control over scheduling your vGPU-enabled instances. To enable vGPU in OpenStack Compute, create flavors that your cloud users can use to create Red Hat Enterprise Linux (RHEL) instances with vGPU devices. Each instance can then support GPU workloads with virtual GPU devices that correspond to the physical GPU devices. The OpenStack Compute service tracks the number of vGPU devices that are available for each GPU profile you define on each host. The Compute service schedules instances to these hosts based on the flavor, attaches the devices, and monitors usage on an ongoing basis. When an instance is deleted, the Compute service adds the vGPU devices back to the available pool. 8.1. Supported configurations and limitations Supported GPU cards For a list of supported NVIDIA GPU cards, see Virtual GPU Software Supported Products on the NVIDIA website. Limitations when using vGPU devices You can enable only one vGPU type on each Compute node. Each instance can use only one vGPU resource. Live migration of vGPU between hosts is not supported. Suspend operations on a vGPU-enabled instance are not supported due to a libvirt limitation. Instead, you can snapshot or shelve the instance. Resize and cold migration operations on an instance with a vGPU flavor do not automatically re-allocate the vGPU resources to the instance. After you resize or migrate the instance, you must rebuild it manually to re-allocate the vGPU resources. By default, vGPU types on Compute hosts are not exposed to API users. To grant access, add the hosts to a host aggregate. For more information, see Section 4.4, "Manage Host Aggregates". If you use NVIDIA accelerator hardware, you must comply with the NVIDIA licensing requirements. For example, NVIDIA vGPU GRID requires a licensing server. For more information about the NVIDIA licensing requirements, see NVIDIA License Server Release Notes on the NVIDIA website. 8.2. Configuring vGPU on the Compute nodes To enable your cloud users to create instances that use a virtual GPU (vGPU), you must configure the Compute nodes that have the physical GPUs: Build a custom GPU-enabled overcloud image. Prepare the GPU role, profile, and flavor for designating Compute nodes for vGPU. Configure the Compute node for vGPU. Deploy the overcloud. Note To use an NVIDIA GRID vGPU, you must comply with the NVIDIA GRID licensing requirements and you must have the URL of your self-hosted license server. For more information, see the NVIDIA License Server Release Notes web page. 8.2.1. Building a custom GPU overcloud image Perform the following steps on the director node to install the NVIDIA GRID host driver on an overcloud Compute image and upload the image to the OpenStack Image Service (glance). Procedure Copy the overcloud image and add the gpu suffix to the copied image. Install an ISO image generator tool from YUM. Download the NVIDIA GRID host driver RPM package that corresponds to your GPU device from the NVIDIA website. To determine which driver you need, see the NVIDIA Driver Downloads Portal.
Note You must be a registered NVIDIA customer to download the drivers from the portal. Create an ISO image from the driver RPM package and save the image in the nvidia-host directory. Create a driver installation script for your Compute nodes. This script installs the NVIDIA GRID host driver on each Compute node that you run it on. The following example creates a script named install_nvidia.sh : Customize the overcloud image by attaching the ISO image that you generated in Step 4, and running the driver installation script that you created in Step 5: Relabel the customized image with SELinux: Prepare the custom image files for upload to the OpenStack Image Service: From the undercloud, upload the custom image to the OpenStack Image Service: 8.2.2. Designating Compute nodes for vGPU To designate Compute nodes for vGPU workloads, you must create a new role file to configure the vGPU role, and configure a new flavor to use to tag the GPU-enabled Compute nodes. Procedure To create the new ComputeGPU role file, copy the file /usr/share/openstack-tripleo-heat-templates/roles/Compute.yaml to /usr/share/openstack-tripleo-heat-templates/roles/ComputeGPU.yaml and edit the following file sections: Table 8.1. ComputeGPU role file edits Section/Parameter Current value New value Role comment Role: Compute Role: ComputeGpu Role name name: Compute name: ComputeGpu description Basic Compute Node role GPU Compute Node role ImageDefault overcloud-full overcloud-full-gpu HostnameFormatDefault -compute- -computegpu- deprecated_nic_config_name compute.yaml compute-gpu.yaml Generate a new roles data file named gpu_roles_data.yaml that includes the Controller , Compute , and ComputeGpu roles. The following example shows the ComputeGpu role details: Register the node for the overcloud. For more information, see Registering nodes for the overcloud in the Director Installation and Usage guide. Inspect the node hardware. For more information, see Inspecting the hardware of nodes in the Director Installation and Usage guide. Create the compute-vgpu-nvidia flavor to use to tag nodes that you want to designate for vGPU workloads: Tag each node that you want to designate for GPU workloads with the compute-vgpu-nvidia profile. Replace <node> with the ID of the baremetal node. 8.2.3. Configuring the Compute node for vGPU and deploying the overcloud You need to retrieve and assign the vGPU type that corresponds to the physical GPU device in your environment, and prepare the environment files to configure the Compute node for vGPU. Procedure Install Red Hat Enterprise Linux and the NVIDIA GRID driver on a temporary Compute node and launch the node. For more information about installing the NVIDIA GRID driver, see Section 8.2.1, "Building a custom GPU overcloud image" . On the Compute node, locate the vGPU type of the physical GPU device that you want to enable. For libvirt, virtual GPUs are mediated devices, or mdev type devices. To discover the supported mdev devices, enter the following command: Add the compute-gpu.yaml file to the network-environment.yaml file: Add the following parameters to the node-info.yaml file to specify the number of GPU Compute nodes, and the flavor to use for the GPU-designated Compute nodes: Create a gpu.yaml file to specify the vGPU type of your GPU device: Note Each physical GPU supports only one virtual GPU type. If you specify multiple vGPU types in this property, only the first type is used. 
Deploy the overcloud, adding your new role and environment files to the stack along with your other environment files: 8.3. Creating the vGPU image and flavor To enable your cloud users to create instances that use a virtual GPU (vGPU), you can define a custom vGPU-enabled image, and you can create a vGPU flavor. 8.3.1. Creating a custom GPU instance image After you deploy the overcloud with GPU-enabled Compute nodes, you can create a custom vGPU-enabled instance image with the NVIDIA GRID guest driver and license file. Procedure Create an instance with the hardware and software profile that your vGPU instances require: Replace <flavor> with the name or ID of the flavor that has the hardware profile that your vGPU instances require. For information on default flavors, see Manage flavors . Replace <image> with the name or ID of the image that has the software profile that your vGPU instances require. For information on downloading RHEL cloud images, see Image service . Log in to the instance as a cloud-user. For more information, see Log in to an Instance . Create the gridd.conf NVIDIA GRID license file on the instance, following the NVIDIA guidance: Licensing an NVIDIA vGPU on Linux by Using a Configuration File . Install the GPU driver on the instance. For more information about installing an NVIDIA driver, see Installing the NVIDIA vGPU Software Graphics Driver on Linux . Note Use the hw_video_model image property to define the GPU driver type. You can choose none if you want to disable the emulated GPUs for your vGPU instances. For more information about supported drivers, see Appendix A, Image Configuration Parameters . Create an image snapshot of the instance: Optional: Delete the instance. 8.3.2. Creating a vGPU flavor for instances After you deploy the overcloud with GPU-enabled Compute nodes, you can create a custom flavor that your cloud users can use to launch instances for GPU workloads. Procedure Create an NVIDIA GPU flavor. For example: Assign a vGPU resource to the flavor that you created. You can assign only one vGPU for each instance. 8.3.3. Launching a vGPU instance You can create a GPU-enabled instance for GPU workloads. Procedure Create an instance using a GPU flavor and image. For example: Log in to the instance as a cloud-user. For more information, see Log in to an Instance . To verify that the GPU is accessible from the instance, run the following command from the instance: 8.4. Enabling PCI passthrough for a GPU device You can use PCI passthrough to attach a physical PCI device, such as a graphics card, to an instance. If you use PCI passthrough for a device, the instance reserves exclusive access to the device for performing tasks, and the device is not available to the host. Prerequisites The pciutils package is installed on the physical servers that have the PCI cards. The GPU driver is available to install on the GPU instances. For more information, see Section 8.2.1, "Building a custom GPU overcloud image" . Procedure To determine the vendor ID and product ID for each passthrough device type, run the following command on the physical server that has the PCI cards: For example, to determine the vendor and product ID for an NVIDIA GPU, run the following command: To configure the Controller node on the overcloud for PCI passthrough, create an environment file, for example, pci_passthru_controller.yaml . 
Add PciPassthroughFilter to the NovaSchedulerDefaultFilters parameter in pci_passthru_controller.yaml : To specify the PCI alias for the devices on the Controller node, add the following to pci_passthru_controller.yaml : Note If the nova-api service is running in a role other than the Controller, then replace ControllerExtraConfig with the user role, in the format <Role>ExtraConfig . To configure the Compute node on the overcloud for PCI passthrough, create an environment file, for example, pci_passthru_compute.yaml . To specify the allowed PCIs for the devices on the Compute node, add the following to pci_passthru_compute.yaml : To enable IOMMU in the server BIOS of the Compute nodes to support PCI passthrough, add the KernelArgs parameter to pci_passthru_compute.yaml : Deploy the overcloud, adding your custom environment files to the stack along with your other environment files: Configure a flavor to request the PCI devices. The following example requests two devices, each with a vendor ID of 10de and a product ID of 13f2 : Create an instance with a PCI passthrough device: Log in to the instance as a cloud-user. For more information, see Log in to an Instance . Install the GPU driver on the instance. For example, run the following script to install an NVIDIA driver: Verification To verify that the GPU is accessible from the instance, run the following command from the instance: To check the NVIDIA System Management Interface status, run the following command from the instance: Example output: | [
"cp overcloud-full.qcow2 overcloud-full-gpu.qcow2",
"sudo yum install genisoimage -y",
"genisoimage -o nvidia-host.iso -R -J -V NVIDIA nvidia-host/ I: -input-charset not specified, using utf-8 (detected in locale settings) 9.06% done, estimate finish Wed Oct 31 11:24:46 2018 18.08% done, estimate finish Wed Oct 31 11:24:46 2018 27.14% done, estimate finish Wed Oct 31 11:24:46 2018 36.17% done, estimate finish Wed Oct 31 11:24:46 2018 45.22% done, estimate finish Wed Oct 31 11:24:46 2018 54.25% done, estimate finish Wed Oct 31 11:24:46 2018 63.31% done, estimate finish Wed Oct 31 11:24:46 2018 72.34% done, estimate finish Wed Oct 31 11:24:46 2018 81.39% done, estimate finish Wed Oct 31 11:24:46 2018 90.42% done, estimate finish Wed Oct 31 11:24:46 2018 99.48% done, estimate finish Wed Oct 31 11:24:46 2018 Total translation table size: 0 Total rockridge attributes bytes: 358 Total directory bytes: 0 Path table size(bytes): 10 Max brk space used 0 55297 extents written (108 MB)",
"#/bin/bash NVIDIA GRID package mkdir /tmp/mount mount LABEL=NVIDIA /tmp/mount -ivh /tmp/mount/NVIDIA-vGPU-rhel-8.1-430.27.x86_64.rpm",
"virt-customize --attach nvidia-packages.iso -a overcloud-full-gpu.qcow2 -v --run install_nvidia.sh [ 0.0] Examining the guest libguestfs: launch: program=virt-customize libguestfs: launch: version=1.36.10rhel=8,release=6.el8_5.2,libvirt libguestfs: launch: backend registered: unix libguestfs: launch: backend registered: uml libguestfs: launch: backend registered: libvirt",
"virt-customize -a overcloud-full-gpu.qcow2 --selinux-relabel [ 0.0] Examining the guest [ 2.2] Setting a random seed [ 2.2] SELinux relabelling [ 27.4] Finishing off",
"mkdir /var/image/x86_64/image guestmount -a overcloud-full-gpu.qcow2 -i --ro image cp image/boot/vmlinuz-3.10.0-862.14.4.el8.x86_64 ./overcloud-full-gpu.vmlinuz cp image/boot/initramfs-3.10.0-862.14.4.el8.x86_64.img ./overcloud-full-gpu.initrd",
"(undercloud) USD openstack overcloud image upload --update-existing --os-image-name overcloud-full-gpu.qcow2",
"(undercloud) [stack@director templates]USD openstack overcloud roles generate -o /home/stack/templates/gpu_roles_data.yaml Controller Compute ComputeGpu",
"##################################################################### Role: ComputeGpu # ##################################################################### - name: ComputeGpu description: | GPU Compute Node role CountDefault: 1 ImageDefault: overcloud-full-gpu networks: - InternalApi - Tenant - Storage HostnameFormatDefault: '%stackname%-computegpu-%index%' RoleParametersDefault: TunedProfileName: \"virtual-host\" # Deprecated & backward-compatible values (FIXME: Make parameters consistent) # Set uses_deprecated_params to True if any deprecated params are used. uses_deprecated_params: True deprecated_param_image: 'NovaImage' deprecated_param_extraconfig: 'NovaComputeExtraConfig' deprecated_param_metadata: 'NovaComputeServerMetadata' deprecated_param_scheduler_hints: 'NovaComputeSchedulerHints' deprecated_param_ips: 'NovaComputeIPs' deprecated_server_resource_name: 'NovaCompute' deprecated_nic_config_name: 'compute-gpu.yaml' ServicesDefault: - OS::TripleO::Services::Aide - OS::TripleO::Services::AuditD - OS::TripleO::Services::BootParams - OS::TripleO::Services::CACerts - OS::TripleO::Services::CephClient - OS::TripleO::Services::CephExternal - OS::TripleO::Services::CertmongerUser - OS::TripleO::Services::Collectd - OS::TripleO::Services::ComputeCeilometerAgent - OS::TripleO::Services::ComputeNeutronCorePlugin - OS::TripleO::Services::ComputeNeutronL3Agent - OS::TripleO::Services::ComputeNeutronMetadataAgent - OS::TripleO::Services::ComputeNeutronOvsAgent - OS::TripleO::Services::Docker - OS::TripleO::Services::Fluentd - OS::TripleO::Services::IpaClient - OS::TripleO::Services::Ipsec - OS::TripleO::Services::Iscsid - OS::TripleO::Services::Kernel - OS::TripleO::Services::LoginDefs - OS::TripleO::Services::MetricsQdr - OS::TripleO::Services::MySQLClient - OS::TripleO::Services::NeutronBgpVpnBagpipe - OS::TripleO::Services::NeutronLinuxbridgeAgent - OS::TripleO::Services::NeutronVppAgent - OS::TripleO::Services::NovaCompute - OS::TripleO::Services::NovaLibvirt - OS::TripleO::Services::NovaLibvirtGuests - OS::TripleO::Services::NovaMigrationTarget - OS::TripleO::Services::ContainersLogrotateCrond - OS::TripleO::Services::OpenDaylightOvs - OS::TripleO::Services::Podman - OS::TripleO::Services::Rhsm - OS::TripleO::Services::RsyslogSidecar - OS::TripleO::Services::Securetty - OS::TripleO::Services::SensuClient - OS::TripleO::Services::Snmp - OS::TripleO::Services::Sshd - OS::TripleO::Services::Timesync - OS::TripleO::Services::Timezone - OS::TripleO::Services::TripleoFirewall - OS::TripleO::Services::TripleoPackages - OS::TripleO::Services::Tuned - OS::TripleO::Services::Vpp - OS::TripleO::Services::OVNController - OS::TripleO::Services::OVNMetadataAgent",
"(undercloud) [stack@director templates]USD openstack flavor create --id auto --ram 6144 --disk 40 --vcpus 4 compute-vgpu-nvidia +----------------------------+--------------------------------------+ | Field | Value | +----------------------------+--------------------------------------+ | OS-FLV-DISABLED:disabled | False | | OS-FLV-EXT-DATA:ephemeral | 0 | | disk | 40 | | id | 9cb47954-be00-47c6-a57f-44db35be3e69 | | name | compute-vgpu-nvidia | | os-flavor-access:is_public | True | | properties | | | ram | 6144 | | rxtx_factor | 1.0 | | swap | | | vcpus | 4 | +----------------------------+--------------------------------------+",
"(undercloud) [stack@director templates]USD openstack baremetal node set --property capabilities='profile:compute-vgpu-nvidia,boot_option:local' <node>",
"ls /sys/class/mdev_bus/0000\\:06\\:00.0/mdev_supported_types/ nvidia-11 nvidia-12 nvidia-13 nvidia-14 nvidia-15 nvidia-16 nvidia-17 nvidia-18 nvidia-19 nvidia-20 nvidia-21 nvidia-210 nvidia-22 cat /sys/class/mdev_bus/0000\\:06\\:00.0/mdev_supported_types/nvidia-18/description num_heads=4, frl_config=60, framebuffer=2048M, max_resolution=4096x2160, max_instance=4",
"resource_registry: OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/templates/nic-configs/compute.yaml OS::TripleO::ComputeGpu::Net::SoftwareConfig: /home/stack/templates/nic-configs/compute-gpu.yaml OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/templates/nic-configs/controller.yaml #OS::TripleO::AllNodes::Validation: OS::Heat::None",
"parameter_defaults: OvercloudControllerFlavor: control OvercloudComputeFlavor: compute OvercloudComputeGpuFlavor: compute-vgpu-nvidia ControllerCount: 1 ComputeCount: 0 ComputeGpuCount: 1",
"parameter_defaults: ComputeGpuExtraConfig: nova::compute::vgpu::enabled_vgpu_types: - nvidia-18",
"(undercloud) USD openstack overcloud deploy --templates -r /home/stack/templates/nvidia/gpu_roles_data.yaml -e /home/stack/templates/node-info.yaml -e /home/stack/templates/network-environment.yaml -e [your environment files] -e /home/stack/templates/gpu.yaml",
"(overcloud) [stack@director ~]USD openstack server create --flavor <flavor> --image <image> temp_vgpu_instance",
"(overcloud) [stack@director ~]USD openstack server image create --name vgpu_image temp_vgpu_instance",
"(overcloud) [stack@virtlab-director2 ~]USD openstack flavor create --vcpus 6 --ram 8192 --disk 100 m1.small-gpu +----------------------------+--------------------------------------+ | Field | Value | +----------------------------+--------------------------------------+ | OS-FLV-DISABLED:disabled | False | | OS-FLV-EXT-DATA:ephemeral | 0 | | disk | 100 | | id | a27b14dd-c42d-4084-9b6a-225555876f68 | | name | m1.small-gpu | | os-flavor-access:is_public | True | | properties | | | ram | 8192 | | rxtx_factor | 1.0 | | swap | | | vcpus | 6 | +----------------------------+--------------------------------------+",
"(overcloud) [stack@virtlab-director2 ~]USD openstack flavor set m1.small-gpu --property \"resources:VGPU=1\" (overcloud) [stack@virtlab-director2 ~]USD openstack flavor show m1.small-gpu +----------------------------+--------------------------------------+ | Field | Value | +----------------------------+--------------------------------------+ | OS-FLV-DISABLED:disabled | False | | OS-FLV-EXT-DATA:ephemeral | 0 | | access_project_ids | None | | disk | 100 | | id | a27b14dd-c42d-4084-9b6a-225555876f68 | | name | m1.small-gpu | | os-flavor-access:is_public | True | | properties | resources:VGPU='1' | | ram | 8192 | | rxtx_factor | 1.0 | | swap | | | vcpus | 6 | +----------------------------+--------------------------------------+",
"(overcloud) [stack@virtlab-director2 ~]USD openstack server create --flavor m1.small-gpu --image vgpu_image --security-group web --nic net-id=internal0 --key-name lambda vgpu-instance",
"lspci -nn | grep <gpu_name>",
"lspci -nn | grep -i <gpu_name>",
"lspci -nn | grep -i nvidia 3b:00.0 3D controller [0302]: NVIDIA Corporation TU104GL [Tesla T4] [10de:1eb8] (rev a1) d8:00.0 3D controller [0302]: NVIDIA Corporation TU104GL [Tesla T4] [10de:1db4] (rev a1)",
"parameter_defaults: NovaSchedulerDefaultFilters: ['RetryFilter','AvailabilityZoneFilter','ComputeFilter','ComputeCapabilitiesFilter','ImagePropertiesFilter','ServerGroupAntiAffinityFilter','ServerGroupAffinityFilter','PciPassthroughFilter','NUMATopologyFilter']",
"ControllerExtraConfig: nova::pci::aliases: - name: \"t4\" product_id: \"1eb8\" vendor_id: \"10de\" - name: \"v100\" product_id: \"1db4\" vendor_id: \"10de\"",
"parameter_defaults: NovaPCIPassthrough: - vendor_id: \"10de\" product_id: \"1eb8\"",
"parameter_defaults: ComputeParameters: KernelArgs: \"intel_iommu=on iommu=pt\"",
"(undercloud) USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/pci_passthru_controller.yaml -e /home/stack/templates/pci_passthru_compute.yaml",
"openstack flavor set m1.large --property \"pci_passthrough:alias\"=\"t4:2\"",
"openstack server create --flavor m1.large --image rhelgpu --wait test-pci",
"sh NVIDIA-Linux-x86_64-430.24-grid.run",
"lspci -nn | grep <gpu_name>",
"nvidia-smi",
"----------------------------------------------------------------------------- | NVIDIA-SMI 440.33.01 Driver Version: 440.33.01 CUDA Version: 10.2 | |------------------------------- ---------------------- ----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |=============================== ====================== ======================| | 0 Tesla T4 Off | 00000000:01:00.0 Off | 0 | | N/A 43C P0 20W / 70W | 0MiB / 15109MiB | 0% Default | ------------------------------- ---------------------- ---------------------- ----------------------------------------------------------------------------- | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | No running processes found | -----------------------------------------------------------------------------"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/instances_and_images_guide/ch-virtual_gpu |
Chapter 3. Creating and building an application using the web console | Chapter 3. Creating and building an application using the web console 3.1. Before you begin Review Accessing the web console . You must be able to access a running instance of OpenShift Container Platform. If you do not have access, contact your cluster administrator. 3.2. Logging in to the web console You can log in to the OpenShift Container Platform web console to access and manage your cluster. Prerequisites You must have access to an OpenShift Container Platform cluster. Procedure Log in to the OpenShift Container Platform web console using your login credentials. You are redirected to the Projects page. For non-administrative users, the default view is the Developer perspective. For cluster administrators, the default view is the Administrator perspective. If you do not have cluster-admin privileges, you will not see the Administrator perspective in your web console. The web console provides two perspectives: the Administrator perspective and Developer perspective. The Developer perspective provides workflows specific to the developer use cases. Figure 3.1. Perspective switcher Use the perspective switcher to switch to the Developer perspective. The Topology view with options to create an application is displayed. 3.3. Creating a new project A project enables a community of users to organize and manage their content in isolation. Projects are OpenShift Container Platform extensions to Kubernetes namespaces. Projects have additional features that enable user self-provisioning. Users must receive access to projects from administrators. Cluster administrators can allow developers to create their own projects. In most cases, users automatically have access to their own projects. Each project has its own set of objects, policies, constraints, and service accounts. Prerequisites You are logged in to the OpenShift Container Platform web console. You are in the Developer perspective. You have the appropriate roles and permissions in a project to create applications and other workloads in OpenShift Container Platform. Procedure In the +Add view, select Project Create Project . In the Name field, enter user-getting-started . Optional: In the Display name field, enter Getting Started with OpenShift . Note Display name and Description fields are optional. Click Create . You have created your first project on OpenShift Container Platform. Additional resources Default cluster roles Viewing a project using the web console Providing access permissions to your project using the Developer perspective Deleting a project using the web console 3.4. Granting view permissions OpenShift Container Platform automatically creates a few special service accounts in every project. The default service account takes responsibility for running the pods. OpenShift Container Platform uses and injects this service account into every pod that launches. The following procedure creates a RoleBinding object for the default ServiceAccount object. The service account communicates with the OpenShift Container Platform API to learn about pods, services, and resources within the project. Prerequisites You are logged in to the OpenShift Container Platform web console. You have a deployed image. You are in the Administrator perspective. Procedure Navigate to User Management and then click RoleBindings . Click Create binding . Select Namespace role binding (RoleBinding) . In the Name field, enter sa-user-account . 
In the Namespace field, search for and select user-getting-started . In the Role name field, search for view and select view . In the Subject field, select ServiceAccount . In the Subject namespace field, search for and select user-getting-started . In the Subject name field, enter default . Click Create . Additional resources Understanding authentication RBAC overview 3.5. Deploying your first image The simplest way to deploy an application in OpenShift Container Platform is to run an existing container image. The following procedure deploys a front end component of an application called national-parks-app . The web application displays an interactive map. The map displays the location of major national parks across the world. Prerequisites You are logged in to the OpenShift Container Platform web console. You are in the Developer perspective. You have the appropriate roles and permissions in a project to create applications and other workloads in OpenShift Container Platform. Procedure From the +Add view in the Developer perspective, click Container images to open a dialog. In the Image Name field, enter the following: quay.io/openshiftroadshow/parksmap:latest Ensure that you have the current values for the following: Application: national-parks-app Name: parksmap Select Deployment as the Resource . Select Create route to the application . In the Advanced Options section, click Labels and add labels to better identify this deployment later. Labels help identify and filter components in the web console and in the command line. Add the following labels: app=national-parks-app component=parksmap role=frontend Click Create . You are redirected to the Topology page where you can see the parksmap deployment in the national-parks-app application. Additional resources Creating applications using the Developer perspective Viewing a project using the web console Viewing the topology of your application Deleting a project using the web console 3.5.1. Examining the pod OpenShift Container Platform leverages the Kubernetes concept of a pod, which is one or more containers deployed together on one host, and the smallest compute unit that can be defined, deployed, and managed. Pods are the rough equivalent of a machine instance, physical or virtual, to a container. The Overview panel enables you to access many features of the parksmap deployment. The Details and Resources tabs enable you to scale application pods, check build status, services, and routes. Prerequisites You are logged in to the OpenShift Container Platform web console. You are in the Developer perspective. You have a deployed image. Procedure Click D parksmap in the Topology view to open the Overview panel. Figure 3.2. Parksmap deployment The Overview panel includes tabs for Details , Resources , and Observe . The Details tab might be displayed by default. Table 3.1. Overview panel tab definitions Tab Defintion Details Enables you to scale your application and view pod configuration such as labels, annotations, and the status of the application. Resources Displays the resources that are associated with the deployment. Pods are the basic units of OpenShift Container Platform applications. You can see how many pods are being used, what their status is, and you can view the logs. Services that are created for your pod and assigned ports are listed under the Services heading. Routes enable external access to the pods and a URL is used to access them. Observe View various Events and Metrics information as it relates to your pod. 
Additional resources Interacting with applications and components Scaling application pods and checking builds and routes Labels and annotations used for the Topology view 3.5.2. Scaling the application In Kubernetes, a Deployment object defines how an application deploys. In most cases, users use Pod , Service , ReplicaSets , and Deployment resources together. In most cases, OpenShift Container Platform creates the resources for you. When you deploy the national-parks-app image, a deployment resource is created. In this example, only one Pod is deployed. The following procedure scales the national-parks-image to use two instances. Prerequisites You are logged in to the OpenShift Container Platform web console. You are in the Developer perspective. You have a deployed image. Procedure In the Topology view, click the national-parks-app application. Click the Details tab. Use the up arrow to scale the pod to two instances. Figure 3.3. Scaling application Note Application scaling can happen quickly because OpenShift Container Platform is launching a new instance of an existing image. Use the down arrow to scale the pod down to one instance. Additional resources Recommended practices for scaling the cluster Understanding horizontal pod autoscalers About the Vertical Pod Autoscaler Operator 3.6. Deploying a Python application The following procedure deploys a back-end service for the parksmap application. The Python application performs 2D geo-spatial queries against a MongoDB database to locate and return map coordinates of all national parks in the world. The deployed back-end service that is nationalparks . Prerequisites You are logged in to the OpenShift Container Platform web console. You are in the Developer perspective. You have a deployed image. Procedure From the +Add view in the Developer perspective, click Import from Git to open a dialog. Enter the following URL in the Git Repo URL field: https://github.com/openshift-roadshow/nationalparks-py.git A builder image is automatically detected. Note If the detected builder image is Dockerfile, select Edit Import Strategy . Select Builder Image and then click Python . Scroll to the General section. Ensure that you have the current values for the following: Application: national-parks-app Name: nationalparks Select Deployment as the Resource . Select Create route to the application . In the Advanced Options section, click Labels and add labels to better identify this deployment later. Labels help identify and filter components in the web console and in the command line. Add the following labels: app=national-parks-app component=nationalparks role=backend type=parksmap-backend Click Create . From the Topology view, select the nationalparks application. Note Click the Resources tab. In the Builds section, you can see your build running. Additional resources Adding services to your application Importing a codebase from Git to create an application Viewing the topology of your application Providing access permissions to your project using the Developer perspective Deleting a project using the web console 3.7. Connecting to a database Deploy and connect a MongoDB database where the national-parks-app application stores location information. Once you mark the national-parks-app application as a backend for the map visualization tool, parksmap deployment uses the OpenShift Container Platform discover mechanism to display the map automatically. Prerequisites You are logged in to the OpenShift Container Platform web console. 
You are in the Developer perspective. You have a deployed image. Procedure From the +Add view in the Developer perspective, click Container images to open a dialog. In the Image Name field, enter quay.io/centos7/mongodb-36-centos7 . In the Runtime icon field, search for mongodb . Scroll down to the General section. Ensure that you have the current values for the following: Application: national-parks-app Name: mongodb-nationalparks Select Deployment as the Resource . Unselect the checkbox to Create route to the application . In the Advanced Options section, click Deployment to add environment variables to add the following environment variables: Table 3.2. Environment variable names and values Name Value MONGODB_USER mongodb MONGODB_PASSWORD mongodb MONGODB_DATABASE mongodb MONGODB_ADMIN_PASSWORD mongodb Click Create . Additional resources Adding services to your application Viewing a project using the web console Viewing the topology of your application Providing access permissions to your project using the Developer perspective Deleting a project using the web console 3.7.1. Creating a secret The Secret object provides a mechanism to hold sensitive information such as passwords, OpenShift Container Platform client configuration files, private source repository credentials, and so on. Secrets decouple sensitive content from the pods. You can mount secrets into containers using a volume plugin or the system can use secrets to perform actions on behalf of a pod. The following procedure adds the secret nationalparks-mongodb-parameters and mounts it to the nationalparks workload. Prerequisites You are logged in to the OpenShift Container Platform web console. You are in the Developer perspective. You have a deployed image. Procedure From the Developer perspective, navigate to Secrets on the left hand navigation and click Secrets . Click Create Key/value secret . In the Secret name field, enter nationalparks-mongodb-parameters . Enter the following values for Key and Value : Table 3.3. Secret keys and values Key Value MONGODB_USER mongodb DATABASE_SERVICE_NAME mongodb-nationalparks MONGODB_PASSWORD mongodb MONGODB_DATABASE mongodb MONGODB_ADMIN_PASSWORD mongodb Click Create . Click Add Secret to workload . From the drop down menu, select nationalparks as the workload to add. Click Save . This change in configuration triggers a new rollout of the nationalparks deployment with the environment variables properly injected. Additional resources Understanding secrets 3.7.2. Loading data and displaying the national parks map You deployed the parksmap and nationalparks applications and then deployed the mongodb-nationalparks database. However, no data has been loaded into the database. Before loading the data, add the proper labels to the mongodb-nationalparks and nationalparks deployment. Prerequisites You are logged in to the OpenShift Container Platform web console. You are in the Developer perspective. You have a deployed image. Procedure From the Topology view, navigate to nationalparks deployment and click Resources and retrieve your route information. Copy and paste the URL into your web browser and add the following at the end of the URL: /ws/data/load Example output Items inserted in database: 2893 From the Topology view, navigate to parksmap deployment and click Resources and retrieve your route information. Copy and paste the URL into your web browser to view your national parks across the world map. Figure 3.4. 
National parks across the world Additional resources Providing access permissions to your project using the Developer perspective Labels and annotations used for the Topology view | [
"/ws/data/load",
"Items inserted in database: 2893"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/getting_started/openshift-web-console |
Getting started with automation hub | Getting started with automation hub Red Hat Ansible Automation Platform 2.4 Configure Red Hat automation hub as your default server for Ansible collections content Red Hat Customer Content Services [email protected] | [
"curl https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token -d grant_type=refresh_token -d client_id=\"cloud-services\" -d refresh_token=\"{{ user_token }}\" --fail --silent --show-error --output /dev/null",
"[galaxy_server._<server_name>_]",
"https:// <server_fully_qualified_domain_name> /api/galaxy/",
"[galaxy] server_list = automation_hub, my_org_hub [galaxy_server.automation_hub] url=https://console.redhat.com/api/automation-hub/content/published/ 1 auth_url=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token token=my_ah_token [galaxy_server.my_org_hub] url=https://automation.my_org/api/galaxy/content/rh-certified/ 2 username=my_user password=my_pass",
"cd ansible-automation-platform-setup-bundle-<latest-version>",
"cd ansible-automation-platform-setup-<latest-version>",
"[all:vars] automationhub_enable_unauthenticated_collection_access = True 1 automationhub_enable_unauthenticated_collection_download = True 2",
"ansible-galaxy collection publish path/to/my_namespace-my_collection-1.0.0.tar.gz --api-key=SECRET"
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html-single/getting_started_with_automation_hub/index |
Chapter 3. Setting up cloud-init | Chapter 3. Setting up cloud-init Red Hat Enterprise Linux Atomic Host uses cloud-init to configure the system during installation and first-boot. Cloud-init was initially developed to provide early initialization of cloud instances. In Red Hat Enterprise Linux Atomic Host it can also be used for virtual machine installations. The files used by cloud-init are YAML formatted files. Note cloud-init is run only the first time that the machine is booted. If cloud-init fails because of syntax errors in the file or doesn't contain all of the needed directives, such as user credentials, a new instance must be created and launched. Restarting the failed instance with a new cloud-init file will not work. Here are some examples of how to do common tasks with cloud-init. How do I create users with cloud-init? To create users with cloud-init, you must create two files: meta-data and user-data , and then package them into an ISO image. Make a directory and move into it: Create a file called meta-data. Add the following to the file called meta-data: Create a file called user-data. Add the following to the file called user-data: Note: The final line of the user-data file above is an SSH public key. SSH public keys are found in ~/.ssh/id_rsa.pub . Create an ISO image that includes meta-data and user-data : A file named atomic0cidata.iso is generated. Attach this file to the machine on which you plan to install Red Hat Enterprise Linux Atomic Host, and your username will be "cloud-user" and your password will be "atomic". How do I expire the cloud-user's password so that the user must change it during their first login? To force "cloud-user" to change their password at first login, change the line chpasswd: {expire: False} to chpasswd: {expire: True} in the user-data file. This works because the password and chpasswd operate on the default user unless otherwise indicated. Note: This is a global setting. If you set this to True all users who are created (see below) will have to change their password. How do I change the default username? To change the default username from cloud-user to something else, add the line user: username to the user-data file: How do I set the root password? To set the root password you must create a user list in the chpasswd section of the user-data file. The format of the list is shown below. Whitespace is significant, so do not include any on either side of the colon ( : ) as it will set a password with a space in it. If you use this method to set the user passwords, all passwords must be set in this section. This means that the password: line must be moved from the top and into this section. How do I manage Red Hat subscriptions with cloud-init? The rh_subscription directive can be used to perform various operations concerning registering your system (for RHEL Atomic 7.4 and later). Following are a few examples showing different available options: Note that service-level is only used with the auto-attach option. Alternatively, you can use an activation key and org instead of username and password: There is also support for adding pools. The following is the equivalent of the subscription-manager attach --pool=XYZ01234567 command: You can set up the server hostname in /etc/rhsm/rhsm.conf with the following: How do I add more users during initial system configuration? How do I set additional user options? Users are created and described in the users section of the user-data file. 
Adding this section requires that options for the default user be set here as well. If the first entry in the users section is default , the default user, cloud-user will be created along with the other users. If the default line is omitted, then cloud-user is not created. Note: By default users will be labeled as unconfined_u if there is not an se-linux-user value. Note: This example places the user foobar into two groups: users and wheel . As of cloud-init 0.7.5, no whitespace is supported in the group list: BZ 1126365 How do I run first boot commands? The runcmd and bootcmd sections of the user-data file can be used to execute arbitrary commands during startup and initialization. The bootcmd section is run early in the initialization process. The runcmd section is executed near the end of the process by init. These commands are not saved for future boots and will only be executed during the first initialization-boot. How do I add additional sudoers? A user can be configured as a sudoer by adding a sudo and groups entry to the users section of the user-data file, as shown below. How do I set up a static networking configuration? Add a network-interfaces section to the meta-data file. This section contains the usual set of networking configuration options. Because of a current bug in cloud-init, static networking configurations are not automatically started. Instead the default DHCP configuration remains active. A suggested work around is to manually stop and restart the network interface via the bootcmd directive. How do I delete cloud-user and just have root and no other users? To have only a root user created, create an entry for root in the users section of the user-data file. This section can be as simple as just a name option: Optionally, you can set up SSH keys for the root user as follows: How do I set up storage with container-storage-setup? To set up the size of the root logical volume to 6GB for example instead of the default 3GB, use the write_files directive in user-data : Note Prior to RHEL 7.4, container-storage-setup was called docker-storage-setup . If you are using OverlayFS for storage, note that as of RHEL 7.4 you can now use that type of filesystem with SELinux in enforcing mode. How do I enable the Overlay Graph Driver? The Overlay Graph Driver is enabled through container-storage-setup . Use the runcmd directive to change the STORAGE_DRIVER option to "overlay2": Note Note that changing the backend storage driver is a destructive operation. Furthermore, OverlayFS is not POSIX-compliant and it can be used with restrictions. For more information, see RHEL 7.2 Release Notes . How do I re-run cloud-init on an instance? In most situations it is not possible to re-run cloud-init to change the configuration of a virtual machine that has already been created. When cloud-init is used in an environment where the Instance ID can be changed (for instance, from Atomic0 to Atomic1 ), it is possible to re-configure an existing virtual machine by changing the Instance ID and rebooting to re-run cloud-init . This is not recommended practice for production environments because cloud-init is supposed to be set up to create on first boot systems that are fully and properly configured. In most IAAS implementations it is not possible to change the Instance ID. If cloud-init must be re-run, the instance should be cloned in order to obtain a new Instance ID. Can I put shell scripts in bootcmd and runcmd? Yes. 
If you use a list value for bootcmd or runcmd , each list item is run in turn using execve . If you use a string value, then the entire string is run as a shell script. Alternatively, if you want simply to use cloud-init to run a shell script, you can provide a shell script (complete with shebang (#!) ) instead of providing cloud-init with a '.yaml' file. See this website for examples of how to put shell scripts in bootcmd and runcmd . | [
"mkdir cloudinitiso cd cloudinitiso",
"instance-id: Atomic0 local-hostname: atomic-00",
"#cloud-config password: atomic chpasswd: {expire: False} ssh_pwauth: True ssh_authorized_keys: - ssh-rsa AAA...SDvZ [email protected]",
"genisoimage -output atomic0cidata.iso -volid cidata -joliet -rock user-data meta-data",
"#cloud-config password: atomic chpasswd: {expire: True} ssh_pwauth: True ssh_authorized_keys: - ssh-rsa AAA...SDvz [email protected] - ssh-rsa AAB...QTuo [email protected]",
"#cloud-config user: username password: atomic chpasswd: {expire: False} ssh_pwauth: True ssh_authorized_keys: - ssh-rsa AAA...SDvz [email protected] - ssh-rsa AAB...QTuo [email protected]",
"#cloud-config ssh_pwauth: True ssh_authorized_keys: - ssh-rsa AAA...SDvz [email protected] - ssh-rsa AAB...QTuo [email protected] chpasswd: list: | root:password cloud-user:atomic expire: False",
"rh_subscription: username: [email protected] password: '<password>' auto-attach: True service-level: self-support",
"rh_subscription: activation-key: example_key org: 12345 auto-attach: True",
"rh_subscription: username: [email protected] password: '<password>' add-pool: XYZ01234567",
"rh_subscription: username: [email protected] password: '<password>' server-hostname: atomic.example.com auto-attach: True",
"#cloud-config users: - default - name: foobar gecos: User N. Ame selinux-user: staff_u groups: users,wheel ssh_pwauth: True ssh_authorized_keys: - ssh-rsa AA..vz [email protected] chpasswd: list: | root:password cloud-user:atomic foobar:foobar expire: False",
"#cloud-config users: - default - name: foobar gecos: User N. Ame groups: users chpasswd: list: | root:password fedora:atomic foobar:foobar expire: False bootcmd: - echo New MOTD >> /etc/motd runcmd: - echo New MOTD2 >> /etc/motd",
"#cloud-config users: - default - name: foobar gecos: User D. Two sudo: [\"ALL=(ALL) NOPASSWD:ALL\"] groups: wheel,adm,systemd-journal ssh_pwauth: True ssh_authorized_keys: - ssh-rsa AA...vz [email protected] chpasswd: list: | root:password cloud-user:atomic foobar:foobar expire: False",
"network-interfaces: | iface eth0 inet static address 192.168.1.10 network 192.168.1.0 netmask 255.255.255.0 broadcast 192.168.1.255 gateway 192.168.1.254 bootcmd: - ifdown eth0 - ifup eth0",
"users: - name: root chpasswd: list: | root:password expire: False",
"users: - name: root ssh_pwauth: True ssh_authorized_keys: - ssh-rsa AA..vz [email protected]",
"write_files: - path: /etc/sysconfig/docker-storage-setup permissions: 0644 owner: root content: | ROOT_SIZE=6G",
"runcmd: - echo \"STORAGE_DRIVER=overlay2\" >> /etc/sysconfig/docker-storage-setup"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/installation_and_configuration_guide/setting_up_cloud_init |
Chapter 3. Hammer reference

You can review the usage of Hammer statements. These usage statements are current to the versions of Hammer and its components released for Satellite 6.15.

3.1. hammer

Usage

hammer [OPTIONS] SUBCOMMAND [ARG] ...

Options

--[no-]use-defaults - Enable/disable stored defaults. Enabled by default
--autocomplete VALUE - Get list of possible endings
--csv - Output as CSV (same as --output=csv)
--csv-separator VALUE - Character to separate the values
--fetch-ca-cert VALUE - Fetch CA certificate from server and exit
--interactive BOOLEAN - Explicitly turn interactive mode on/off
--no-headers - Hide headers from output
--output ENUM - Set output format. Possible value(s): base, table, silent, csv, yaml, json
--output-file VALUE - Path to custom output file
--show-ids - Show ids of associated resources
--ssl-ca-file VALUE - Configure the file containing the CA certificates
--ssl-ca-path VALUE - Configure the directory containing the CA certificates
--ssl-client-cert VALUE - Configure the client's public certificate
--ssl-client-key VALUE - Configure the client's private key
--ssl-with-basic-auth - Use standard authentication in addition to client certificate authentication
--verify-ssl BOOLEAN - Configure SSL verification of remote system
--version - Show version
-c, --config VALUE - Path to custom config file
-d, --debug - Show debugging output
-h, --help - Print help
-p, --password VALUE - Password to access the remote system
-q, --quiet - Completely silent
-r, --reload-cache - Force reload of Apipie cache
-s, --server VALUE - Remote system address
-u, --username VALUE - Username to access the remote system
-v, --[no-]verbose - Be verbose (or not). True by default

3.2. activation-key

Manipulate activation keys

Usage

hammer activation-key [OPTIONS] SUBCOMMAND [ARG] ...

Options

-h, --help - Print help

3.2.1. activation-key add-host-collection

Associate a resource

Usage

hammer activation-key add-host-collection [OPTIONS]

Options

--host-collection VALUE - Host collection name to search by
--host-collection-id NUMBER - Id of the host collection
--id VALUE - ID of the activation key
--name VALUE - Activation key name to search by
--organization VALUE - Organization name to search by
--organization-id NUMBER - Organization ID
--organization-label VALUE - Organization label to search by
--organization-title VALUE - Organization title
-h, --help - Print help

3.2.2. activation-key add-subscription

Add subscription

Usage

hammer activation-key add-subscription [OPTIONS]

Options

--id NUMBER - ID of the activation key
--name VALUE - Activation key name to search by
--organization VALUE - Organization name to search by
--organization-id NUMBER
--organization-label VALUE - Organization label to search by
--organization-title VALUE - Organization title
--quantity NUMBER - Quantity of this subscription to add
--subscription VALUE - Subscription name to search by
--subscription-id NUMBER - Subscription identifier
--subscriptions SCHEMA - Array of subscriptions to add
-h, --help - Print help

The following parameters accept a format defined by their schema (bold are required; <> contains acceptable type; [] contains acceptable value):

--subscriptions - "id=<string>,quantity=<numeric>, ... "
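For example, the following command attaches two entitlements of a subscription to an existing key. The key name, organization, and subscription ID shown here are hypothetical placeholders, not values from your Satellite:

hammer activation-key add-subscription \
--name "ak-example" \
--organization "Example Org" \
--subscription-id 42 \
--quantity 2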
3.2.3. activation-key content-override

Override product content defaults

Usage

hammer activation-key content-override [OPTIONS]

Options

--content-label VALUE - Label of the content
--id NUMBER - ID of the activation key
--name VALUE - Activation key name to search by
--organization VALUE - Organization name to search by
--organization-id NUMBER
--organization-label VALUE - Organization label to search by
--organization-title VALUE - Organization title
--override-name VALUE - Override parameter key or name. To enable or disable a repo select enabled. Default value: enabled. Default: "enabled"
--remove - Remove a content override
--value VALUE - Override value. Note for repo enablement you can use a boolean value
-h, --help - Print help

3.2.4. activation-key copy

Copy an activation key

Usage

hammer activation-key copy [OPTIONS]

Options

--id NUMBER - ID of the activation key
--name VALUE - Activation key name to search by
--new-name VALUE - Name of new activation key
--organization VALUE - Organization name to search by
--organization-id NUMBER - Organization identifier
--organization-label VALUE - Organization label to search by
--organization-title VALUE - Organization title
-h, --help - Print help

3.2.5. activation-key create

Create an activation key

Usage

hammer activation-key create [OPTIONS]

Options

--auto-attach BOOLEAN - Auto attach subscriptions upon registration
--content-view VALUE - Content view name to search by
--content-view-id NUMBER - Content view id
--description VALUE - Description
--environment VALUE - Lifecycle environment name to search by (--environment is deprecated: Use --lifecycle-environment instead)
--environment-id NUMBER - (--environment-id is deprecated: Use --lifecycle-environment-id instead)
--lifecycle-environment VALUE - Lifecycle environment name to search by
--lifecycle-environment-id NUMBER - Environment id
--max-hosts NUMBER - Maximum number of registered content hosts
--name VALUE - Name
--organization VALUE - Organization name to search by
--organization-id NUMBER - Organization identifier
--organization-label VALUE - Organization label to search by
--organization-title VALUE - Organization title
--purpose-addons LIST - Sets the system add-ons
--purpose-role VALUE - Sets the system purpose role
--purpose-usage VALUE - Sets the system purpose usage
--release-version VALUE - Content release version
--service-level VALUE - Service level
--unlimited-hosts - Set hosts max to unlimited
-h, --help - Print help

3.2.6. activation-key delete

Destroy an activation key

Usage

hammer activation-key delete [OPTIONS]

Options

--id NUMBER - ID of the activation key
--name VALUE - Activation key name to search by
--organization VALUE - Organization name to search by
--organization-id NUMBER
--organization-label VALUE - Organization label to search by
--organization-title VALUE - Organization title
-h, --help - Print help

3.2.7. activation-key host-collections

List associated host collections

Usage

hammer activation-key host-collections [OPTIONS]

Options

--available-for VALUE - Interpret specified object to return only Host Collections that can be associated with specified object. The value host is supported.
--fields LIST - Show specified fields or predefined field sets only. (See below)
--full-result BOOLEAN - Whether or not to show all results
--host-id NUMBER - Filter products by host id
--id VALUE - ID of activation key
--name VALUE - Name of activation key
--organization VALUE - Organization name to search by
--organization-id NUMBER - Organization identifier
--organization-label VALUE - Organization label to search by
--organization-title VALUE - Organization title
--sort-by VALUE - Field to sort the results on
--sort-order VALUE - How to order the sorted results (e.g. ASC for ascending)
-h, --help - Print help

Table 3.1. Predefined field sets

FIELDS   ALL   DEFAULT   THIN
Id       x     x         x
Name     x     x         x
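For example, to list the host collections already associated with a key, a command along these lines can be used (the key and organization names are hypothetical):

hammer activation-key host-collections --name "ak-example" --organization "Example Org"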
3.2.8. activation-key info

Show an activation key

Usage

hammer activation-key info [OPTIONS]

Options

--fields LIST - Show specified fields or predefined field sets only. (See below)
--id NUMBER - ID of the activation key
--name VALUE - Activation key name to search by
--organization VALUE - Organization name to search by
--organization-id NUMBER - Organization identifier
--organization-label VALUE - Organization label to search by
--organization-title VALUE - Organization title
--show-hosts BOOLEAN - Show hosts associated to an activation key
-h, --help - Print help

Table 3.2. Predefined field sets

FIELDS                            ALL   DEFAULT   THIN
Name                              x     x         x
Id                                x     x         x
Description                       x     x
Host limit                        x     x
Auto attach                       x     x
Release version                   x     x
Lifecycle environment             x     x
Content view                      x     x
Associated hosts/id               x     x
Associated hosts/name             x     x
Host collections/id               x     x
Host collections/name             x     x
Content overrides/content label   x     x
Content overrides/name            x     x
Content overrides/value           x     x
System purpose/service level      x     x
System purpose/purpose usage      x     x
System purpose/purpose role       x     x
System purpose/purpose addons     x     x

3.2.9. activation-key list

List activation keys

Usage

hammer activation-key list [OPTIONS]

Options

--content-view VALUE - Content view name to search by
--content-view-id NUMBER - Content view identifier
--environment VALUE - Lifecycle environment name to search by (--environment is deprecated: Use --lifecycle-environment instead)
--environment-id NUMBER - (--environment-id is deprecated: Use --lifecycle-environment-id instead)
--fields LIST - Show specified fields or predefined field sets only. (See below)
--full-result BOOLEAN - Whether or not to show all results
--lifecycle-environment VALUE - Lifecycle environment name to search by
--lifecycle-environment-id NUMBER - Environment identifier
--name VALUE - Activation key name to filter by
--order VALUE - Sort field and order, e.g. id DESC
--organization VALUE - Organization name to search by
--organization-id NUMBER - Organization identifier
--organization-label VALUE - Organization label to search by
--organization-title VALUE - Organization title
--page NUMBER - Page number, starting at 1
--per-page NUMBER - Number of results per page to return
--search VALUE - Search string
-h, --help - Print help

Table 3.3. Predefined field sets

FIELDS                  ALL   DEFAULT   THIN
Id                      x     x         x
Name                    x     x         x
Host limit              x     x
Lifecycle environment   x     x
Content view            x     x

Search / Order fields

addon - string
content_view - string
content_view_id - integer
description - text
environment - string
name - string
organization_id - integer
role - string
subscription_id - string
subscription_name - string
usage - string
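For instance, to list only the keys whose name contains a given string, the name search field shown above can be combined with --search (the organization name and search pattern are hypothetical):

hammer activation-key list --organization "Example Org" --search "name ~ rhel"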
3.2.10. activation-key product-content

List associated products

Usage

hammer activation-key product-content [OPTIONS]

Options

--content-access-mode-all BOOLEAN - Get all content available, not just that provided by subscriptions
--content-access-mode-env BOOLEAN - Limit content to just that available in the activation key's content view version
--fields LIST - Show specified fields or predefined field sets only. (See below)
--full-result BOOLEAN - Whether or not to show all results
--id VALUE - ID of the activation key
--name VALUE - Activation key name to search by
--order VALUE - Sort field and order, e.g. id DESC
--organization VALUE - Organization name to search by
--organization-id NUMBER
--organization-label VALUE - Organization label to search by
--organization-title VALUE - Organization title
--page NUMBER - Page number, starting at 1
--per-page NUMBER - Number of results per page to return
--search VALUE - Search string
-h, --help - Print help

Table 3.4. Predefined field sets

FIELDS             ALL   DEFAULT   THIN
Id                 x     x         x
Name               x     x         x
Type               x     x
Url                x     x
Gpg key            x     x
Label              x     x
Default enabled?   x     x
Override           x     x

3.2.11. activation-key remove-host-collection

Disassociate a resource

Usage

hammer activation-key remove-host-collection [OPTIONS]

Options

--host-collection VALUE - Host collection name to search by
--host-collection-id NUMBER - Id of the host collection
--id VALUE - ID of the activation key
--name VALUE - Activation key name to search by
--organization VALUE - Organization name to search by
--organization-id NUMBER - Organization ID
--organization-label VALUE - Organization label to search by
--organization-title VALUE - Organization title
-h, --help - Print help

3.2.12. activation-key remove-subscription

Remove subscription

Usage

hammer activation-key remove-subscription [OPTIONS]

Options

--id NUMBER - ID of the activation key
--name VALUE - Activation key name to search by
--organization VALUE - Organization name to search by
--organization-id NUMBER
--organization-label VALUE - Organization label to search by
--organization-title VALUE - Organization title
--subscription-id VALUE - ID of subscription
-h, --help - Print help

3.2.13. activation-key subscriptions

List associated subscriptions

Usage

hammer activation-key subscriptions [OPTIONS]

Options

--activation-key VALUE - Activation key name to search by
--activation-key-id VALUE - Activation key ID
--available-for VALUE - Object to show subscriptions available for, either host or activation_key
--fields LIST - Show specified fields or predefined field sets only. (See below)
--full-result BOOLEAN - Whether or not to show all results
--host VALUE - Host name
--host-id VALUE - Id of a host
--id VALUE - ID of the activation key
--match-host BOOLEAN - Ignore subscriptions that are unavailable to the specified host
--match-installed BOOLEAN - Return subscriptions that match installed products of the specified host
--name VALUE - Activation key name to search by
--no-overlap BOOLEAN - Return subscriptions which do not overlap with a currently-attached subscription
--order VALUE - Sort field and order, e.g. id DESC
--organization VALUE - Organization name to search by
--organization-id NUMBER - Organization ID
--organization-label VALUE - Organization label to search by
--organization-title VALUE - Organization title
--page NUMBER - Page number, starting at 1
--per-page NUMBER - Number of results per page to return
--search VALUE - Search string
-h, --help - Print help

Table 3.5. Predefined field sets

FIELDS       ALL   DEFAULT
Id           x     x
Name         x     x
Attached     x     x
Quantity     x     x
Start date   x     x
End date     x     x
Support      x     x
Contract     x     x
Account      x     x
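For example, the subscriptions currently attached to a key can be reviewed with a command like the following (the key and organization names are placeholders):

hammer activation-key subscriptions --name "ak-example" --organization "Example Org"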
3.2.14. activation-key update

Update an activation key

Usage

hammer activation-key update [OPTIONS]

Options

--auto-attach BOOLEAN - Auto attach subscriptions upon registration
--content-view VALUE - Content view name to search by
--content-view-id NUMBER - Content view id
--description VALUE - Description
--environment VALUE - Lifecycle environment name to search by (--environment is deprecated: Use --lifecycle-environment instead)
--environment-id NUMBER - (--environment-id is deprecated: Use --lifecycle-environment-id instead)
--id NUMBER - ID of the activation key
--lifecycle-environment VALUE - Lifecycle environment name to search by
--lifecycle-environment-id NUMBER - Environment id
--max-hosts NUMBER - Maximum number of registered content hosts
--name VALUE - Name
--new-name VALUE - Name
--organization VALUE - Organization name to search by
--organization-id NUMBER - Organization identifier
--organization-label VALUE - Organization label to search by
--organization-title VALUE - Organization title
--purpose-addons LIST - Sets the system add-ons
--purpose-role VALUE - Sets the system purpose role
--purpose-usage VALUE - Sets the system purpose usage
--release-version VALUE - Content release version
--service-level VALUE - Service level
--unlimited-hosts - Set hosts max to unlimited
-h, --help - Print help

3.3. admin

Administrative server-side tasks

Usage

hammer admin [OPTIONS] SUBCOMMAND [ARG] ...

Options

-h, --help - Print help

3.3.1. admin logging

Logging verbosity level setup

Usage

hammer admin logging [OPTIONS]

Options

--no-backup - Skip configuration backups creation.
--prefix VALUE - Operate on prefixed environment (e.g. chroot).
-a, --all - Apply to all components.
-c, --components LIST - Components to apply, use --list to get them.
-d, --level-debug - Increase verbosity level to debug.
-h, --help - Print help
-l, --list - List available components.
-n, --dry-run - Do not apply specified changes.
-p, --level-production - Decrease verbosity level to standard.

3.4. alternate-content-source

Manipulate alternate content sources

Usage

hammer alternate-content-source [OPTIONS] SUBCOMMAND [ARG] ...

Options

-h, --help - Print help

3.4.1. alternate-content-source bulk

Modify alternate content sources in bulk

Usage

hammer alternate-content-source bulk [OPTIONS] SUBCOMMAND [ARG] ...

Options

-h, --help - Print help

3.4.1.1. alternate-content-source bulk destroy

Destroy alternate content sources

Usage

hammer alternate-content-source bulk destroy [OPTIONS]

Options

--ids LIST - List of alternate content source IDs
-h, --help - Print help

3.4.1.2. alternate-content-source bulk refresh

Refresh alternate content sources

Usage

hammer alternate-content-source bulk refresh [OPTIONS]

Options

--ids LIST - List of alternate content source IDs
-h, --help - Print help

3.4.1.3. alternate-content-source bulk refresh-all

Refresh all alternate content sources

Usage

hammer alternate-content-source bulk refresh-all [OPTIONS]

Options

-h, --help - Print help
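For example, every alternate content source can be refreshed at once, or a subset can be refreshed by passing their IDs as a comma-separated list (the IDs below are hypothetical):

hammer alternate-content-source bulk refresh-all
hammer alternate-content-source bulk refresh --ids 1,2,3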
--smart-proxies LIST --smart-proxy-ids LIST - Ids of capsules to associate --smart-proxy-names LIST - Names of capsules to associate --ssl-ca-cert-id NUMBER - Identifier of the content credential containing the SSL CA Cert --ssl-client-cert-id NUMBER - Identifier of the content credential containing the SSL Client Cert --ssl-client-key-id NUMBER - Identifier of the content credential containing the SSL Client Key --subpaths LIST - Path suffixes for finding alternate content --upstream-password VALUE - Basic authentication password --upstream-username VALUE - Basic authentication username --use-http-proxies BOOLEAN - If the capsules` assigned HTTP Proxies should be used --verify-ssl BOOLEAN - If SSL should be verified for the upstream URL -h , --help - Print help 3.4.3. alternate-content-source delete Destroy an alternate content source. Usage Options --id NUMBER - Alternate content source ID --name VALUE - Name to search by -h , --help - Print help 3.4.4. alternate-content-source info Show an alternate content source. Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id NUMBER - Alternate content source ID --name VALUE - Name to search by -h , --help - Print help Table 3.6. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Label x x Description x x Base url x x Content type x x Alternate content source type x x Upstream username x x Verify ssl x x Ssl ca cert/id x x Ssl ca cert/name x x Ssl client cert/id x x Ssl client cert/name x x Ssl client key/id x x Ssl client key/name x x Subpaths/ x x Products/id x x Products/organization id x x Products/name x x Products/label x x Smart proxies/id x x Smart proxies/name x x Smart proxies/url x x Smart proxies/download policy x x 3.4.5. alternate-content-source list List alternate content sources. Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --full-result BOOLEAN - Whether or not to show all results --name VALUE - Name of the alternate content source --order VALUE - Sort field and order, eg. id DESC --page NUMBER - Page number, starting at 1 --per-page NUMBER - Number of results per page to return --search VALUE - Search string -h , --help - Print help Table 3.7. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Type x x Search / Order fields alternate_content_source_type - string base_url - string content_type - string description - text label - string name - string product_id - integer product_name - string smart_proxy_id - integer smart_proxy_name - string subpath - string upstream_username - string 3.4.6. alternate-content-source refresh Refresh an alternate content source. Refreshing, like repository syncing, is required before using an alternate content source. Usage Options --async - Do not wait for the task --id NUMBER - Alternate content source ID --name VALUE - Name to search by -h , --help - Print help 3.4.7. alternate-content-source update Update an alternate content source. Usage Options --base-url VALUE - Base URL for finding alternate content --description VALUE - Description for the alternate content source --id NUMBER - Alternate content source ID --name VALUE - Name of the alternate content source --new-name VALUE - Name of the alternate content source --product-ids LIST - IDs of products to copy repository information from into a Simplified Alternate Content Source. Products must include at least one repository of the chosen content type. 
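As a sketch of the create-and-refresh workflow described above, a hypothetical sequence might be the following; the source name, product IDs, and capsule name are placeholders for values from your own environment:

hammer alternate-content-source create --name "Example ACS" --alternate-content-source-type simplified --content-type yum --product-ids 101,102 --smart-proxy-names capsule.example.com
hammer alternate-content-source refresh --name "Example ACS"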
3.4.7. alternate-content-source update

Update an alternate content source.

Usage

Options

--base-url VALUE - Base URL for finding alternate content
--description VALUE - Description for the alternate content source
--id NUMBER - Alternate content source ID
--name VALUE - Name of the alternate content source
--new-name VALUE - Name of the alternate content source
--product-ids LIST - IDs of products to copy repository information from into a Simplified Alternate Content Source. Products must include at least one repository of the chosen content type.
--products LIST
--smart-proxies LIST
--smart-proxy-ids LIST - Ids of capsules to associate
--smart-proxy-names LIST - Names of capsules to associate
--ssl-ca-cert-id NUMBER - Identifier of the content credential containing the SSL CA Cert
--ssl-client-cert-id NUMBER - Identifier of the content credential containing the SSL Client Cert
--ssl-client-key-id NUMBER - Identifier of the content credential containing the SSL Client Key
--subpaths LIST - Path suffixes for finding alternate content
--upstream-password VALUE - Basic authentication password
--upstream-username VALUE - Basic authentication username
--use-http-proxies BOOLEAN - If the capsules' assigned HTTP Proxies should be used
--verify-ssl BOOLEAN - If SSL should be verified for the upstream URL
-h, --help - Print help

3.5. ansible

Manage foreman ansible

Usage

Options

-h, --help - Print help

3.5.1. ansible inventory

Ansible Inventory

Usage

Options

-h, --help - Print help

3.5.1.1. ansible inventory hostgroups

Show Ansible inventory for hostgroups

Usage

Options

--as-json - Full response as json
--hostgroup-ids LIST - IDs of hostgroups included in inventory
--hostgroup-titles LIST
--hostgroups LIST
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
-h, --help - Print help

3.5.1.2. ansible inventory hosts

Show Ansible inventory for hosts

Usage

Options

--as-json - Full response as json
--host-ids LIST - IDs of hosts included in inventory
--hosts LIST
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
-h, --help - Print help

3.5.1.3. ansible inventory schedule

Schedule generating of Ansible Inventory report

Usage

Options

--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
--report-format ENUM - Report format, defaults to json Possible value(s): csv, json, yaml, html
-h, --help - Print help

3.5.2. ansible roles

Manage ansible roles

Usage

Options

-h, --help - Print help

3.5.2.1. ansible roles delete

Deletes Ansible role

Usage

Options

--id VALUE
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--name VALUE - Name to search by
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
-h, --help - Print help

3.5.2.2. ansible roles fetch

Fetch Ansible roles available to be synced

Usage

Options

--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
--proxy-id VALUE - Capsule to fetch from
-h, --help - Print help

3.5.2.3. ansible roles import

DEPRECATED: Import Ansible roles

Usage

Options

--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
--proxy-id VALUE - Capsule to import from
--role-names LIST - Ansible role names to be imported
-h, --help - Print help

3.5.2.4. ansible roles info

Show role

Usage

Options

--fields LIST - Show specified fields or predefined field sets only. (See below)
--id VALUE
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--name VALUE - Name to search by
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
-h, --help - Print help

Table 3.8. Predefined field sets

FIELDS ALL DEFAULT THIN
Id x x x
Name x x x
Imported at x x

3.5.2.5. ansible roles list

List Ansible roles

Usage

Options

--fields LIST - Show specified fields or predefined field sets only. (See below)
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--order VALUE - Sort and order by a searchable field, e.g. <field> DESC
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
--page NUMBER - Page number, starting at 1
--per-page VALUE - Number of results per page to return, all to return all results
--search VALUE - Filter results
-h, --help - Print help

Table 3.9. Predefined field sets

FIELDS ALL DEFAULT THIN
Id x x x
Name x x x
Imported at x x

Search / Order fields

host - string
host_id - integer
hostgroup - string
hostgroup_id - integer
id - integer
name - string
updated_at - datetime

3.5.2.6. ansible roles obsolete

DEPRECATED: Obsolete Ansible roles

Usage

Options

--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
--proxy-id VALUE - Capsule to import from
-h, --help - Print help

3.5.2.7. ansible roles play-hostgroups

Runs all Ansible roles on hostgroups

Usage

Options

--hostgroup-ids LIST - IDs of hostgroups to play roles on
--hostgroup-titles LIST
--hostgroups LIST
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
-h, --help - Print help

3.5.2.8. ansible roles play-hosts

Runs all Ansible roles on hosts

Usage

Options

--host-ids LIST - IDs of hosts to play roles on
--hosts LIST
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
-h, --help - Print help

3.5.2.9. ansible roles sync

Sync Ansible roles

Usage

Options

--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
--proxy-id VALUE - Capsule to sync from
--role-names LIST - Ansible role names to be synced
-h, --help - Print help

3.5.3. ansible variables

Manage ansible variables

Usage

Options

-h, --help - Print help

3.5.3.1. ansible variables add-matcher

Create an override value for a specific ansible variable

Usage

Options

--ansible-variable VALUE - Name to search by
--ansible-variable-id VALUE
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--match VALUE - Override match
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
--value VALUE - Override value, required if omit is false
-h, --help - Print help

3.5.3.2. ansible variables create

Create Ansible variable

Usage

Options

--ansible-role VALUE - Name to search by
--ansible-role-id NUMBER - Role ID
--avoid-duplicates BOOLEAN - Remove duplicate values (only array type)
--default-value VALUE - Default value of variable
--description VALUE - Description of variable
--hidden-value BOOLEAN - When enabled the parameter is hidden in the UI
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--merge-default BOOLEAN - Include default value when merging all matching values
--merge-overrides BOOLEAN - Merge all matching values (only array/hash type)
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
--override BOOLEAN - Whether to override variable or not
--override-value-order VALUE - The order in which values are resolved
--validator-rule VALUE - Used to enforce certain values for the parameter values
--validator-type ENUM - Types of validation values Possible value(s): regexp, list
--variable VALUE - Name of variable
--variable-type ENUM - Types of variable values Possible value(s): string, boolean, integer, real, array, hash, yaml, json
-h, --help - Print help

3.5.3.3. ansible variables delete

Deletes Ansible variable

Usage

Options

--id VALUE
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--name VALUE - Name to search by
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
-h, --help - Print help

3.5.3.4. ansible variables import

DEPRECATED: Import Ansible variables. This will only import variables for already existing roles, it will not import any new roles

Usage

Options

--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
--proxy-id VALUE - Capsule to import from
-h, --help - Print help
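As a hypothetical example of the create and add-matcher commands above (the role ID, variable name, matcher, and values are placeholders):

hammer ansible variables create --variable ntp_server --ansible-role-id 1 --variable-type string --default-value pool.ntp.org --override true
hammer ansible variables add-matcher --ansible-variable ntp_server --match "domain=example.com" --value ntp1.example.com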
3.5.3.5. ansible variables info

Show variable

Usage

Options

--fields LIST - Show specified fields or predefined field sets only. (See below)
--id VALUE
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--name VALUE - Name to search by
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
-h, --help - Print help

Table 3.10. Predefined field sets

FIELDS ALL DEFAULT
Id x x
Variable x x
Default value x x
Type x x
Role x x
Role id x x
Description x x
Hidden value? x x
Validator/type x x
Validator/rule x x
Override values/override x x
Override values/merge overrides x x
Override values/merge default value x x
Override values/avoid duplicates x x
Override values/order x x
Override values/values/id x x
Override values/values/match x x
Override values/values/value x x
Created at x x
Updated at x x

3.5.3.6. ansible variables list

List Ansible variables

Usage

Options

--fields LIST - Show specified fields or predefined field sets only. (See below)
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--order VALUE - Sort and order by a searchable field, e.g. <field> DESC
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
--page NUMBER - Page number, starting at 1
--per-page VALUE - Number of results per page to return, all to return all results
--search VALUE - Filter results
-h, --help - Print help

Table 3.11. Predefined field sets

FIELDS ALL DEFAULT
Id x x
Variable x x
Default value x x
Type x x
Role x x
Role id x x

Search / Order fields

ansible_role - string
avoid_duplicates - Values: true, false
imported - Values: true, false
key - string
merge_default - Values: true, false
merge_overrides - Values: true, false
name - string
override - Values: true, false
parameter - string

3.5.3.7. ansible variables obsolete

DEPRECATED: Obsolete Ansible variables. This will only obsolete variables for already existing roles, it will not delete any old roles

Usage

Options

--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
--proxy-id VALUE - Capsule to import from
-h, --help - Print help

3.5.3.8. ansible variables remove-matcher

Destroy an override value

Usage

Options

--id VALUE
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--name VALUE - Name to search by
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
-h, --help - Print help

3.5.3.9. ansible variables update

Updates Ansible variable

Usage

Options

--ansible-role VALUE - Name to search by
--ansible-role-id NUMBER - Role ID
--avoid-duplicates BOOLEAN - Remove duplicate values (only array type)
--default-value VALUE - Default value of variable
--description VALUE - Description of variable
--hidden-value BOOLEAN - When enabled the parameter is hidden in the UI
--id VALUE
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--merge-default BOOLEAN - Include default value when merging all matching values
--merge-overrides BOOLEAN - Merge all matching values (only array/hash type)
--name VALUE - Name to search by
--new-name VALUE
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
--override BOOLEAN - Whether to override variable or not
--override-value-order LIST - The order in which values are resolved
--validator-rule VALUE - Used to enforce certain values for the parameter values
--validator-type ENUM - Types of validation values Possible value(s): regexp, list
--variable VALUE - Name of variable
--variable-type ENUM - Types of variable values Possible value(s): string, boolean, integer, real, array, hash, yaml, json
-h, --help - Print help

3.6. architecture

Manipulate architectures

Usage

Options

-h, --help - Print help

3.6.1. architecture add-operatingsystem

Associate an operating system

Usage

Options

--id VALUE
--name VALUE - Architecture name
--operatingsystem VALUE - Operating system title
--operatingsystem-id NUMBER
-h, --help - Print help

3.6.2. architecture create

Create an architecture

Usage

Options

--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--name VALUE
--operatingsystem-ids LIST - Operating system IDs
--operatingsystems LIST
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
-h, --help - Print help
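For illustration, a possible create-and-associate sequence using the commands above (the architecture name and operating system title are placeholders):

hammer architecture create --name aarch64
hammer architecture add-operatingsystem --name aarch64 --operatingsystem "RedHat 9.4"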
3.6.3. architecture delete

Delete an architecture

Usage

Options

--id VALUE
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--name VALUE - Architecture name
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
-h, --help - Print help

3.6.4. architecture info

Show an architecture

Usage

Options

--fields LIST - Show specified fields or predefined field sets only. (See below)
--id VALUE
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--name VALUE - Architecture name
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
-h, --help - Print help

Table 3.12. Predefined field sets

FIELDS ALL DEFAULT THIN
Id x x x
Name x x x
Operating systems/ x x
Locations/ x x
Organizations/ x x
Created at x x
Updated at x x

3.6.5. architecture list

List all architectures

Usage

Options

--fields LIST - Show specified fields or predefined field sets only. (See below)
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--operatingsystem VALUE - Operating system title
--operatingsystem-id NUMBER - ID of operating system
--order VALUE - Sort and order by a searchable field, e.g. <field> DESC
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
--page NUMBER - Page number, starting at 1
--per-page VALUE - Number of results per page to return, all to return all results
--search VALUE - Filter results
-h, --help - Print help

Table 3.13. Predefined field sets

FIELDS ALL DEFAULT THIN
Id x x x
Name x x x

Search / Order fields

id - integer
name - string

3.6.6. architecture remove-operatingsystem

Disassociate an operating system

Usage

Options

--id VALUE
--name VALUE - Architecture name
--operatingsystem VALUE - Operating system title
--operatingsystem-id NUMBER
-h, --help - Print help

3.6.7. architecture update

Update an architecture

Usage

Options

--id VALUE
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--name VALUE
--new-name VALUE
--operatingsystem-ids LIST - Operating system IDs
--operatingsystems LIST
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
-h, --help - Print help

3.7. arf-report

Manipulate compliance reports

Usage

Options

-h, --help - Print help

3.7.1. arf-report delete

Delete an ARF Report

Usage

Options

--id VALUE
--location VALUE - Name to search by
--location-id NUMBER - Set the current location context for the request
--organization VALUE - Name to search by
--organization-id NUMBER - Set the current organization context for the request
-h, --help - Print help

3.7.2. arf-report download

Download bzipped ARF report

Usage

Options

--id VALUE
--location VALUE - Name to search by
--location-id NUMBER - Set the current location context for the request
--organization VALUE - Name to search by
--organization-id NUMBER - Set the current organization context for the request
--path VALUE - Path to directory where downloaded file will be saved
-h, --help - Print help

3.7.3. arf-report download-html

Download ARF report in HTML

Usage

Options

--id VALUE
--location VALUE - Name to search by
--location-id NUMBER - Set the current location context for the request
--organization VALUE - Name to search by
--organization-id NUMBER - Set the current organization context for the request
--path VALUE - Path to directory where downloaded file will be saved
-h, --help - Print help

3.7.4. arf-report info

Show an ARF report

Usage

Options

--fields LIST - Show specified fields or predefined field sets only. (See below)
--id VALUE
--location VALUE - Name to search by
--location-id NUMBER - Set the current location context for the request
--organization VALUE - Name to search by
--organization-id NUMBER - Set the current organization context for the request
-h, --help - Print help

Table 3.14. Predefined field sets

FIELDS ALL DEFAULT THIN
Id x x x
Reported at x x
Host name x x x
Openscap proxy name x x
Policy name x x
Passed x x
Failed x x
Othered x x
Host id x x
Openscap proxy id x x
Policy id x x
Locations/ x x
Organizations/ x x

3.7.5. arf-report list

List ARF reports

Usage

Options

--fields LIST - Show specified fields or predefined field sets only. (See below)
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--order VALUE - Sort and order by a searchable field, e.g. <field> DESC
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
--page NUMBER - Page number, starting at 1
--per-page VALUE - Number of results per page to return, all to return all results
--search VALUE - Filter results
-h, --help - Print help

Table 3.15. Predefined field sets

FIELDS ALL DEFAULT THIN
Id x x x
Reported at x x
Host name x x x
Openscap proxy name x x
Policy name x x
Passed x x
Failed x x
Othered x x

Search / Order fields

compliance_policy - string
compliance_status - Values: compliant, incompliant, inconclusive
comply_with - string
eventful - Values: true, false
host - string
host_collection - string
host_id - integer
host_owner_id - integer
hostgroup - string
hostgroup_fullname - string
hostgroup_title - string
id - integer
inconclusive_with - string
last_for - Values: host, policy
last_report - datetime
lifecycle_environment
location - string
location_id - integer
log - text
not_comply_with - string
openscap_proxy - string
organization - string
organization_id - integer
origin - string
policy - string
reported - datetime
resource - text
xccdf_rule_failed - string
xccdf_rule_name - text
xccdf_rule_othered - string
xccdf_rule_passed - string

3.8. audit

Search audit trails.

Usage

Options

-h, --help - Print help

3.8.1. audit info

Show an audit

Usage

Options

--fields LIST - Show specified fields or predefined field sets only. (See below)
--id VALUE
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
-h, --help - Print help

Table 3.16. Predefined field sets

FIELDS ALL DEFAULT THIN
Id x x x
At x x
Ip x x
User x x
Action x x
Audit type x x
Audit record x x
Request uuid x x
Audited changes/attribute x x
Audited changes/value x x
Audited changes/old x x
Audited changes/new x x

3.8.2. audit list

List all audits

Usage

Options

--fields LIST - Show specified fields or predefined field sets only. (See below)
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--order VALUE - Sort and order by a searchable field, e.g. <field> DESC
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
--page NUMBER - Page number, starting at 1
--per-page VALUE - Number of results per page to return, all to return all results
--search VALUE - Filter results
-h, --help - Print help

Table 3.17. Predefined field sets

FIELDS ALL DEFAULT THIN
Id x x x
At x x
Ip x x
User x x
Action x x
Audit type x x
Audit record x x
Request uuid x

Search / Order fields

location - string
location_id - integer
organization - string
organization_id - integer

3.9. auth

Foreman connection login/logout

Usage

Options

-h, --help - Print help

3.9.1. auth login

Set credentials

Usage

Options

-h, --help - Print help

3.9.1.1. auth login basic

provide username and password

Usage

Options

-h, --help - Print help
-p, --password VALUE - Password to access the remote system
-u, --username VALUE - Username to access the remote system
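A possible login sequence using the command above might be the following (the username is a placeholder; in interactive use, hammer prompts for the password when it is omitted):

hammer auth login basic --username admin
hammer auth status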
3.9.1.2. auth login basic-external

Authenticate against external source (IPA/PAM) with credentials

Usage

Options

-h, --help - Print help
-p, --password VALUE - Password to access the remote system
-u, --username VALUE - Username to access the remote system

3.9.1.3. auth login negotiate

negotiate the login credentials from the auth ticket (Kerberos)

Usage

Options

-h, --help - Print help

3.9.1.4. auth login oauth

supports for both with/without 2fa

Usage

Options

-a, --oidc-authorization-endpoint VALUE - Openidc provider URL which issues authentication code (two factor only)
-c, --oidc-client-id VALUE - Client id used in the Openidc provider
-f, --two-factor - Authenticate with two factor
-h, --help - Print help
-p, --password VALUE - Password to access the remote system
-r, --oidc-redirect-uri VALUE - Redirect URI for the authentication code grant flow
-t, --oidc-token-endpoint VALUE - Openidc provider URL which issues access token
-u, --username VALUE - Username to access the remote system

3.9.2. auth logout

Wipe your credentials

Usage

Options

-h, --help - Print help

3.9.3. auth status

Information about current connections

Usage

Options

-h, --help - Print help

3.10. auth-source

Manipulate auth sources

Usage

Options

-h, --help - Print help

3.10.1. auth-source external

Manage external auth sources

Usage

Options

-h, --help - Print help

3.10.1.1. auth-source external info

Show an external authentication source

Usage

Options

--fields LIST - Show specified fields or predefined field sets only. (See below)
--id VALUE
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--name VALUE - Name to search by
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
-h, --help - Print help

Table 3.18. Predefined field sets

FIELDS ALL DEFAULT THIN
Id x x x
Name x x x
Locations/ x x
Organizations/ x x

3.10.1.2. auth-source external list

List external authentication sources

Usage

Options

--fields LIST - Show specified fields or predefined field sets only. (See below)
--location VALUE - Set the current location context for the request
--location-id NUMBER - Scope by locations
--location-title VALUE - Set the current location context for the request
--order VALUE - Sort and order by a searchable field, e.g. <field> DESC
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Scope by organizations
--organization-title VALUE - Set the current organization context for the request
--page NUMBER - Page number, starting at 1
--per-page VALUE - Number of results per page to return, all to return all results
--search VALUE - Filter results
-h, --help - Print help

Table 3.19. Predefined field sets

FIELDS ALL DEFAULT THIN
Id x x x
Name x x x

Search / Order fields

id - integer
location - string
location_id - integer
name - string
organization - string
organization_id - integer

3.10.1.3. auth-source external update

Update organization and location for Auth Source

Usage

Options

--id VALUE
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-ids LIST - REPLACE locations with given ids
--location-title VALUE - Set the current location context for the request
--location-titles LIST
--locations LIST
--name VALUE
--new-name VALUE
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-ids LIST - REPLACE organizations with given ids.
--organization-title VALUE - Set the current organization context for the request
--organization-titles LIST
--organizations LIST
-h, --help - Print help

3.10.2. auth-source ldap

Manage LDAP auth sources

Usage

Options

-h, --help - Print help

3.10.2.1. auth-source ldap create

Create an LDAP authentication source

Usage

Options

--account VALUE
--account-password VALUE - Required if onthefly_register is true
--attr-firstname VALUE - Required if onthefly_register is true
--attr-lastname VALUE - Required if onthefly_register is true
--attr-login VALUE - Required if onthefly_register is true
--attr-mail VALUE - Required if onthefly_register is true
--attr-photo VALUE
--base-dn VALUE
--groups-base VALUE - Groups base DN
--host VALUE - The hostname of the LDAP server
--ldap-filter VALUE - LDAP filter
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-ids LIST - REPLACE locations with given ids
--location-title VALUE - Set the current location context for the request
--location-titles LIST
--locations LIST
--name VALUE
--onthefly-register BOOLEAN
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-ids LIST - REPLACE organizations with given ids.
--organization-title VALUE - Set the current organization context for the request
--organization-titles LIST
--organizations LIST
--port NUMBER - Defaults to 389
--server-type ENUM - Type of the LDAP server Possible value(s): free_ipa, active_directory, posix
--tls BOOLEAN
--use-netgroups BOOLEAN - Use NIS netgroups instead of posix groups, applicable only when server_type is posix or free_ipa
--usergroup-sync BOOLEAN - Sync external user groups on login
-h, --help - Print help

3.10.2.2. auth-source ldap delete

Delete an LDAP authentication source

Usage

Options

--id VALUE
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--name VALUE - Name to search by
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
-h, --help - Print help
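As a sketch of the LDAP create command above, assuming a FreeIPA-style directory (the hostname, base DN, and attribute names are placeholders; the attribute mappings shown are common LDAP defaults, not values from this guide):

hammer auth-source ldap create --name "Example IdM" --host ldap.example.com --server-type free_ipa --base-dn "dc=example,dc=com" --port 636 --tls true --onthefly-register true --attr-login uid --attr-firstname givenName --attr-lastname sn --attr-mail mail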
3.10.2.3. auth-source ldap info

Show an LDAP authentication source

Usage

Options

--fields LIST - Show specified fields or predefined field sets only. (See below)
--id VALUE
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--name VALUE - Name to search by
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
-h, --help - Print help

Table 3.20. Predefined field sets

FIELDS ALL DEFAULT
Server/id x x
Server/name x x
Server/server x x
Server/ldaps x x
Server/port x x
Server/server type x x
Account/account username x x
Account/base dn x x
Account/groups base dn x x
Account/use netgroups x x
Account/ldap filter x x
Account/automatically create accounts? x x
Account/usergroup sync x x
Attribute mappings/login name attribute x x
Attribute mappings/first name attribute x x
Attribute mappings/last name attribute x x
Attribute mappings/email address attribute x x
Attribute mappings/photo attribute x x
Locations/ x x
Organizations/ x x

3.10.2.4. auth-source ldap list

List all LDAP authentication sources

Usage

Options

--fields LIST - Show specified fields or predefined field sets only. (See below)
--location VALUE - Set the current location context for the request
--location-id NUMBER - Scope by locations
--location-title VALUE - Set the current location context for the request
--order VALUE - Sort and order by a searchable field, e.g. <field> DESC
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Scope by organizations
--organization-title VALUE - Set the current organization context for the request
--page NUMBER - Page number, starting at 1
--per-page VALUE - Number of results per page to return, all to return all results
--search VALUE - Filter results
-h, --help - Print help

Table 3.21. Predefined field sets

FIELDS ALL DEFAULT THIN
Id x x x
Name x x x
Server x x
Port x x
Ldaps? x x

Search / Order fields

id - integer
location - string
location_id - integer
name - string
organization - string
organization_id - integer

3.10.2.5. auth-source ldap update

Update an LDAP authentication source

Usage

Options

--account VALUE
--account-password VALUE - Required if onthefly_register is true
--attr-firstname VALUE - Required if onthefly_register is true
--attr-lastname VALUE - Required if onthefly_register is true
--attr-login VALUE - Required if onthefly_register is true
--attr-mail VALUE - Required if onthefly_register is true
--attr-photo VALUE
--base-dn VALUE
--groups-base VALUE - Groups base DN
--host VALUE - The hostname of the LDAP server
--id VALUE
--ldap-filter VALUE - LDAP filter
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-ids LIST - REPLACE locations with given ids
--location-title VALUE - Set the current location context for the request
--location-titles LIST
--locations LIST
--name VALUE
--new-name VALUE
--onthefly-register BOOLEAN
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-ids LIST - REPLACE organizations with given ids.
--organization-title VALUE - Set the current organization context for the request
--organization-titles LIST
--organizations LIST
--port NUMBER - Defaults to 389
--server-type ENUM - Type of the LDAP server Possible value(s): free_ipa, active_directory, posix
--tls BOOLEAN
--use-netgroups BOOLEAN - Use NIS netgroups instead of posix groups, applicable only when server_type is posix or free_ipa
--usergroup-sync BOOLEAN - Sync external user groups on login
-h, --help - Print help

3.10.3. auth-source list

List all auth sources

Usage

Options

--fields LIST - Show specified fields or predefined field sets only. (See below)
--location VALUE - Set the current location context for the request
--location-id NUMBER - Scope by locations
--location-title VALUE - Set the current location context for the request
--order VALUE - Sort and order by a searchable field, e.g. <field> DESC
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Scope by organizations
--organization-title VALUE - Set the current organization context for the request
--page NUMBER - Page number, starting at 1
--per-page VALUE - Number of results per page to return, all to return all results
--search VALUE - Filter results
-h, --help - Print help

Table 3.22. Predefined field sets

FIELDS ALL DEFAULT THIN
Id x x x
Name x x x
Type of auth source x x

Search / Order fields

id - integer
location - string
location_id - integer
name - string
organization - string
organization_id - integer

3.11. bookmark

Manage bookmarks

Usage

Options

-h, --help - Print help

3.11.1. bookmark create

Create a bookmark

Usage

Options

--controller VALUE
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--name VALUE
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
--public BOOLEAN
--query VALUE
-h, --help - Print help

3.11.2. bookmark delete

Delete a bookmark

Usage

Options

--id VALUE
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--name VALUE - Name to search by
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
-h, --help - Print help
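A hypothetical use of the bookmark create command above (the controller value "hosts" and the search query are assumptions for illustration, not values from this guide):

hammer bookmark create --name "rhel-hosts" --controller hosts --query "os = RedHat" --public true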
3.11.3. bookmark info

Show a bookmark

Usage

Options

--fields LIST - Show specified fields or predefined field sets only. (See below)
--id VALUE
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--name VALUE - Name to search by
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
-h, --help - Print help

Table 3.23. Predefined field sets

FIELDS ALL DEFAULT THIN
Id x x x
Name x x x
Controller x x
Search query x x
Public x x
Owner id x x
Owner type x x

3.11.4. bookmark list

List all bookmarks

Usage

Options

--fields LIST - Show specified fields or predefined field sets only. (See below)
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--order VALUE - Sort and order by a searchable field, e.g. <field> DESC
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
--page NUMBER - Page number, starting at 1
--per-page VALUE - Number of results per page to return, all to return all results
--search VALUE - Filter results
-h, --help - Print help

Table 3.24. Predefined field sets

FIELDS ALL DEFAULT THIN
Id x x x
Name x x x
Controller x x
Search query x x
Public x x
Owner id x x
Owner type x x

Search / Order fields

controller - string
id - integer
name - string

3.11.5. bookmark update

Update a bookmark

Usage

Options

--controller VALUE
--id VALUE
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--name VALUE
--new-name VALUE
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
--public BOOLEAN
--query VALUE
-h, --help - Print help

3.12. bootdisk

Download boot disks

Usage

Options

-h, --help - Print help

3.12.1. bootdisk generic

Download generic image

Usage

Options

--file VALUE - File or device to write image to
--force - Force writing to existing destination (device etc.)
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
--sudo - Use sudo to write to device
-h, --help - Print help

3.12.2. bootdisk host

Download host image

Usage

Options

--file VALUE - File or device to write image to
--force - Force writing to existing destination (device etc.)
--full BOOLEAN - True for full, false for basic reusable image
--host VALUE - Host name
--host-id VALUE
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
--sudo - Use sudo to write to device
-h, --help - Print help

3.12.3. bootdisk subnet

Download subnet generic image

Usage

Options

--file VALUE - File or device to write image to
--force - Force writing to existing destination (device etc.)
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
--subnet VALUE - Subnet name
--subnet-id VALUE
--sudo - Use sudo to write to device
-h, --help - Print help

3.13. capsule

Manipulate capsule

Usage

Options

-h, --help - Print help

3.13.1. capsule content

Manage the capsule content

Usage

Options

-h, --help - Print help

3.13.1.1. capsule content add-lifecycle-environment

Add lifecycle environments to the capsule

Usage

Options

--environment VALUE - Lifecycle environment name to search by (--environment is deprecated: Use --lifecycle-environment instead)
--environment-id NUMBER - (--environment-id is deprecated: Use --lifecycle-environment-id instead)
--id NUMBER - Id of the capsule
--lifecycle-environment VALUE - Lifecycle environment name to search by
--lifecycle-environment-id NUMBER - Id of the lifecycle environment
--name VALUE - Name to search by
--organization VALUE - Organization name
--organization-id VALUE - Organization ID
-h, --help - Print help

3.13.1.2. capsule content available-lifecycle-environments

List the lifecycle environments not attached to the capsule

Usage

Options

--fields LIST - Show specified fields or predefined field sets only. (See below)
--id NUMBER - Id of the capsule
--name VALUE - Name to search by
--organization VALUE - Organization name to search by
--organization-id NUMBER - Id of the organization to limit environments on
--organization-label VALUE - Organization label to search by
--organization-title VALUE - Organization title
-h, --help - Print help

Table 3.25. Predefined field sets

FIELDS ALL DEFAULT THIN
Id x x x
Name x x x
Organization x x

3.13.1.3. capsule content cancel-synchronization

Cancel running capsule synchronization

Usage

Options

--id NUMBER - Id of the capsule
--name VALUE - Name to search by
-h, --help - Print help
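For illustration, a possible sequence for attaching an environment to a capsule using the commands above (the capsule name, organization, and environment are placeholders):

hammer capsule content available-lifecycle-environments --name capsule.example.com --organization "Example Org"
hammer capsule content add-lifecycle-environment --name capsule.example.com --organization "Example Org" --lifecycle-environment "Library"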
3.13.1.4. capsule content info

Get current capsule synchronization status

Usage

Options

--fields LIST - Show specified fields or predefined field sets only. (See below)
--id NUMBER - Id of the capsule
--name VALUE - Name to search by
--organization VALUE - Organization name to search by
--organization-id NUMBER - Id of the organization to get the status for
--organization-label VALUE - Organization label to search by
--organization-title VALUE - Organization title
-h, --help - Print help

Table 3.26. Predefined field sets

FIELDS ALL DEFAULT
Lifecycle environments/name x x
Lifecycle environments/organization x x
Lifecycle environments/content views/name x x
Lifecycle environments/content views/composite x x
Lifecycle environments/content views/last published x x
Lifecycle environments/content views/repositories/repository id x x
Lifecycle environments/content views/repositories/repository name x x
Lifecycle environments/content views/repositories/content counts/warning x x
Lifecycle environments/content views/repositories/content counts/packages x x
Lifecycle environments/content views/repositories/content counts/srpms x x
Lifecycle environments/content views/repositories/content counts/module streams x x
Lifecycle environments/content views/repositories/content counts/package groups x x
Lifecycle environments/content views/repositories/content counts/errata x x
Lifecycle environments/content views/repositories/content counts/debian packages x x
Lifecycle environments/content views/repositories/content counts/container tags x x
Lifecycle environments/content views/repositories/content counts/container ma... x x
Lifecycle environments/content views/repositories/content counts/container ma... x x
Lifecycle environments/content views/repositories/content counts/files x x
Lifecycle environments/content views/repositories/content counts/ansible coll... x x
Lifecycle environments/content views/repositories/content counts/ostree refs x x
Lifecycle environments/content views/repositories/content counts/python packages x x

3.13.1.5. capsule content lifecycle-environments

List the lifecycle environments attached to the capsule

Usage

Options

--fields LIST - Show specified fields or predefined field sets only. (See below)
--id NUMBER - Id of the capsule
--name VALUE - Name to search by
--organization VALUE - Organization name to search by
--organization-id NUMBER - Id of the organization to limit environments on
--organization-label VALUE - Organization label to search by
--organization-title VALUE - Organization title
-h, --help - Print help

Table 3.27. Predefined field sets

FIELDS ALL DEFAULT THIN
Id x x x
Name x x x
Organization x x

3.13.1.6. capsule content reclaim-space

Reclaim space from all On Demand repositories on a capsule

Usage

Options

--async - Do not wait for the task
--id NUMBER - Id of the capsule
--name VALUE - Name to search by
-h, --help - Print help

3.13.1.7. capsule content remove-lifecycle-environment

Remove lifecycle environments from the capsule

Usage

Options

--environment VALUE - Lifecycle environment name to search by (--environment is deprecated: Use --lifecycle-environment instead)
--environment-id NUMBER - (--environment-id is deprecated: Use --lifecycle-environment-id instead)
--id NUMBER - Id of the capsule
--lifecycle-environment VALUE - Lifecycle environment name to search by
--lifecycle-environment-id NUMBER - Id of the lifecycle environment
--name VALUE - Name to search by
--organization VALUE - Organization name
--organization-id VALUE - Organization ID
-h, --help - Print help

3.13.1.8. capsule content synchronization-status

Get current capsule synchronization status

Usage

Options

--fields LIST - Show specified fields or predefined field sets only. (See below)
--id NUMBER - Id of the capsule
--name VALUE - Name to search by
--organization VALUE - Organization name to search by
--organization-id NUMBER - Id of the organization to get the status for
--organization-label VALUE - Organization label to search by
--organization-title VALUE - Organization title
-h, --help - Print help

Table 3.28. Predefined field sets

FIELDS ALL DEFAULT
Last sync x x
Status x x
Currently running sync tasks/task id x x
Currently running sync tasks/progress x x
Last failure/task id x x
Last failure/messages x x

3.13.1.9. capsule content synchronize

Synchronize the content to the capsule

Usage

Options

--async - Do not wait for the task
--content-view VALUE - Content view name to search by
--content-view-id NUMBER - Id of the content view to limit the synchronization on
--environment VALUE - Lifecycle environment name to search by (--environment is deprecated: Use --lifecycle-environment instead)
--environment-id NUMBER - (--environment-id is deprecated: Use --lifecycle-environment-id instead)
--id NUMBER - Id of the capsule
--lifecycle-environment VALUE - Lifecycle environment name to search by
--lifecycle-environment-id NUMBER - Id of the environment to limit the synchronization on
--name VALUE - Name to search by
--organization VALUE - Organization name
--organization-id VALUE - Organization ID
--repository VALUE - Repository name to search by
--repository-id NUMBER - Id of the repository to limit the synchronization on
--skip-metadata-check BOOLEAN - Skip metadata check on each repository on the capsule
-h, --help - Print help

3.13.1.10. capsule content update-counts

Update content counts for the capsule

Usage

Options

--async - Do not wait for the task
--id NUMBER - Id of the capsule
--name VALUE - Name to search by
--organization VALUE - Organization name
--organization-id VALUE - Organization ID
-h, --help - Print help

3.13.2. capsule create

Create a capsule

Usage

Options

--download-policy VALUE - Download Policy of the capsule, must be one of on_demand, immediate, inherit, streamed
--http-proxy VALUE - Name to search by
--http-proxy-id NUMBER - Id of the HTTP Proxy to use with alternate content sources
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-ids LIST - REPLACE locations with given ids
--location-title VALUE - Set the current location context for the request
--location-titles LIST
--locations LIST
--name VALUE
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-ids LIST - REPLACE organizations with given ids.
--organization-title VALUE - Set the current organization context for the request
--organization-titles LIST
--organizations LIST
--url VALUE
-h, --help - Print help

3.13.3. capsule delete

Delete a capsule

Usage

Options

--id VALUE
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--name VALUE - Name to search by
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
-h, --help - Print help
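As an example of the synchronize command above, a possible invocation limited to one lifecycle environment and run in the background (the capsule name, organization, and environment are placeholders):

hammer capsule content synchronize --name capsule.example.com --organization "Example Org" --lifecycle-environment "Library" --async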
3.13.4. capsule import-subnets Import subnets from Capsule Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.13.5. capsule info Show a capsule Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --include-status BOOLEAN - Flag to indicate whether to include status or not --include-version BOOLEAN - Flag to indicate whether to include version or not --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.29. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Status x x Url x x Features x x Version x x Host count x x Features/name x x Features/version x x Locations/ x x Organizations/ x x Created at x x Updated at x x 3.13.6. capsule list List all capsules Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --include-status BOOLEAN - Flag to indicate whether to include status or not --location VALUE - Set the current location context for the request --location-id NUMBER - Scope by locations --location-title VALUE - Set the current location context for the request --order VALUE - Sort and order by a searchable field, e.g. <field> DESC --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Scope by organizations --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --search VALUE - Filter results -h , --help - Print help Table 3.30. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Status x x Url x x Features x x Search / Order fields feature - string id - integer location - string location_id - integer name - string organization - string organization_id - integer url - string 3.13.7. capsule refresh-features Refresh capsule features Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help
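A quick sketch of inspecting capsules; the search expression and capsule name are placeholders:
$ hammer capsule list --search "feature = Templates"
$ hammer capsule info --name "capsule.example.com" --include-status true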
3.13.8. capsule update Update a capsule Usage Options --download-policy VALUE - Download Policy of the capsule, must be one of on_demand, immediate, inherit, streamed --http-proxy VALUE - Name to search by --http-proxy-id NUMBER - Id of the HTTP Proxy to use with alternate content sources --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-ids LIST - REPLACE locations with given ids --location-title VALUE - Set the current location context for the request --location-titles LIST --locations LIST --name VALUE --new-name VALUE --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST - REPLACE organizations with given ids. --organization-title VALUE - Set the current organization context for the request --organization-titles LIST --organizations LIST --url VALUE -h , --help - Print help 3.14. compute-profile Manipulate compute profiles Usage Options -h , --help - Print help 3.14.1. compute-profile create Create a compute profile Usage Options --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.14.2. compute-profile delete Delete a compute profile Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Compute profile name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.14.3. compute-profile info Show a compute profile Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Compute profile name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.31. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Locations/ x x Organizations/ x x Created at x x Updated at x x Compute attributes/id x x Compute attributes/name x x Compute attributes/compute resource x x Compute attributes/vm attributes x x 3.14.4. compute-profile list List of compute profiles Usage Options --fields LIST - Show specified fields or predefined field sets only.
(See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --order VALUE - Sort and order by a searchable field, e.g. <field> DESC --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --search VALUE - Filter results -h , --help - Print help Table 3.32. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Search / Order fields id - integer name - string 3.14.5. compute-profile update Update a compute profile Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE --new-name VALUE --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.14.6. compute-profile values Create, update, and delete Compute profile values Usage Options -h , --help - Print help 3.14.6.1. compute-profile values add-interface Add interface for Compute Profile Usage Options --compute-profile VALUE - Compute profile name --compute-profile-id VALUE --compute-resource VALUE - Compute resource name --compute-resource-id VALUE --interface KEY_VALUE_LIST - Interface parameters, should be comma separated list of values --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Provider specific options Bold attributes are required. EC2: --interface : Libvirt: --interface : compute_type - Possible values: bridge, network compute_bridge - Name of interface according to type compute_model - Possible values: virtio, rtl8139, ne2k_pci, pcnet, e1000 compute_network - Libvirt instance network, e.g. default OpenStack: --interface : Red Hat Virtualization: --interface : compute_name - Compute name, e.g. eth0 compute_network - Select one of available networks for a cluster, must be an ID or a name compute_interface - Interface type compute_vnic_profile - Vnic Profile Rackspace: --interface : VMware: --interface : compute_type - Type of the network adapter, for example one of: VirtualVmxnet3, VirtualE1000, See documentation center for your version of vSphere to find more details about available adapter types: https://www.vmware.com/support/pubs/ compute_network - Network ID or Network Name from VMware AzureRM: --interface : compute_network - Select one of available Azure Subnets, must be an ID compute_public_ip - Public IP (None, Static, Dynamic) compute_private_ip - Static Private IP (expressed as true or false) GCE: --interface :
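For instance, a sketch of adding a Libvirt bridge interface to a profile; the profile and compute resource names are assumptions:
$ hammer compute-profile values add-interface --compute-profile "2-Medium" --compute-resource "libvirt-cr" --interface "compute_type=bridge,compute_bridge=br0,compute_model=virtio"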
3.14.6.2. compute-profile values add-volume Add volume for Compute Profile Usage Options --compute-profile VALUE - Compute profile name --compute-profile-id VALUE --compute-resource VALUE - Compute resource name --compute-resource-id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --volume KEY_VALUE_LIST - Volume parameters, should be comma separated list of values -h , --help - Print help Provider specific options Bold attributes are required. EC2: --volume : Libvirt: --volume : pool_name - One of available storage pools capacity - String value, e.g. 10G allocation - Initial allocation, e.g. 0G format_type - Possible values: raw, qcow2 OpenStack: --volume : Red Hat Virtualization: --volume : size_gb - Volume size in GB, integer value storage_domain - ID or name of storage domain bootable - Boolean, set 1 for bootable, only one volume can be bootable preallocate - Boolean, set 1 to preallocate wipe_after_delete - Boolean, set 1 to wipe disk after delete interface - Disk interface name, must be ide, virtio or virtio_scsi Rackspace: --volume : VMware: --volume : name - storage_pod - Storage Pod ID from VMware datastore - Datastore ID from VMware mode - persistent/independent_persistent/independent_nonpersistent size_gb - Integer number, volume size in GB thin - true/false eager_zero - true/false controller_key - Associated SCSI controller key AzureRM: --volume : disk_size_gb - Volume Size in GB (integer value) data_disk_caching - Data Disk Caching (None, ReadOnly, ReadWrite) GCE: --volume : size_gb - Volume size in GB, integer value 3.14.6.3. compute-profile values create Create compute profile set of values Usage Options --compute-attributes KEY_VALUE_LIST Compute resource attributes --compute-profile VALUE - Compute profile name --compute-profile-id VALUE --compute-resource VALUE - Compute resource name --compute-resource-id VALUE --interface KEY_VALUE_LIST - Interface parameters, should be comma separated list of values Can be specified multiple times. --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --volume KEY_VALUE_LIST - Volume parameters, should be comma separated list of values Can be specified multiple times. -h , --help - Print help Provider specific options Bold attributes are required. EC2: --volume : --interface : --compute-attributes : availability_zone - flavor_id - groups - security_group_ids - managed_ip - Libvirt: --volume : pool_name - One of available storage pools capacity - String value, e.g. 10G allocation - Initial allocation, e.g.
0G format_type - Possible values: raw, qcow2 --interface : compute_type - Possible values: bridge, network compute_bridge - Name of interface according to type compute_model - Possible values: virtio, rtl8139, ne2k_pci, pcnet, e1000 compute_network - Libvirt instance network, e.g. default --compute-attributes : cpus - Number of CPUs memory - String, amount of memory, value in bytes cpu_mode - Possible values: default, host-model, host-passthrough boot_order - Device names to specify the boot order OpenStack: --volume : --interface : --compute-attributes : availability_zone - boot_from_volume - flavor_ref - image_ref - tenant_id - security_groups - network - Red Hat Virtualization: --volume : size_gb - Volume size in GB, integer value storage_domain - ID or name of storage domain bootable - Boolean, set 1 for bootable, only one volume can be bootable preallocate - Boolean, set 1 to preallocate wipe_after_delete - Boolean, set 1 to wipe disk after delete interface - Disk interface name, must be ide, virtio or virtio_scsi --interface : compute_name - Compute name, e.g. eth0 compute_network - Select one of available networks for a cluster, must be an ID or a name compute_interface - Interface type compute_vnic_profile - Vnic Profile --compute-attributes : cluster - ID or name of cluster to use template - Hardware profile to use cores - Integer value, number of cores sockets - Integer value, number of sockets memory - Amount of memory, integer value in bytes ha - Boolean, set 1 for high availability display_type - Possible values: VNC, SPICE keyboard_layout - Possible values: ar, de-ch, es, fo, fr-ca, hu, ja, mk, no, pt-br, sv, da, en-gb, et, fr, fr-ch, is, lt, nl, pl, ru, th, de, en-us, fi, fr-be, hr, it, lv, nl-be, pt, sl, tr. Not usable if display type is SPICE. Rackspace: --volume : --interface : --compute-attributes : flavor_id - VMware: --volume : name - storage_pod - Storage Pod ID from VMware datastore - Datastore ID from VMware mode - persistent/independent_persistent/independent_nonpersistent size_gb - Integer number, volume size in GB thin - true/false eager_zero - true/false controller_key - Associated SCSI controller key --interface : compute_type - Type of the network adapter, for example one of: VirtualVmxnet3, VirtualE1000, See documentation center for your version of vSphere to find more details about available adapter types: https://www.vmware.com/support/pubs/ compute_network - Network ID or Network Name from VMware --compute-attributes : cluster - Cluster ID from VMware corespersocket - Number of cores per socket (applicable to hardware versions < 10 only) cpus - CPU count memory_mb - Integer number, amount of memory in MB path - Path to folder resource_pool - Resource Pool ID from VMware firmware - automatic/bios/efi guest_id - Guest OS ID from VMware hardware_version - Hardware version ID from VMware memoryHotAddEnabled - Must be a 1 or 0, lets you add memory resources while the machine is on cpuHotAddEnabled - Must be a 1 or 0, lets you add CPU resources while the machine is on add_cdrom - Must be a 1 or 0, Add a CD-ROM drive to the virtual machine annotation - Annotation Notes scsi_controllers - List with SCSI controllers definitions type - ID of the controller from VMware key - Key of the controller (e.g.
1000) boot_order - Device names to specify the boot order AzureRM: --volume : disk_size_gb - Volume Size in GB (integer value) data_disk_caching - Data Disk Caching (None, ReadOnly, ReadWrite) --interface : compute_network - Select one of available Azure Subnets, must be an ID compute_public_ip - Public IP (None, Static, Dynamic) compute_private_ip - Static Private IP (expressed as true or false) --compute-attributes : resource_group - Existing Azure Resource Group of user vm_size - VM Size, e.g. Standard_A0 etc. username - The Admin username password - The Admin password platform - OS type, e.g. Linux ssh_key_data - SSH key for passwordless authentication os_disk_caching - OS disk caching premium_os_disk - Premium OS Disk, Boolean as 0 or 1 script_command - Custom Script Command script_uris - Comma separated file URIs GCE: --volume : size_gb - Volume size in GB, integer value --interface : --compute-attributes : machine_type - network - associate_external_ip - 3.14.6.4. compute-profile values remove-interface Remove compute profile interface Usage Options --compute-profile VALUE - Compute profile name --compute-profile-id VALUE --compute-resource VALUE - Compute resource name --compute-resource-id VALUE --interface-id NUMBER - Interface id --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.14.6.5. compute-profile values remove-volume Remove compute profile volume Usage Options --compute-profile VALUE - Compute profile name --compute-profile-id VALUE --compute-resource VALUE - Compute resource name --compute-resource-id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --volume-id NUMBER - Volume id -h , --help - Print help 3.14.6.6. compute-profile values update Update compute profile values Usage Options --compute-attributes KEY_VALUE_LIST Compute resource attributes, should be comma separated list of values --compute-profile VALUE - Compute profile name --compute-profile-id VALUE --compute-resource VALUE - Compute resource name --compute-resource-id VALUE --interface KEY_VALUE_LIST - Interface parameters, should be comma separated list of values Can be specified multiple times. --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --volume KEY_VALUE_LIST - Volume parameters, should be comma separated list of values Can be specified multiple times.
-h , --help - Print help Provider specific options Bold attributes are required. EC2: --volume : --interface : --compute-attributes : availability_zone - flavor_id - groups - security_group_ids - managed_ip - Libvirt: --volume : pool_name - One of available storage pools capacity - String value, e.g. 10G allocation - Initial allocation, e.g. 0G format_type - Possible values: raw, qcow2 --interface : compute_type - Possible values: bridge, network compute_bridge - Name of interface according to type compute_model - Possible values: virtio, rtl8139, ne2k_pci, pcnet, e1000 compute_network - Libvirt instance network, e.g. default --compute-attributes : cpus - Number of CPUs memory - String, amount of memory, value in bytes cpu_mode - Possible values: default, host-model, host-passthrough boot_order - Device names to specify the boot order OpenStack: --volume : --interface : --compute-attributes : availability_zone - boot_from_volume - flavor_ref - image_ref - tenant_id - security_groups - network - Red Hat Virtualization: --volume : size_gb - Volume size in GB, integer value storage_domain - ID or name of storage domain bootable - Boolean, set 1 for bootable, only one volume can be bootable preallocate - Boolean, set 1 to preallocate wipe_after_delete - Boolean, set 1 to wipe disk after delete interface - Disk interface name, must be ide, virtio or virtio_scsi --interface : compute_name - Compute name, e.g. eth0 compute_network - Select one of available networks for a cluster, must be an ID or a name compute_interface - Interface type compute_vnic_profile - Vnic Profile --compute-attributes : cluster - ID or name of cluster to use template - Hardware profile to use cores - Integer value, number of cores sockets - Integer value, number of sockets memory - Amount of memory, integer value in bytes ha - Boolean, set 1 for high availability display_type - Possible values: VNC, SPICE keyboard_layout - Possible values: ar, de-ch, es, fo, fr-ca, hu, ja, mk, no, pt-br, sv, da, en-gb, et, fr, fr-ch, is, lt, nl, pl, ru, th, de, en-us, fi, fr-be, hr, it, lv, nl-be, pt, sl, tr. Not usable if display type is SPICE.
Rackspace: --volume : --interface : --compute-attributes : flavor_id - VMware: --volume : name - storage_pod - Storage Pod ID from VMware datastore - Datastore ID from VMware mode - persistent/independent_persistent/independent_nonpersistent size_gb - Integer number, volume size in GB thin - true/false eager_zero - true/false controller_key - Associated SCSI controller key --interface : compute_type - Type of the network adapter, for example one of: VirtualVmxnet3, VirtualE1000, See documentation center for your version of vSphere to find more details about available adapter types: https://www.vmware.com/support/pubs/ compute_network - Network ID or Network Name from VMware --compute-attributes : cluster - Cluster ID from VMware corespersocket - Number of cores per socket (applicable to hardware versions < 10 only) cpus - CPU count memory_mb - Integer number, amount of memory in MB path - Path to folder resource_pool - Resource Pool ID from VMware firmware - automatic/bios/efi guest_id - Guest OS ID from VMware hardware_version - Hardware version ID from VMware memoryHotAddEnabled - Must be a 1 or 0, lets you add memory resources while the machine is on cpuHotAddEnabled - Must be a 1 or 0, lets you add CPU resources while the machine is on add_cdrom - Must be a 1 or 0, Add a CD-ROM drive to the virtual machine annotation - Annotation Notes scsi_controllers - List with SCSI controllers definitions type - ID of the controller from VMware key - Key of the controller (e.g. 1000) boot_order - Device names to specify the boot order AzureRM: --volume : disk_size_gb - Volume Size in GB (integer value) data_disk_caching - Data Disk Caching (None, ReadOnly, ReadWrite) --interface : compute_network - Select one of available Azure Subnets, must be an ID compute_public_ip - Public IP (None, Static, Dynamic) compute_private_ip - Static Private IP (expressed as true or false) --compute-attributes : resource_group - Existing Azure Resource Group of user vm_size - VM Size, e.g. Standard_A0 etc. username - The Admin username password - The Admin password platform - OS type, e.g. Linux ssh_key_data - SSH key for passwordless authentication os_disk_caching - OS disk caching premium_os_disk - Premium OS Disk, Boolean as 0 or 1 script_command - Custom Script Command script_uris - Comma separated file URIs GCE: --volume : size_gb - Volume size in GB, integer value --interface : --compute-attributes : machine_type - network - associate_external_ip - 3.14.6.7. compute-profile values update-interface Update compute profile interface Usage Options --compute-profile VALUE - Compute profile name --compute-profile-id VALUE --compute-resource VALUE - Compute resource name --compute-resource-id VALUE --interface KEY_VALUE_LIST - Interface parameters, should be comma separated list of values --interface-id NUMBER - Interface id --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Provider specific options Bold attributes are required.
EC2: --interface : Libvirt: --interface : compute_type - Possible values: bridge, network compute_bridge - Name of interface according to type compute_model - Possible values: virtio, rtl8139, ne2k_pci, pcnet, e1000 compute_network - Libvirt instance network, e.g. default OpenStack: --interface : Red Hat Virtualization: --interface : compute_name - Compute name, e.g. eth0 compute_network - Select one of available networks for a cluster, must be an ID or a name compute_interface - Interface type compute_vnic_profile - Vnic Profile Rackspace: --interface : VMware: --interface : compute_type - Type of the network adapter, for example one of: VirtualVmxnet3, VirtualE1000, See documentation center for your version of vSphere to find more details about available adapter types: https://www.vmware.com/support/pubs/ compute_network - Network ID or Network Name from VMware AzureRM: --interface : compute_network - Select one of available Azure Subnets, must be an ID compute_public_ip - Public IP (None, Static, Dynamic) compute_private_ip - Static Private IP (expressed as true or false) GCE: --interface : 3.14.6.8. compute-profile values update-volume Update compute profile volume Usage Options --compute-profile VALUE - Compute profile name --compute-profile-id VALUE --compute-resource VALUE - Compute resource name --compute-resource-id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --volume KEY_VALUE_LIST - Volume parameters, should be comma separated list of values --volume-id NUMBER - Volume id -h , --help - Print help Provider specific options Bold attributes are required. EC2: --volume : Libvirt: --volume : pool_name - One of available storage pools capacity - String value, e.g. 10G allocation - Initial allocation, e.g. 0G format_type - Possible values: raw, qcow2 OpenStack: --volume : Red Hat Virtualization: --volume : size_gb - Volume size in GB, integer value storage_domain - ID or name of storage domain bootable - Boolean, set 1 for bootable, only one volume can be bootable preallocate - Boolean, set 1 to preallocate wipe_after_delete - Boolean, set 1 to wipe disk after delete interface - Disk interface name, must be ide, virtio or virtio_scsi Rackspace: --volume : VMware: --volume : name - storage_pod - Storage Pod ID from VMware datastore - Datastore ID from VMware mode - persistent/independent_persistent/independent_nonpersistent size_gb - Integer number, volume size in GB thin - true/false eager_zero - true/false controller_key - Associated SCSI controller key AzureRM: --volume : disk_size_gb - Volume Size in GB (integer value) data_disk_caching - Data Disk Caching (None, ReadOnly, ReadWrite) GCE: --volume : size_gb - Volume size in GB, integer value 3.15. compute-resource Manipulate compute resources Usage Options -h , --help - Print help
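Before the individual subcommands, a hedged sketch of registering a Libvirt compute resource with compute-resource create (documented below); the name and URL are assumptions:
$ hammer compute-resource create --name "libvirt-cr" --provider "Libvirt" --url "qemu+ssh://root@kvm.example.com/system"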
3.15.1. compute-resource associate-vms Associate VMs to Hosts Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Compute resource name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --vm-id VALUE - Associate a specific VM -h , --help - Print help 3.15.2. compute-resource clusters List available clusters for a compute resource Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Compute resource name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.33. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Datacenter x x Hosts x x Cluster path x x 3.15.3. compute-resource create Create a compute resource Usage Options --app-ident VALUE - Client ID for AzureRm --caching-enabled BOOLEAN - Enable caching, for VMware only --cloud VALUE - Cloud --datacenter VALUE - For RHEV, VMware Datacenter --description VALUE --display-type ENUM - For Libvirt and RHEV only Possible value(s): VNC , SPICE --domain VALUE - For RHEL OpenStack Platform (v3) only --email VALUE - Deprecated, email is automatically loaded from the JSON file. For GCE only --key-path VALUE - Certificate path, for GCE only --keyboard-layout ENUM - For RHEV only Possible value(s): ar , de-ch , es , fo , fr-ca , hu , ja , mk , no , pt-br , sv , da , en-gb , et , fr , fr-ch , is , lt , nl , pl , ru , th , de , en-us , fi , fr-be , hr , it , lv , nl-be , pt , sl , tr --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-ids LIST - REPLACE locations with given ids --location-title VALUE - Set the current location context for the request --location-titles LIST --locations LIST --name VALUE --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST - REPLACE organizations with given ids. --organization-title VALUE - Set the current organization context for the request --organization-titles LIST --organizations LIST --ovirt-quota VALUE - For RHEV only, ID or Name of quota to use --password VALUE - Password for RHEV, EC2, VMware, RHEL OpenStack Platform. Secret key for EC2 --project VALUE - Deprecated, project is automatically loaded from the JSON file. For GCE only --project-domain-id VALUE - For RHEL OpenStack Platform (v3) only --project-domain-name VALUE - For RHEL OpenStack Platform (v3) only --provider VALUE - Providers include Libvirt, Ovirt, EC2, Vmware, Openstack, AzureRm, GCE --public-key VALUE - For RHEV only --public-key-path FILE - Path to a file that contains oVirt public key (For oVirt only) --region VALUE - For AzureRm e.g.
eastus and for EC2 only. Use us-gov-west-1 for EC2 GovCloud region --secret-key VALUE - Client Secret for AzureRm --server VALUE - For VMware --set-console-password BOOLEAN For Libvirt and VMware only --sub-id VALUE - Subscription ID for AzureRm --tenant VALUE - For RHEL OpenStack Platform and AzureRm only --url VALUE - URL for Libvirt, RHEV and RHEL OpenStack Platform --user VALUE - Username for RHEV, EC2, VMware, RHEL OpenStack Platform. Access Key for EC2. --zone VALUE - Zone, for GCE only -h , --help - Print help 3.15.4. compute-resource delete Delete a compute resource Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Compute resource name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.15.5. compute-resource flavors List available flavors for a compute resource Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Compute resource name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.34. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x 3.15.6. compute-resource folders List available folders for a compute resource Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Compute resource name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.35. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Parent x x Datacenter x x Path x x Type x x 3.15.7. compute-resource image View and manage compute resource's images Usage Options -h , --help - Print help 3.15.7.1. compute-resource image available Show images available for addition Usage Options --compute-resource VALUE - Compute resource name --compute-resource-id VALUE --fields LIST - Show specified fields or predefined field sets only. 
(See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.36. Predefined field sets FIELDS ALL DEFAULT THIN Name x x x Uuid x x 3.15.7.2. compute-resource image create Create an image Usage Options --architecture VALUE - Architecture name --architecture-id VALUE - ID of architecture --compute-resource VALUE - Compute resource name --compute-resource-id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE --operatingsystem VALUE - Operating system title --operatingsystem-id NUMBER - ID of operating system --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --password VALUE --user-data BOOLEAN - Whether or not the image supports user data --username VALUE --uuid VALUE - Template ID in the compute resource -h , --help - Print help 3.15.7.3. compute-resource image delete Delete an image Usage Options --compute-resource VALUE - Compute resource name --compute-resource-id VALUE --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.15.7.4. compute-resource image info Show an image Usage Options --architecture VALUE - Architecture name --architecture-id VALUE - ID of architecture --compute-resource VALUE - Compute resource name --compute-resource-id VALUE - ID of compute resource --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --operatingsystem VALUE - Operating system title --operatingsystem-id NUMBER - ID of operating system --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.37. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Operating system x x Username x x Uuid x x User data x x Architecture x x Iam role x x Created at x x Updated at x x
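For example, a sketch of registering an image for a compute resource; every value here (resource name, OS title, template path) is a placeholder:
$ hammer compute-resource image create --compute-resource "libvirt-cr" --name "rhel9-template" --operatingsystem "RedHat 9.2" --architecture "x86_64" --username "root" --uuid "/var/lib/libvirt/images/rhel9-template.qcow2"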
3.15.7.5. compute-resource image list List all images for a compute resource Usage Options --architecture VALUE - Architecture name --architecture-id VALUE - ID of architecture --compute-resource VALUE - Compute resource name --compute-resource-id VALUE - ID of compute resource --fields LIST - Show specified fields or predefined field sets only. (See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --operatingsystem VALUE - Operating system title --operatingsystem-id NUMBER - ID of operating system --order VALUE - Sort and order by a searchable field, e.g. <field> DESC --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --search VALUE - Filter results -h , --help - Print help Table 3.38. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Operating system x x Username x x Uuid x x User data x x Search / Order fields architecture - integer compute_resource - string id - integer name - string operatingsystem - integer user_data - Values: true, false username - string 3.15.7.6. compute-resource image update Update an image Usage Options --architecture VALUE - Architecture name --architecture-id VALUE - ID of architecture --compute-resource VALUE - Compute resource name --compute-resource-id VALUE --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE --new-name VALUE --operatingsystem VALUE - Operating system title --operatingsystem-id NUMBER - ID of operating system --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --password VALUE --user-data BOOLEAN - Whether or not the image supports user data --username VALUE --uuid VALUE - Template ID in the compute resource -h , --help - Print help 3.15.8. compute-resource images List available images for a compute resource Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Compute resource name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.39. Predefined field sets FIELDS ALL DEFAULT THIN Uuid x x Name x x x Path x x 3.15.9. compute-resource info Show a compute resource Usage Options --fields LIST - Show specified fields or predefined field sets only.
(See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Compute resource name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.40. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Provider x x Description x x User x x Locations/ x x Organizations/ x x Created at x x Updated at x x 3.15.10. compute-resource list List all compute resources Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Scope by locations --location-title VALUE - Set the current location context for the request --order VALUE - Sort and order by a searchable field, e.g. <field> DESC --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Scope by organizations --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --search VALUE - Filter results -h , --help - Print help Table 3.41. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Provider x x Search / Order fields id - integer location - string location_id - integer name - string organization - string organization_id - integer type - string 3.15.11. compute-resource networks List available networks for a compute resource Usage Options --cluster-id VALUE - Cluster ID (Deprecated: Use --cluster-name instead) --cluster-name VALUE - Cluster name or path to search by --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Compute resource name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.42. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Datacenter x x Virtual switch x x Vlan id x x 3.15.12. compute-resource resource-pools List resource pools for a compute resource cluster Usage Options --cluster-id VALUE - Cluster ID (Deprecated: Use --cluster-name instead) --cluster-name VALUE - Cluster name or path to search by --fields LIST - Show specified fields or predefined field sets only. 
(See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Compute resource name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.43. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Cluster x x Datacenter x x 3.15.13. compute-resource security-groups List available security groups for a compute resource Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Compute resource name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.44. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x 3.15.14. compute-resource storage-domains List storage domains for a compute resource Usage Options --cluster-id VALUE - Cluster ID (Deprecated: Use --cluster-name instead) --cluster-name VALUE - Cluster name or path to search by --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Compute resource name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --storage-domain VALUE -h , --help - Print help Table 3.45. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x 3.15.15. compute-resource storage-pods List storage pods for a compute resource Usage Options --cluster-id VALUE - Cluster ID (Deprecated: Use --cluster-name instead) --cluster-name VALUE - Cluster name or path to search by --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Compute resource name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --storage-pod VALUE -h , --help - Print help Table 3.46. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Datacenter x x
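For instance, to list the storage pods of a hypothetical VMware compute resource, scoped to one cluster:
$ hammer compute-resource storage-pods --name "vmware-cr" --cluster-name "Cluster1"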
3.15.16. compute-resource update Update a compute resource Usage Options --app-ident VALUE - Client ID for AzureRm --caching-enabled BOOLEAN - Enable caching, for VMware only --cloud VALUE - Cloud --datacenter VALUE - For RHEV, VMware Datacenter --description VALUE --display-type ENUM - For Libvirt and RHEV only Possible value(s): VNC , SPICE --domain VALUE - For RHEL OpenStack Platform (v3) only --email VALUE - Deprecated, email is automatically loaded from the JSON file. For GCE only --id VALUE --key-path VALUE - Certificate path, for GCE only --keyboard-layout ENUM - For RHEV only Possible value(s): ar , de-ch , es , fo , fr-ca , hu , ja , mk , no , pt-br , sv , da , en-gb , et , fr , fr-ch , is , lt , nl , pl , ru , th , de , en-us , fi , fr-be , hr , it , lv , nl-be , pt , sl , tr --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-ids LIST - REPLACE locations with given ids --location-title VALUE - Set the current location context for the request --location-titles LIST --locations LIST --name VALUE - Compute resource name --new-name VALUE --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST - REPLACE organizations with given ids. --organization-title VALUE - Set the current organization context for the request --organization-titles LIST --organizations LIST --ovirt-quota VALUE - For RHEV only, ID or Name of quota to use --password VALUE - Password for RHEV, EC2, VMware, RHEL OpenStack Platform. Secret key for EC2 --project VALUE - Deprecated, project is automatically loaded from the JSON file. For GCE only --project-domain-id VALUE - For RHEL OpenStack Platform (v3) only --project-domain-name VALUE - For RHEL OpenStack Platform (v3) only --provider VALUE - Providers include Libvirt, Ovirt, EC2, Vmware, Openstack, AzureRm, GCE --public-key VALUE - For RHEV only --public-key-path FILE - Path to a file that contains oVirt public key (For oVirt only) --region VALUE - For AzureRm e.g. eastus and for EC2 only. Use us-gov-west-1 for EC2 GovCloud region --secret-key VALUE - Client Secret for AzureRm --server VALUE - For VMware --set-console-password BOOLEAN For Libvirt and VMware only --sub-id VALUE - Subscription ID for AzureRm --tenant VALUE - For RHEL OpenStack Platform and AzureRm only --url VALUE - URL for Libvirt, RHEV and RHEL OpenStack Platform --user VALUE - Username for RHEV, EC2, VMware, RHEL OpenStack Platform. Access Key for EC2. --zone VALUE - Zone, for GCE only -h , --help - Print help 3.15.17. compute-resource virtual-machine View and manage compute resource's virtual machines Usage Options -h , --help - Print help 3.15.17.1. compute-resource virtual-machine delete Delete a Virtual Machine Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Compute resource name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --vm-id VALUE -h , --help - Print help
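A hedged sketch of toggling a stray virtual machine's power state and then deleting it; the compute resource name and VM ID are hypothetical:
$ hammer compute-resource virtual-machine power --name "libvirt-cr" --vm-id "f7d9c1"
$ hammer compute-resource virtual-machine delete --name "libvirt-cr" --vm-id "f7d9c1"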
3.15.17.2. compute-resource virtual-machine info Show a virtual machine Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Compute resource name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --vm-id VALUE -h , --help - Print help Table 3.47. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x 3.15.17.3. compute-resource virtual-machine power Power a Virtual Machine Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Compute resource name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --vm-id VALUE -h , --help - Print help 3.15.18. compute-resource virtual-machines List available virtual machines for a compute resource Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Compute resource name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.48. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Path x x State x x 3.15.19. compute-resource vnic-profiles List available vnic profiles for a compute resource, for RHEV only Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Compute resource name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.49. Predefined field sets FIELDS ALL DEFAULT THIN Vnic profile id x x x Name x x x Network id x x 3.15.20. compute-resource zones List available zones for a compute resource Usage Options --fields LIST - Show specified fields or predefined field sets only.
(See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Compute resource name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.50. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x 3.16. config-report Browse and read reports Usage Options -h , --help - Print help 3.16.1. config-report delete Delete a report Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.16.2. config-report info Show a report Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.51. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Host x x Reported at x x Origin x x Report status/applied x x Report status/restarted x x Report status/failed x x Report status/restart failures x x Report status/skipped x x Report status/pending x x Report metrics/config retrieval x x Report metrics/exec x x Report metrics/file x x Report metrics/package x x Report metrics/service x x Report metrics/user x x Report metrics/yumrepo x x Report metrics/filebucket x x Report metrics/cron x x Report metrics/total x x Logs/resource x x Logs/message x x 3.16.3. config-report list List all reports Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --order VALUE - Sort and order by a searchable field, e.g. <field> DESC --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --search VALUE - Filter results -h , --help - Print help
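For example, a sketch of listing the most recent failed reports; the search expression is built from the documented search fields:
$ hammer config-report list --search "failed > 0" --order "reported DESC"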
Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Host x x Last report x x Origin x x Applied x x Restarted x x Failed x x Restart failures x x Skipped x x Pending x x Search / Order fields applied - integer eventful - Values: true, false failed - integer failed_restarts - integer host - string host_id - integer host_owner_id - integer hostgroup - string hostgroup_fullname - string hostgroup_title - string id - integer last_report - datetime location - string log - text organization - string origin - string pending - integer reported - datetime resource - text restarted - integer skipped - integer 3.17. content-credentials Manipulate content credentials on the server Usage Options -h , --help - Print help 3.17.1. content-credentials create Create a Content Credential Usage Options --content-type VALUE - Type of content: "cert", "gpg_key" --name VALUE - Name of the Content Credential --organization VALUE - Organization name to search by --organization-id NUMBER - Organization identifier --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --path FILE - Key file -h , --help - Print help 3.17.2. content-credentials delete Destroy a Content Credential Usage Options --id NUMBER - Content Credential ID --name VALUE - Name to search by --organization VALUE - Organization name to search by --organization-id NUMBER --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title -h , --help - Print help 3.17.3. content-credentials info Show a Content Credential Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id NUMBER - Content Credential numeric identifier --name VALUE - Name to search by --organization VALUE - Organization name to search by --organization-id NUMBER --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title -h , --help - Print help Table 3.53. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Organization x x Repositories/id x x Repositories/name x x Repositories/content type x x Repositories/product x x Content x x 3.17.4. content-credentials list List Content Credentials Usage Options --content-type VALUE - Type of content --fields LIST - Show specified fields or predefined field sets only. (See below) --full-result BOOLEAN - Whether or not to show all results --name VALUE - Name of the Content Credential --order VALUE - Sort field and order, eg. id DESC --organization VALUE - Organization name to search by --organization-id NUMBER - Organization identifier --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --page NUMBER - Page number, starting at 1 --per-page NUMBER - Number of results per page to return --search VALUE - Search string -h , --help - Print help Table 3.54. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Content type x x 3.17.5. content-credentials update Update a Content Credential Usage Options --content-type VALUE - Type of content: "cert", "gpg_key" --id NUMBER - Content Credential ID --name VALUE - Name of the Content Credential --new-name VALUE - Name of the Content Credential --organization VALUE - Organization name to search by --organization-id NUMBER --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --path FILE - Key file -h , --help - Print help 3.18. 
content-export Prepare content for export to a disconnected Katello Usage Options -h , --help - Print help
3.18.1. content-export complete Prepare content for a full export to a disconnected Katello Usage Options -h , --help - Print help
3.18.1.1. content-export complete library Performs a full export of the organization's library environment Usage Options --async - Do not wait for the task --chunk-size-gb NUMBER - Split the exported content into archives no greater than the specified size in gigabytes. --destination-server VALUE - Destination Server name --fail-on-missing-content - Fails if any of the repositories belonging to this organization are unexportable. --format ENUM - Export formats. Choose syncable if the exported content needs to be in a yum format. This option is only available for yum, file repositories. Choose importable if the importing server uses the same version and exported content needs to be one of yum, file, ansible_collection, docker repositories. Possible value(s): syncable , importable --organization VALUE - Organization name to search by --organization-id NUMBER - Organization identifier --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title -h , --help - Print help
3.18.1.2. content-export complete repository Performs a full export of a repository Usage Options --async - Do not wait for the task --chunk-size-gb NUMBER - Split the exported content into archives no greater than the specified size in gigabytes. --format ENUM - Export formats. Choose syncable if the exported content needs to be in a yum format. This option is only available for yum, file repositories. Choose importable if the importing server uses the same version and exported content needs to be one of yum, file, ansible_collection, docker repositories. Possible value(s): syncable , importable --id NUMBER - Repository identifier --name VALUE - Filter repositories by name. --organization VALUE - Organization name to search by --organization-id NUMBER - Organization ID --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --product VALUE - Product name to search by --product-id NUMBER - Product numeric identifier -h , --help - Print help
3.18.1.3. content-export complete version Performs a full export of a content view version Usage Options --async - Do not wait for the task --chunk-size-gb NUMBER - Split the exported content into archives no greater than the specified size in gigabytes. --content-view VALUE - Content view name to search by --content-view-id NUMBER - Content view numeric identifier --destination-server VALUE - Destination Server name --fail-on-missing-content - Fails if any of the repositories belonging to this version are unexportable. --format ENUM - Export formats. Choose syncable if the exported content needs to be in a yum format. This option is only available for yum, file repositories. Choose importable if the importing server uses the same version and exported content needs to be one of yum, file, ansible_collection, docker repositories. Possible value(s): syncable , importable --id NUMBER - Content view version identifier --lifecycle-environment VALUE - Lifecycle environment name to search by --lifecycle-environment-id NUMBER ID of the environment --organization VALUE - Organization name to search by --organization-id NUMBER - Organization ID --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --version VALUE - Filter versions by version number. -h , --help - Print help
3.18.2. content-export generate-listing Generates a listing file in each directory of a syncable export. This command only needs to be used if the export was performed asynchronously or if the listing files were lost. Assumes the syncable export directory is accessible on disk Usage Options --id VALUE - Generate listing files based on specified export history --task-id VALUE - Generate listing files for a syncable export task -h , --help - Print help
3.18.3. content-export generate-metadata Writes export metadata to disk for use by the importing Katello. This command only needs to be used if the export was performed asynchronously or if the metadata was lost Usage Options --id VALUE - Generate metadata based on specified export history --task-id VALUE - Generate metadata based on output of the specified export task -h , --help - Print help
3.18.4. content-export incremental Prepare content for an incremental export to a disconnected Katello Usage Options -h , --help - Print help
3.18.4.1. content-export incremental library Performs an incremental export of the organization's library environment Usage Options --async - Do not wait for the task --chunk-size-gb NUMBER - Split the exported content into archives no greater than the specified size in gigabytes. --destination-server VALUE - Destination Server name --fail-on-missing-content - Fails if any of the repositories belonging to this organization are unexportable. --format ENUM - Export formats. Choose syncable if the exported content needs to be in a yum format. This option is only available for yum, file repositories. Choose importable if the importing server uses the same version and exported content needs to be one of yum, file, ansible_collection, docker repositories. Possible value(s): syncable , importable --from-history-id NUMBER - Export history identifier used for incremental export. If not provided the most recent export history will be used. --organization VALUE - Organization name to search by --organization-id NUMBER - Organization identifier --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title -h , --help - Print help
3.18.4.2. content-export incremental repository Performs an incremental export of a repository Usage Options --async - Do not wait for the task --chunk-size-gb NUMBER - Split the exported content into archives no greater than the specified size in gigabytes. --format ENUM - Export formats. Choose syncable if the exported content needs to be in a yum format. This option is only available for yum, file repositories. Choose importable if the importing server uses the same version and exported content needs to be one of yum, file, ansible_collection, docker repositories. Possible value(s): syncable , importable --from-history-id NUMBER - Export history identifier used for incremental export. If not provided the most recent export history will be used. --id NUMBER - Repository identifier --name VALUE - Filter repositories by name. --organization VALUE - Organization name to search by --organization-id NUMBER - Organization ID --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --product VALUE - Product name to search by --product-id NUMBER - Product numeric identifier -h , --help - Print help
3.18.4.3. content-export incremental version Performs an incremental export of a content view version Usage Options --async - Do not wait for the task --chunk-size-gb NUMBER - Split the exported content into archives no greater than the specified size in gigabytes. --content-view VALUE - Content view name to search by --content-view-id NUMBER - Content view numeric identifier --destination-server VALUE - Destination Server name --fail-on-missing-content - Fails if any of the repositories belonging to this version are unexportable. --format ENUM - Export formats. Choose syncable if the exported content needs to be in a yum format. This option is only available for yum, file repositories. Choose importable if the importing server uses the same version and exported content needs to be one of yum, file, ansible_collection, docker repositories. Possible value(s): syncable , importable --from-history-id NUMBER - Export history identifier used for incremental export. If not provided the most recent export history will be used. --id NUMBER - Content view version identifier --lifecycle-environment VALUE - Lifecycle environment name to search by --lifecycle-environment-id NUMBER ID of the environment --organization VALUE - Organization name to search by --organization-id NUMBER - Organization ID --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --version VALUE - Filter versions by version number. -h , --help - Print help
3.18.5. content-export list View content view export histories Usage Options --content-view VALUE - Content view name to search by --content-view-id NUMBER - Content view identifier --content-view-version VALUE - Content view version number --content-view-version-id NUMBER Content view version identifier --destination-server VALUE - Destination Server name --fields LIST - Show specified fields or predefined field sets only. (See below) --full-result BOOLEAN - Whether or not to show all results --id NUMBER - Content view version export history identifier --order VALUE - Sort field and order, e.g. id DESC --organization VALUE - Organization name to search by --organization-id NUMBER - Organization identifier --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --page NUMBER - Page number, starting at 1 --per-page NUMBER - Number of results per page to return --search VALUE - Search string --type ENUM - Export Types Possible value(s): complete , incremental -h , --help - Print help Table 3.55. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Destination server x x Path x x Type x x Content view version x x Content view version id x x Created at x x Updated at x x Search / Order fields content_view_id - integer content_view_version_id - integer id - integer type - string
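For illustration, a full export of a content view version followed by a later incremental export might be run as shown below; the organization, content view name, and version are hypothetical placeholder values:

hammer content-export complete version --organization "ACME" --content-view "RHEL8-CV" --version "1.0" --chunk-size-gb 5
hammer content-export incremental version --organization "ACME" --content-view "RHEL8-CV" --version "1.0"

The resulting archive directory can then be transferred to the disconnected Katello and imported with the content-import commands described in the next section.
3.19. content-import Import content from an upstream archive. Usage Options -h , --help - Print help
3.19.1. content-import library Imports a content archive to an organization's library lifecycle environment Usage Options --async - Do not wait for the task --metadata-file VALUE - Location of the metadata.json file.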
This is not required if the metadata.json file is already in the archive directory. --organization VALUE - Organization name to search by --organization-id NUMBER - Organization identifier --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --path VALUE - Directory containing the exported Content View Version -h , --help - Print help 3.19.2. content-import list View content view import histories Usage Options --content-view VALUE - Content view name to search by --content-view-id NUMBER - Content view identifier --content-view-version VALUE - Content view version number --content-view-version-id NUMBER Content view version identifier --fields LIST - Show specified fields or predefined field sets only. (See below) --full-result BOOLEAN - Whether or not to show all results --id NUMBER - Content view version import history identifier --order VALUE - Sort field and order, eg. id DESC --organization VALUE - Organization name to search by --organization-id NUMBER - Organization identifier --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --page NUMBER - Page number, starting at 1 --per-page NUMBER - Number of results per page to return --search VALUE - Search string --type ENUM - Import Types Possible value(s): complete , incremental -h , --help - Print help Table 3.56. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Path x x Type x x Content view version x x Content view version id x x Created at x x Updated at x x Search / Order fields content_view_id - integer content_view_version_id - integer id - integer type - string 3.19.3. content-import repository Imports a repository Usage Options --async - Do not wait for the task --metadata-file VALUE - Location of the metadata.json file. This is not required if the metadata.json file is already in the archive directory. --organization VALUE - Organization name to search by --organization-id NUMBER - Organization identifier --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --path VALUE - Directory containing the exported Content View Version -h , --help - Print help 3.19.4. content-import version Imports a content archive to a content view version Usage Options --async - Do not wait for the task --metadata-file VALUE - Location of the metadata.json file. This is not required if the metadata.json file is already in the archive directory. --organization VALUE - Organization name to search by --organization-id NUMBER - Organization identifier --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --path VALUE - Directory containing the exported Content View Version -h , --help - Print help 3.20. content-units Manipulate content units Usage Options -h , --help - Print help 3.20.1. content-units info Show a content unit Usage Options --content-type VALUE - Possible values: --content-view VALUE - Content view name to search by --content-view-id NUMBER - Content view numeric identifier --content-view-version VALUE - Content view version number --content-view-version-id NUMBER Content view version identifier --fields LIST - Show specified fields or predefined field sets only. 
(See below) --id VALUE - A content unit identifier --name VALUE - Name to search by --organization VALUE - Organization name to search by --organization-id NUMBER - Organization identifier --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --product VALUE - Product name to search by --product-id NUMBER - Product numeric identifier --repository VALUE - Repository name to search by --repository-id NUMBER - Repository identifier -h , --help - Print help Table 3.57. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Version x x Filename x x 3.20.2. content-units list List content_units Usage Options --content-type VALUE - Possible values: --content-view VALUE - Content view name to search by --content-view-filter VALUE - Name to search by --content-view-filter-id NUMBER - Content view filter identifier --content-view-filter-rule VALUE - Name to search by --content-view-filter-rule-id NUMBER Content view filter rule identifier --content-view-id NUMBER - Content view numeric identifier --content-view-version VALUE - Content view version number --content-view-version-id NUMBER - Content view version identifier --fields LIST - Show specified fields or predefined field sets only. (See below) --full-result BOOLEAN - Whether or not to show all results --ids LIST - Ids to filter content by --include-filter-ids BOOLEAN - Includes associated content view filter ids in response --lifecycle-environment-id NUMBER - Environment identifier --order VALUE - Sort field and order, eg. id DESC --organization VALUE - Organization name to search by --organization-id NUMBER - Organization identifier --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --page NUMBER - Page number, starting at 1 --per-page NUMBER - Number of results per page to return --product VALUE - Product name to search by --product-id NUMBER - Product numeric identifier --repository VALUE - Repository name to search by --repository-id NUMBER - Repository identifier --search VALUE - Search string -h , --help - Print help Table 3.58. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Version x x Filename x x 3.21. content-view Manipulate content views Usage Options -h , --help - Print help 3.21.1. content-view add-repository Associate a resource Usage Options --id VALUE - Content view numeric identifier --name VALUE - Content view name to search by --organization VALUE - Organization name to search by --organization-id VALUE - Organization ID to search by --organization-label VALUE - Organization label to search by --product VALUE - Product name to search by --product-id NUMBER - Product numeric identifier --repository VALUE - Repository name to search by --repository-id NUMBER - Repository ID -h , --help - Print help 3.21.2. content-view add-version Add a content view version to a composite view Usage Options --content-view VALUE - Content view name to search by --content-view-id NUMBER - Content view id to search by --content-view-version VALUE - Content view version number --content-view-version-id NUMBER Content view version identifier --id VALUE - Content view numeric identifier --name VALUE - Content view name to search by --organization VALUE - Organization name to search by --organization-id VALUE - Organization ID to search by --organization-label VALUE - Organization label to search by -h , --help - Print help 3.21.3. 
content-view component View and manage components Usage Options -h , --help - Print help
3.21.3.1. content-view component add Add components to the content view Usage Options --component-content-view VALUE - Content View name of the component whose latest version is desired --component-content-view-id VALUE - Content View identifier of the component whose latest version is desired --component-content-view-version VALUE - Content View Version number of the component. Either use this or --component-content-view-version-id option --component-content-view-version-id VALUE Content View Version identifier of the component --composite-content-view VALUE - Name of the composite content view --composite-content-view-id NUMBER - Composite content view identifier --latest - Select the latest version of the component's content view --organization VALUE - Organization name to search by --organization-id VALUE - Organization ID to search by --organization-label VALUE - Organization label to search by -h , --help - Print help
3.21.3.2. content-view component list List components attached to this content view Usage Options --composite-content-view VALUE - Name of the composite content view --composite-content-view-id NUMBER Composite content view identifier --fields LIST - Show specified fields or predefined field sets only. (See below) --organization VALUE - Organization name to search by --organization-id VALUE - Organization ID to search by --organization-label VALUE - Organization label to search by -h , --help - Print help Table 3.59. Predefined field sets FIELDS ALL DEFAULT THIN Content view id x x Name x x Version x x Component id x x x Current version x x Version id x x
3.21.3.3. content-view component remove Remove components from the content view Usage Options --component-content-view-ids VALUE Array of component content view identifiers to remove. Comma separated list of values --component-content-views VALUE - Array of component content view names to remove. Comma separated list of values --component-ids LIST - Array of content view component IDs to remove. Identifier of the component association --composite-content-view VALUE - Name of the composite content view --composite-content-view-id NUMBER Composite content view identifier --organization VALUE - Organization name to search by --organization-id VALUE - Organization ID to search by --organization-label VALUE - Organization label to search by -h , --help - Print help
3.21.3.4. content-view component update Update a component associated with the content view Usage Options --component-content-view VALUE - Content View name of the component whose latest version is desired --component-content-view-id VALUE - Content View identifier of the component whose latest version is desired --component-content-view-version VALUE - Content View Version number of the component. Either use this or --component-content-view-version-id option --component-content-view-version-id VALUE Content View Version identifier of the component --composite-content-view VALUE - Name of the composite content view --composite-content-view-id NUMBER - Composite content view identifier --id NUMBER - Content view component ID. Identifier of the component association --latest - Select the latest version of the component's content view --organization VALUE - Organization name to search by --organization-id VALUE - Organization ID to search by --organization-label VALUE - Organization label to search by -h , --help - Print help
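As a hypothetical example, the following adds the latest version of a component content view named "RHEL8-Base" to a composite content view named "Composite-CV" (all names are placeholders):

hammer content-view component add --composite-content-view "Composite-CV" --component-content-view "RHEL8-Base" --latest --organization "ACME"

Because --latest is given, the composite tracks the newest published version of the component rather than a pinned version.
3.21.4.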
content-view copy Copy a content view Usage Options --id NUMBER - Content view numeric identifier --name VALUE - Content view name to search by --new-name VALUE - New content view name --organization VALUE - Organization name to search by --organization-id VALUE - Organization ID to search by --organization-label VALUE - Organization label to search by -h , --help - Print help 3.21.5. content-view create Create a content view Usage Options --auto-publish BOOLEAN - Enable/Disable auto publish of composite view --component-ids LIST - List of component content view version ids for composite views --composite - Create a composite content view --description VALUE - Description for the content view --import-only - Designate this Content View for importing from upstream servers only. --label VALUE - Content view label --name VALUE - Name of the content view --organization VALUE - Organization name to search by --organization-id NUMBER - Organization identifier --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --repository-ids LIST - List of repository ids --solve-dependencies BOOLEAN Solve RPM dependencies by default on Content View publish, defaults to false -h , --help - Print help 3.21.6. content-view delete Delete a content view Usage Options --async - Do not wait for the task --id NUMBER - Content view numeric identifier --name VALUE - Content view name to search by --organization VALUE - Organization name to search by --organization-id VALUE - Organization ID to search by --organization-label VALUE - Organization label to search by -h , --help - Print help 3.21.7. content-view filter View and manage filters Usage Options -h , --help - Print help 3.21.7.1. content-view filter add-repository Associate a resource Usage Options --content-view VALUE - Content view name to search by --content-view-id NUMBER - Content view numeric identifier --id VALUE - Filter identifier --name VALUE - Name to search by --organization VALUE - Organization name to search by --organization-id VALUE - Organization ID to search by --organization-label VALUE - Organization label to search by --product VALUE - Product name to search by --product-id NUMBER - Product numeric identifier --repository VALUE - Repository name to search by --repository-id NUMBER - Repository ID -h , --help - Print help 3.21.7.2. content-view filter create create a filter for a content view Usage Options --content-view VALUE - Content view name to search by --content-view-id NUMBER - Content view identifier --description VALUE - Description of the filter --inclusion BOOLEAN - Specifies if content should be included or excluded, default: inclusion=false --name VALUE - Name of the filter --organization VALUE - Organization name to search by --organization-id VALUE - Organization ID to search by --organization-label VALUE - Organization label to search by --original-module-streams BOOLEAN Add all module streams without errata to the included/excluded list. (module stream filter only) --original-packages BOOLEAN - Add all packages without errata to the included/excluded list. (package filter only) --product VALUE - Product name to search by --product-id NUMBER - Product numeric identifier --repositories LIST --repository-ids LIST - List of repository ids --type VALUE - Type of filter (e.g. deb, rpm, package_group, erratum, erratum_id, erratum_date, docker, modulemd) -h , --help - Print help 3.21.7.3. 
content-view filter delete delete a filter Usage Options --content-view VALUE - Content view name to search by --content-view-id NUMBER - Content view identifier --id NUMBER - Filter identifier --name VALUE - Name to search by --organization VALUE - Organization name to search by --organization-id VALUE - Organization ID to search by --organization-label VALUE - Organization label to search by -h , --help - Print help 3.21.7.4. content-view filter info show filter info Usage Options --content-view VALUE - Content view name to search by --content-view-id NUMBER - Content view identifier --fields LIST - Show specified fields or predefined field sets only. (See below) --id NUMBER - Filter identifier --name VALUE - Name to search by --organization VALUE - Organization name to search by --organization-id VALUE - Organization ID to search by --organization-label VALUE - Organization label to search by -h , --help - Print help Table 3.60. Predefined field sets FIELDS ALL DEFAULT THIN Filter id x x x Name x x x Type x x Inclusion x x Description x x Repositories/id x x Repositories/name x x Repositories/label x x Rules/id x x Rules/name x x Rules/version x x Rules/minimum version x x Rules/maximum version x x Rules/errata id x x Rules/start date x x Rules/end date x x Rules/types x x Rules/created x x Rules/updated x x 3.21.7.5. content-view filter list list filters Usage Options --content-view VALUE - Content view name to search by --content-view-id NUMBER - Content view identifier --fields LIST - Show specified fields or predefined field sets only. (See below) --full-result BOOLEAN - Whether or not to show all results --name VALUE - Filter content view filters by name --order VALUE - Sort field and order, eg. id DESC --organization VALUE - Organization name to search by --organization-id VALUE - Organization ID to search by --organization-label VALUE - Organization label to search by --page NUMBER - Page number, starting at 1 --per-page NUMBER - Number of results per page to return --search VALUE - Search string --types LIST - Types of filters -h , --help - Print help Table 3.61. Predefined field sets FIELDS ALL DEFAULT THIN Filter id x x x Name x x x Description x x Type x x Inclusion x x Search / Order fields content_type - Values: rpm, deb, package_group, erratum, docker, modulemd inclusion_type - Values: include, exclude name - string 3.21.7.6. content-view filter remove-repository Disassociate a resource Usage Options --content-view VALUE - Content view name to search by --content-view-id NUMBER - Content view numeric identifier --id VALUE - Filter identifier --name VALUE - Name to search by --organization VALUE - Organization name to search by --organization-id VALUE - Organization ID to search by --organization-label VALUE - Organization label to search by --product VALUE - Product name to search by --product-id NUMBER - Product numeric identifier --repository VALUE - Repository name to search by --repository-id NUMBER - Repository ID -h , --help - Print help 3.21.7.7. content-view filter rule View and manage filter rules Usage Options -h , --help - Print help 3.21.7.7.1. content-view filter rule create Create a filter rule. The parameters included should be based upon the filter type. 
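For example, assuming a content view "RHEL8-CV" with an existing erratum filter named "security-only" (both hypothetical), a rule that includes only security errata issued after a given date could be created as shown here; the full option list follows the example:

hammer content-view filter rule create --content-view "RHEL8-CV" --content-view-filter "security-only" --types security --start-date "2023-01-01" --organization "ACME"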
Usage Options --architecture VALUE - Package: architecture --content-view VALUE - Content view name to search by --content-view-filter VALUE - Name to search by --content-view-filter-id NUMBER Filter identifier --content-view-id NUMBER --date-type VALUE - Erratum: search using the Issued On or Updated On column of the errata. Values are issued / updated --end-date VALUE - Erratum: end date (YYYY-MM-DD) --errata-id VALUE - Erratum: id --errata-ids LIST - Erratum: IDs or a select all object --max-version VALUE - Package: maximum version --min-version VALUE - Package: minimum version --module-stream-ids LIST - Module stream ids --name LIST - Deb, package, package group, or docker tag names --names VALUE - Package and package group names --organization VALUE - Organization name to search by --organization-id VALUE - Organization ID to search by --organization-label VALUE - Organization label to search by --start-date VALUE - Erratum: start date (YYYY-MM-DD) --types LIST - Erratum: types (enhancement, bugfix, security) --uuid VALUE - Package group: uuid --version VALUE - Package: version -h , --help - Print help 3.21.7.7.2. content-view filter rule delete Delete a filter rule Usage Options --content-view VALUE - Content view name to search by --content-view-filter VALUE - Name to search by --content-view-filter-id NUMBER Filter identifier --content-view-id NUMBER --id NUMBER - Rule identifier --name VALUE - Name to search by --organization VALUE - Organization name to search by --organization-id VALUE - Organization ID to search by --organization-label VALUE - Organization label to search by -h , --help - Print help 3.21.7.7.3. content-view filter rule info Show filter rule info Usage Options --content-view VALUE - Content view name to search by --content-view-filter VALUE - Name to search by --content-view-filter-id NUMBER Filter identifier --content-view-id NUMBER --fields LIST - Show specified fields or predefined field sets only. (See below) --id NUMBER - Rule identifier --name VALUE - Name to search by --organization VALUE - Organization name to search by --organization-id VALUE - Organization ID to search by --organization-label VALUE - Organization label to search by -h , --help - Print help Table 3.62. Predefined field sets FIELDS ALL DEFAULT THIN Rule id x x x Filter id x x Name x x x Version x x Minimum version x x Maximum version x x Architecture x x Errata id x x Start date x x End date x x Date type x x Types x x Created x x Updated x x 3.21.7.7.4. content-view filter rule list List filter rules Usage Options --content-view VALUE - Content view name to search by --content-view-filter VALUE - Name to search by --content-view-filter-id NUMBER Filter identifier --content-view-id NUMBER --errata-id VALUE - Errata_id of the content view filter rule --fields LIST - Show specified fields or predefined field sets only. (See below) --full-result BOOLEAN - Whether or not to show all results --name VALUE - Name of the content view filter rule --order VALUE - Sort field and order, eg. id DESC --organization VALUE - Organization name to search by --organization-id VALUE - Organization ID to search by --organization-label VALUE - Organization label to search by --page NUMBER - Page number, starting at 1 --per-page NUMBER - Number of results per page to return --search VALUE - Search string -h , --help - Print help Table 3.63. 
Predefined field sets FIELDS ALL DEFAULT THIN Rule id x x x Filter id x x Name x x x Version x x Minimum version x x Maximum version x x Architecture x x Errata id x x Start date x x End date x x 3.21.7.7.5. content-view filter rule update Update a filter rule. The parameters included should be based upon the filter type. Usage Options --architecture VALUE - Package: architecture --content-view VALUE - Content view name to search by --content-view-filter VALUE - Name to search by --content-view-filter-id NUMBER Filter identifier --content-view-id NUMBER --end-date VALUE - Erratum: end date (YYYY-MM-DD) --errata-id VALUE - Erratum: id --id NUMBER - Rule identifier --max-version VALUE - Package: maximum version --min-version VALUE - Package: minimum version --name VALUE - Package, package group, or docker tag: name --new-name VALUE - Package, package group, or docker tag: name --organization VALUE - Organization name to search by --organization-id VALUE - Organization ID to search by --organization-label VALUE - Organization label to search by --start-date VALUE - Erratum: start date (YYYY-MM-DD) --types LIST - Erratum: types (enhancement, bugfix, security) --version VALUE - Package: version -h , --help - Print help 3.21.7.8. content-view filter update update a filter Usage Options --content-view VALUE - Content view name to search by --content-view-id NUMBER - Content view identifier --description VALUE - Description of the filter --id NUMBER - Filter identifier --inclusion BOOLEAN - Specifies if content should be included or excluded, default: inclusion=false --name VALUE - New name for the filter --new-name VALUE - New name for the filter --organization VALUE - Organization name to search by --organization-id VALUE - Organization ID to search by --organization-label VALUE - Organization label to search by --original-module-streams BOOLEAN Add all module streams without errata to the included/excluded list. (module stream filter only) --original-packages BOOLEAN - Add all packages without errata to the included/excluded list. (package filter only) --repositories LIST --repository-ids LIST - List of repository ids -h , --help - Print help 3.21.8. content-view info Show a content view Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id NUMBER - Content view numeric identifier --name VALUE - Content view name to search by --organization VALUE - Organization name to search by --organization-id VALUE - Organization ID to search by --organization-label VALUE - Organization label to search by -h , --help - Print help Table 3.64. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Label x x Composite x x Description x x Content host count x x Solve dependencies x x Organization x x Yum repositories/id x x Yum repositories/name x x Yum repositories/label x x Container image repositories/id x x Container image repositories/name x x Container image repositories/label x x Ostree repositories/id x x Ostree repositories/name x x Ostree repositories/label x x Lifecycle environments/id x x Lifecycle environments/name x x Versions/id x x Versions/version x x Versions/published x x Components/id x x Components/name x x Activation keys/ x x 3.21.9. 
content-view list List content views Usage Options --composite BOOLEAN - Filter only composite content views --environment VALUE - Lifecycle environment name to search by (--environment is deprecated: Use --lifecycle-environment instead) --environment-id NUMBER - (--environment-id is deprecated: Use --lifecycle-environment-id instead) --fields LIST - Show specified fields or predefined field sets only. (See below) --full-result BOOLEAN - Whether or not to show all results --include-generated BOOLEAN - Include content views generated by imports/exports. Defaults to false --label VALUE - Label of the content view --lifecycle-environment VALUE - Lifecycle environment name to search by --lifecycle-environment-id NUMBER Environment identifier --name VALUE - Name of the content view --noncomposite BOOLEAN - Filter out composite content views --nondefault BOOLEAN - Filter out default content views --order VALUE - Sort field and order, e.g. id DESC --organization VALUE - Organization name to search by --organization-id NUMBER - Organization identifier --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --page NUMBER - Page number, starting at 1 --per-page NUMBER - Number of results per page to return --search VALUE - Search string --without LIST - Do not include this array of content views -h , --help - Print help Table 3.65. Predefined field sets FIELDS ALL DEFAULT THIN Content view id x x x Name x x x Label x x Composite x x Last published x x Repository ids x x Search / Order fields composite - boolean content_views - string default - boolean generated_for - integer label - string name - string organization_id - integer
3.21.10. content-view publish Publish a content view Usage Options --async - Do not wait for the task --description VALUE - Description for the new published content view version --id NUMBER - Content view identifier --is-force-promote BOOLEAN - Force content view promotion and bypass lifecycle environment restriction --lifecycle-environment-ids LIST Identifiers for Lifecycle Environment --lifecycle-environments LIST - Names for Lifecycle Environment --major NUMBER - Override the major version number --minor NUMBER - Override the minor version number --name VALUE - Content view name to search by --organization VALUE - Organization name to search by --organization-id VALUE - Organization ID to search by --organization-label VALUE - Organization label to search by --publish-only-if-needed BOOLEAN Check audited changes and proceed only if content or filters have changed since last publish --repos-units SCHEMA - Specify the list of units in each repo -h , --help - Print help Following parameters accept format defined by its schema (bold are required; <> contains acceptable type; [] contains acceptable value): --repos-units - " label =<string>, rpm_filenames =<array>, ... "
3.21.11. content-view purge Delete old versions of a content view Usage Options --async - Do not wait for the task --count NUMBER - Count of unused versions to keep Default: 3 --id VALUE - Content View numeric identifier --name VALUE - Content View name --organization VALUE - Organization name to search by --organization-id VALUE - Organization ID to search by --organization-label VALUE - Organization label to search by -h , --help - Print help
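To illustrate the publish workflow in Section 3.21.10, a hypothetical invocation that publishes a new version of a content view and promotes it directly to a lifecycle environment might look like this (all names are placeholders):

hammer content-view publish --name "RHEL8-CV" --organization "ACME" --lifecycle-environments "Development" --description "Monthly update"

The new version can afterwards be promoted further with the content-view version promote command described later in this chapter.
3.21.12.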
content-view remove Remove versions and/or environments from a content view and reassign systems and keys Usage Options --async - Do not wait for the task --content-view-version-ids LIST Content view version identifiers to be deleted --content-view-versions LIST --destroy-content-view BOOLEAN - Delete the content view with all the versions and environments --environment-ids LIST - (--environment-ids is deprecated: Use --lifecycle-environment-ids instead) --environments LIST - (--environments is deprecated: Use --lifecycle-environments instead) --id NUMBER - Content view numeric identifier --key-content-view-id NUMBER - Content view to reassign orphaned activation keys to --key-environment-id NUMBER - Environment to reassign orphaned activation keys to --lifecycle-environment-ids LIST Environment numeric identifiers to be removed --name VALUE - Content view name to search by --organization VALUE - Organization name to search by --organization-id VALUE - Organization ID to search by --organization-label VALUE - Organization label to search by --system-content-view-id NUMBER Content view to reassign orphaned systems to --system-environment-id NUMBER - Environment to reassign orphaned systems to -h , --help - Print help 3.21.13. content-view remove-from-environment Remove a content view from an environment Usage Options --async - Do not wait for the task --environment VALUE - Lifecycle environment name to search by (--environment is deprecated: Use --lifecycle-environment instead) --environment-id NUMBER - (--environment-id is deprecated: Use --lifecycle-environment-id instead) --id NUMBER - Content view numeric identifier --lifecycle-environment VALUE - Lifecycle environment name to search by --lifecycle-environment-id NUMBER Environment numeric identifier --name VALUE - Content view name to search by --organization VALUE - Organization name to search by --organization-id VALUE - Organization ID to search by --organization-label VALUE - Organization label to search by -h , --help - Print help 3.21.14. content-view remove-repository Disassociate a resource Usage Options --id VALUE - Content view numeric identifier --name VALUE - Content view name to search by --organization VALUE - Organization name to search by --organization-id VALUE - Organization ID to search by --organization-label VALUE - Organization label to search by --product VALUE - Product name to search by --product-id NUMBER - Product numeric identifier --repository VALUE - Repository name to search by --repository-id NUMBER - Repository ID -h , --help - Print help 3.21.15. content-view remove-version Remove a content view version from a composite view Usage Options --content-view VALUE - Content view name to search by --content-view-id NUMBER - Content view numeric identifier --content-view-version VALUE - Content view version number --content-view-version-id NUMBER Content view version identifier --id VALUE - Content view numeric identifier --name VALUE - Content view name to search by --organization VALUE - Organization name to search by --organization-id VALUE - Organization ID to search by --organization-label VALUE - Organization label to search by -h , --help - Print help 3.21.16. 
content-view update Update a content view Usage Options --auto-publish BOOLEAN - Enable/Disable auto publish of composite view --component-ids LIST - List of component content view version ids for composite views --description VALUE - Description for the content view --id NUMBER - Content view identifier --import-only BOOLEAN - Designate this Content View for importing from upstream servers only. Defaults to false --name VALUE - New name for the content view --new-name VALUE - New name for the content view --organization VALUE - Organization name to search by --organization-id VALUE - Organization ID to search by --organization-label VALUE - Organization label to search by --repository-ids LIST - List of repository ids --solve-dependencies BOOLEAN Solve RPM dependencies by default on Content View publish, defaults to false -h , --help - Print help 3.21.17. content-view version View and manage content view versions Usage Options -h , --help - Print help 3.21.17.1. content-view version delete Remove content view version Usage Options --async - Do not wait for the task --content-view VALUE - Content view name to search by --content-view-id NUMBER - Content view numeric identifier --environment VALUE - Lifecycle environment name to search by (--environment is deprecated: Use --lifecycle-environment instead) --environment-id NUMBER - (--environment-id is deprecated: Use --lifecycle-environment-id instead) --id NUMBER - Content view version identifier --lifecycle-environment VALUE - Lifecycle environment name to search by --lifecycle-environment-id NUMBER ID of the environment --organization VALUE - Organization name to search by --organization-id NUMBER - Organization ID --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --version VALUE - Content view version number -h , --help - Print help 3.21.17.2. content-view version incremental-update Perform an Incremental Update on one or more Content View Versions Usage Options --async - Do not wait for the task --content-view-version VALUE - Content view version number --content-view-version-id NUMBER - Content View Version Ids to perform an incremental update on. May contain composites as well as one or more components to update. --deb-ids LIST - Deb Package ids to copy into the new versions --debs LIST --description VALUE - The description for the new generated Content View Versions --errata-ids LIST - Errata ids to copy into the new versions --host-ids LIST - IDs of hosts to update --lifecycle-environment-ids LIST - List of lifecycle environment IDs to update the content view version in --lifecycle-environments LIST - List of lifecycle environment names to update the content view version in --organization VALUE - Organization name for resolving lifecycle environment names --organization-id VALUE - Organization id for resolving lifecycle environment names --package-ids LIST - Package ids to copy into the new versions --packages LIST --propagate-all-composites BOOLEAN If true, will publish a new composite version using any specified content_view_version_id that has been promoted to a lifecycle environment --resolve-dependencies BOOLEAN - If true, when adding the specified errata or packages, any needed dependencies will be copied as well. Defaults to true --update-all-hosts BOOLEAN - Update all editable and applicable hosts within the specified Content View and Lifecycle Environments -h , --help - Print help 3.21.17.3. 
content-view version info Show content view version Usage Options --content-view VALUE - Content view name to search by --content-view-id NUMBER - Content view numeric identifier --environment VALUE - Lifecycle environment name to search by (--environment is deprecated: Use --lifecycle-environment instead) --environment-id NUMBER - (--environment-id is deprecated: Use --lifecycle-environment-id instead) --fields LIST - Show specified fields or predefined field sets only. (See below) --id NUMBER - Content view version identifier --include-applied-filters BOOLEAN Whether or not to return filters applied to the content view version --lifecycle-environment VALUE - Lifecycle environment name to search by --lifecycle-environment-id NUMBER ID of the environment --organization VALUE - Organization name to search by --organization-id NUMBER - Organization ID --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --version VALUE - Content view version number -h , --help - Print help Table 3.66. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x Version x x x Description x x Content view id x x Content view name x x Content view label x x Lifecycle environments/id x x Lifecycle environments/name x x Lifecycle environments/label x x Repositories/id x x Repositories/name x x Repositories/label x x Has applied filters x x Applied filters/id x x Applied filters/name x x Applied filters/type x x Applied filters/inclusion x x Applied filters/original packages x x Applied filters/original module streams x x Applied filters/rules/id x x Applied filters/rules/name x x Applied filters/rules/uuid x x Applied filters/rules/module stream id x x Applied filters/rules/types/ x x Applied filters/rules/architecture x x Applied filters/rules/content view filter id x x Applied filters/rules/errata id x x Applied filters/rules/date type x x Applied filters/rules/start date x x Applied filters/rules/end date x x Dependency solving x x 3.21.17.4. content-view version list List content view versions Usage Options --composite-version-id NUMBER - Filter versions that are components in the specified composite version --content-view VALUE - Content view name to search by --content-view-id NUMBER - Content view identifier --environment VALUE - Lifecycle environment name to search by (--environment is deprecated: Use --lifecycle-environment instead) --environment-id NUMBER - (--environment-id is deprecated: Use --lifecycle-environment-id instead) --fields LIST - Show specified fields or predefined field sets only. (See below) --file-id NUMBER - Filter content view versions that contain the file --full-result BOOLEAN - Whether or not to show all results --include-applied-filters BOOLEAN Whether or not to return filters applied to the content view version --lifecycle-environment VALUE - Lifecycle environment name to search by --lifecycle-environment-id NUMBER Filter versions by environment --nondefault BOOLEAN - Filter out default content views --order VALUE - Sort field and order, eg. 
id DESC --organization VALUE - Organization name to search by --organization-id NUMBER - Organization identifier --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --page NUMBER - Page number, starting at 1 --per-page NUMBER - Number of results per page to return --search VALUE - Search string --triggered-by-id NUMBER - Filter composite versions whose publish was triggered by the specified component version --version VALUE - Filter versions by version number -h , --help - Print help Table 3.67. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x Version x x x Description x x Lifecycle environments x x Search / Order fields content_view_id - integer repository - string version - string 3.21.17.5. content-view version promote Promote a content view version Usage Options --async - Do not wait for the task --content-view VALUE - Content view name to search by --content-view-id NUMBER - Content view numeric identifier --description VALUE - The description for the content view version promotion --force - Force content view promotion and bypass lifecycle environment restriction --from-lifecycle-environment VALUE - Environment name from where to promote its version from (if version is unknown) --from-lifecycle-environment-id VALUE Id of the environment from where to promote its version from (if version is unknown) --id NUMBER - Content view version identifier --organization VALUE - Organization name to search by --organization-id NUMBER - Organization ID --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --to-lifecycle-environment VALUE - Name of the target environment --to-lifecycle-environment-id VALUE - Id of the target environment --version VALUE - Content view version number -h , --help - Print help 3.21.17.6. content-view version republish-repositories Forces a republish of the version's repositories' metadata Usage Options --async - Do not wait for the task --content-view VALUE - Content view name to search by --content-view-id NUMBER - Content view numeric identifier --force BOOLEAN - Force metadata regeneration to proceed. Dangerous operation when version has repositories with the Complete Mirroring mirroring policy --id NUMBER - Content view version identifier --organization VALUE - Organization name to search by --organization-id NUMBER - Organization ID --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --version VALUE - Content view version number -h , --help - Print help 3.21.17.7. 
content-view version update Update a content view version Usage Options --content-view VALUE - Content view name to search by --content-view-id NUMBER - Content view numeric identifier --description VALUE - The description for the content view version --environment VALUE - Lifecycle environment name to search by (--environment is deprecated: Use --lifecycle-environment instead) --environment-id NUMBER - (--environment-id is deprecated: Use --lifecycle-environment-id instead) --id NUMBER - Content view version identifier --lifecycle-environment VALUE - Lifecycle environment name to search by --lifecycle-environment-id NUMBER ID of the environment --new-version VALUE --organization VALUE - Organization name to search by --organization-id NUMBER - Organization ID --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --version VALUE - Content view version number -h , --help - Print help 3.22. deb-package Manipulate deb packages Usage Options -h , --help - Print help 3.22.1. deb-package info Show a deb package Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE - A deb package identifier --name VALUE - Name to search by --organization VALUE - Organization name to search by --organization-id NUMBER - Organization identifier --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --repository VALUE - Repository name to search by --repository-id NUMBER - Repository identifier -h , --help - Print help Table 3.68. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Pulp id x x Uuid x x Name x x x Version x x Checksum x x Architecture x x Nav x x Nva x x Filename x x Available host count x x Applicable host count x x Description x x 3.22.2. deb-package list List deb packages Usage Options --available-for VALUE - Return deb packages that can be added to the specified object. Only the value content_view_version is supported. --content-view VALUE - Content view name to search by --content-view-filter VALUE - Name to search by --content-view-filter-id NUMBER - Content View Filter identifier --content-view-id NUMBER - Content view numeric identifier --content-view-version VALUE - Content view version number --content-view-version-id NUMBER - Content View Version identifier --environment VALUE - Lifecycle environment name to search by (--environment is deprecated: Use --lifecycle-environment instead) --environment-id NUMBER - (--environment-id is deprecated: Use --lifecycle-environment-id instead) --fields LIST - Show specified fields or predefined field sets only. (See below) --full-result BOOLEAN - Whether or not to show all results --host VALUE - Host name --host-id NUMBER - Host id to list applicable deb packages for --ids LIST - Deb package identifiers to filter content by --lifecycle-environment-id NUMBER - Environment identifier --order VALUE - Sort field and order, eg. 
id DESC --organization VALUE - Organization name to search by --organization-id NUMBER - Organization identifier --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --packages-restrict-applicable BOOLEAN Return deb packages that are applicable to one or more hosts (defaults to true if host_id is specified) --packages-restrict-latest BOOLEAN - Return only the latest version of each package --packages-restrict-upgradable BOOLEAN Return deb packages that are upgradable on one or more hosts --page NUMBER - Page number, starting at 1 --per-page NUMBER - Number of results per page to return --product VALUE - Product name to search by --product-id NUMBER - Product numeric identifier --repository VALUE - Repository name to search by --repository-id NUMBER - Repository identifier --search VALUE - Search string -h , --help - Print help Table 3.69. Predefined field sets FIELDS ALL DEFAULT Id x x Filename x x
3.23. defaults Defaults management Usage Options -h , --help - Print help
3.23.1. defaults add Add a default parameter to config Usage Options --param-name VALUE - The name of the default option (e.g. organization_id) --param-value VALUE - The value for the default option --provider VALUE - The name of the provider providing the value. For a list of available providers, see hammer defaults providers -h , --help - Print help
3.23.2. defaults delete Delete a default param Usage Options --param-name VALUE - The name of the default option -h , --help - Print help
3.23.3. defaults list List all the default parameters Usage Options -h , --help - Print help
3.23.4. defaults providers List all the providers Usage Options -h , --help - Print help
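As a brief example of the defaults workflow above, the following stores a default organization so that subsequent hammer commands can omit --organization-id, then verifies the stored parameters; the ID value 1 is hypothetical:

hammer defaults add --param-name organization_id --param-value 1
hammer defaults list

3.24. discovery Manipulate discovered hosts. Usage Options -h , --help - Print help
3.24.1. discovery auto-provision Auto provision a host Usage Options --all - Auto provision all discovered hosts --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help
3.24.2. discovery delete Delete a discovered host Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help
3.24.3. discovery facts List all fact values Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --order VALUE - Sort and order by a searchable field, e.g.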
<field> DESC --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --search VALUE - Filter results -h , --help - Print help Table 3.70. Predefined field sets FIELDS ALL DEFAULT Fact x x Value x x Search / Order fields fact - string fact_short_name - string facts - string host - string host.hostgroup - string host_id - integer location - string location_id - integer name - string organization - string organization_id - integer origin - string reported_at - datetime short_name - string type - string value - string 3.24.4. discovery info Show a discovered host Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.71. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Mac x x Cpus x x Memory x x Disk count x x Disks size x x Subnet x x Last report x x Ip x x Model x x Organization x x Location x x 3.24.5. discovery list List all discovered hosts Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --order VALUE - Sort results --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --page VALUE - Paginate results --per-page VALUE - Number of entries per request --search VALUE - Filter results -h , --help - Print help Table 3.72. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Mac x x Cpus x x Memory x x Disk count x x Disks size x x Subnet x x Last report x x 3.24.6. discovery provision Provision a discovered host Usage Options --architecture VALUE - Architecture name --architecture-id NUMBER - Required if host is managed and value is not inherited from host group --ask-root-password BOOLEAN --build BOOLEAN --capabilities VALUE --domain VALUE - Domain name --domain-id NUMBER - Required if host is managed and value is not inherited from host group --enabled BOOLEAN --hostgroup VALUE - Hostgroup name --hostgroup-id NUMBER --hostgroup-title VALUE - Hostgroup title --id VALUE --image VALUE - Name to search by --image-id NUMBER --interface KEY_VALUE_LIST - Interface parameters Can be specified multiple times. 
--ip VALUE - Not required if using a subnet with DHCP Capsule --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --mac VALUE - Not required if it`s a virtual machine --managed BOOLEAN --medium VALUE - Medium name --medium-id VALUE - Required if not imaged based provisioning and host is managed and value is not inherited from host group --model VALUE - Model name --model-id NUMBER --name VALUE --new-name VALUE --operatingsystem VALUE - Operating system title --operatingsystem-id NUMBER - Required if host is managed and value is not inherited from host group --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --overwrite BOOLEAN --owner-id NUMBER --owner-type ENUM - Host`s owner type Possible value(s): User , Usergroup --parameters KEY_VALUE_LIST - Host parameters --partition-table VALUE - Partition table name --partition-table-id NUMBER --progress-report-id VALUE - UUID to track orchestration tasks status, GET /api/orchestration/:UUID/tasks --provision-method ENUM - Possible value(s): build , image --pxe-loader ENUM - DHCP filename option (Grub2 or PXELinux by default) Possible value(s): None , PXELinux BIOS , PXELinux UEFI , Grub UEFI , Grub2 BIOS , Grub2 ELF , Grub2 UEFI , Grub2 UEFI SecureBoot , Grub2 UEFI HTTP , Grub2 UEFI HTTPS , Grub2 UEFI HTTPS SecureBoot , iPXE Embedded , iPXE UEFI HTTP , iPXE Chain BIOS , iPXE Chain UEFI --root-password VALUE --sp-subnet-id NUMBER --subnet VALUE - Subnet name --subnet-id NUMBER - Required if host is managed and value is not inherited from host group -h , --help - Print help 3.24.7. discovery reboot Reboot a host Usage Options --all - Reboot all discovered hosts --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.24.8. discovery refresh-facts Refresh the facts of a host Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.25. discovery-rule Manipulate discovered rules. Usage Options -h , --help - Print help 3.25.1. 
discovery-rule create Create a discovery rule Usage Options --enabled BOOLEAN - Flag is used for temporary shutdown of rules --hostgroup VALUE - Hostgroup name --hostgroup-id NUMBER - The hostgroup that is used to auto provision a host --hostgroup-title VALUE - Hostgroup title --hostname VALUE - Defines a pattern to assign human-readable hostnames to the matching hosts --hosts-limit VALUE - Enables limiting the maximum number of provisioned hosts per rule --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-ids LIST - Location ID for provisioned hosts --location-title VALUE - Set the current location context for the request --location-titles LIST --locations LIST --name VALUE - Represents rule name shown to the users --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST - Organization ID for provisioned hosts --organization-title VALUE - Set the current organization context for the request --organization-titles LIST --organizations LIST --priority NUMBER - Puts the rules in order, low numbers go first. Must be greater than zero --search VALUE - Query to match discovered hosts for the particular rule -h , --help - Print help
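For example, the following invocation (all names and values here are illustrative) creates a rule that auto-provisions discovered hosts reporting more than one CPU into a hypothetical host group:

hammer discovery-rule create --name "multi-cpu" \
  --search "cpu_count > 1" --hostgroup "Base" \
  --priority 5 --hosts-limit 10 --enabled true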
3.25.2. discovery-rule delete Delete a rule Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.25.3. discovery-rule info Show a discovery rule Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.73. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Priority x x Search x x Host group x x Hosts limit x x Enabled x x Hostname template x x Hosts/ x x Locations/ x x Organizations/ x x 3.25.4. discovery-rule list List all discovery rules Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --order VALUE - Sort results --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --page VALUE - Paginate results --per-page VALUE - Number of entries per request --search VALUE - Filter results -h , --help - Print help Table 3.74. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Priority x x Search x x Host group x x Hosts limit x x Enabled x x 3.25.5. discovery-rule update Update a rule Usage Options --enabled BOOLEAN - Flag is used for temporary shutdown of rules --hostgroup VALUE - Hostgroup name --hostgroup-id NUMBER - The hostgroup that is used to auto provision a host --hostgroup-title VALUE - Hostgroup title --hostname VALUE - Defines a pattern to assign human-readable hostnames to the matching hosts --hosts-limit VALUE - Enables limiting the maximum number of provisioned hosts per rule --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-ids LIST - Location ID for provisioned hosts --location-title VALUE - Set the current location context for the request --location-titles LIST --locations LIST --name VALUE - Represents rule name shown to the users --new-name VALUE - Represents rule name shown to the users --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST - Organization ID for provisioned hosts --organization-title VALUE - Set the current organization context for the request --organization-titles LIST --organizations LIST --priority NUMBER - Puts the rules in order, low numbers go first. Must be greater than zero --search VALUE - Query to match discovered hosts for the particular rule -h , --help - Print help 3.26. docker Manipulate docker content Usage Options -h , --help - Print help 3.26.1. docker manifest Manage docker manifests Usage Options -h , --help - Print help 3.26.1.1. docker manifest info Show a docker manifest Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE - A docker manifest identifier --name VALUE - Name to search by --organization VALUE - Organization name to search by --organization-id NUMBER - Organization identifier --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --repository VALUE - Repository name to search by --repository-id NUMBER - Repository identifier -h , --help - Print help Table 3.75. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Schema version x x Digest x x Downloaded x x Tags/name x x 3.26.1.2.
docker manifest list List docker_manifests Usage Options --content-view VALUE - Content view name to search by --content-view-filter VALUE - Name to search by --content-view-filter-id NUMBER - Content view filter identifier --content-view-filter-rule VALUE - Name to search by --content-view-filter-rule-id NUMBER Content view filter rule identifier --content-view-id NUMBER - Content view numeric identifier --content-view-version VALUE - Content view version number --content-view-version-id NUMBER - Content view version identifier --environment VALUE - Lifecycle environment name to search by (--environment is deprecated: Use --lifecycle-environment instead) --environment-id NUMBER - (--environment-id is deprecated: Use --lifecycle-environment-id instead) --fields LIST - Show specified fields or predefined field sets only. (See below) --full-result BOOLEAN - Whether or not to show all results --ids LIST - Ids to filter content by --include-filter-ids BOOLEAN - Includes associated content view filter ids in response --lifecycle-environment VALUE - Lifecycle environment name to search by --lifecycle-environment-id NUMBER - Environment identifier --order VALUE - Sort field and order, eg. id DESC --organization VALUE - Organization name to search by --organization-id NUMBER - Organization identifier --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --page NUMBER - Page number, starting at 1 --per-page NUMBER - Number of results per page to return --product VALUE - Product name to search by --product-id NUMBER - Product numeric identifier --repository VALUE - Repository name to search by --repository-id NUMBER - Repository identifier --search VALUE - Search string -h , --help - Print help Table 3.76. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Schema version x x Digest x x Downloaded x x Tags x x 3.26.2. docker tag Manage docker tags Usage Options -h , --help - Print help 3.26.2.1. docker tag info Show a docker tag Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE - A docker tag identifier --name VALUE - Name to search by --organization VALUE - Organization name to search by --organization-id NUMBER - Organization identifier --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --repository VALUE - Repository name to search by --repository-id NUMBER - Repository identifier -h , --help - Print help Table 3.77. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Tag x x x Repository id x x Docker manifest id x x Docker manifest name x x 3.26.2.2. docker tag list List docker_tags Usage Options --content-view VALUE - Content view name to search by --content-view-filter VALUE - Name to search by --content-view-filter-id NUMBER - Content view filter identifier --content-view-filter-rule VALUE - Name to search by --content-view-filter-rule-id NUMBER Content view filter rule identifier --content-view-id NUMBER - Content view numeric identifier --content-view-version VALUE - Content view version number --content-view-version-id NUMBER - Content view version identifier --environment VALUE - Lifecycle environment name to search by (--environment is deprecated: Use --lifecycle-environment instead) --environment-id NUMBER - (--environment-id is deprecated: Use --lifecycle-environment-id instead) --fields LIST - Show specified fields or predefined field sets only. 
(See below) --full-result BOOLEAN - Whether or not to show all results --ids LIST - Ids to filter content by --include-filter-ids BOOLEAN - Includes associated content view filter ids in response --lifecycle-environment VALUE - Lifecycle environment name to search by --lifecycle-environment-id NUMBER - Environment identifier --order VALUE - Sort field and order, eg. id DESC --organization VALUE - Organization name to search by --organization-id NUMBER - Organization identifier --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --page NUMBER - Page number, starting at 1 --per-page NUMBER - Number of results per page to return --product VALUE - Product name to search by --product-id NUMBER - Product numeric identifier --repository VALUE - Repository name to search by --repository-id NUMBER - Repository identifier --search VALUE - Search string -h , --help - Print help Table 3.78. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Tag x x x Repository id x x 3.27. domain Manipulate domains Usage Options -h , --help - Print help 3.27.1. domain create Create a domain Usage Options --description VALUE - Full name describing the domain --dns VALUE - Name of DNS proxy to use within this domain --dns-id NUMBER - DNS Capsule ID to use within this domain --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-ids LIST - REPLACE locations with given ids --location-title VALUE - Set the current location context for the request --location-titles LIST --locations LIST --name VALUE - The full DNS domain name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST - REPLACE organizations with given ids. --organization-title VALUE - Set the current organization context for the request --organization-titles LIST --organizations LIST -h , --help - Print help 3.27.2. domain delete Delete a domain Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Domain name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.27.3. domain delete-parameter Delete parameter for a domain Usage Options --domain VALUE - Domain name --domain-id NUMBER - Numerical ID or domain name --name VALUE - Parameter name -h , --help - Print help 3.27.4. domain info Show a domain Usage Options --fields LIST - Show specified fields or predefined field sets only. 
(See below) --id VALUE - Numerical ID or domain name --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Domain name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --show-hidden-parameters BOOLEAN Display hidden parameter values -h , --help - Print help Table 3.79. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Description x x Dns id x x Subnets/ x x Locations/ x x Organizations/ x x Parameters/ x x Created at x x Updated at x x 3.27.5. domain list List of domains Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Scope by locations --location-title VALUE - Set the current location context for the request --order VALUE - Sort and order by a searchable field, e.g. <field> DESC --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Scope by organizations --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --search VALUE - Filter results --subnet VALUE - Subnet name --subnet-id VALUE - ID of subnet -h , --help - Print help Table 3.80. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Search / Order fields fullname - string id - integer location - string location_id - integer name - string organization - string organization_id - integer params - string 3.27.6. domain set-parameter Create or update parameter for a domain Usage Options --domain VALUE - Domain name --domain-id NUMBER - Numerical ID or domain name --hidden-value BOOLEAN - Should the value be hidden --name VALUE - Parameter name --parameter-type ENUM - Type of the parameter Possible value(s): string , boolean , integer , real , array , hash , yaml , json Default: "string" --value VALUE - Parameter value -h , --help - Print help 3.27.7. domain update Update a domain Usage Options --description VALUE - Full name describing the domain --dns VALUE - Name of DNS proxy to use within this domain --dns-id NUMBER - DNS Capsule ID to use within this domain --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-ids LIST - REPLACE locations with given ids --location-title VALUE - Set the current location context for the request --location-titles LIST --locations LIST --name VALUE - The full DNS domain name --new-name VALUE - The full DNS domain name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST - REPLACE organizations with given ids. --organization-title VALUE - Set the current organization context for the request --organization-titles LIST --organizations LIST -h , --help - Print help 3.28. erratum Manipulate errata Usage Options -h , --help - Print help 3.28.1. erratum info Show an erratum Usage Options --fields LIST - Show specified fields or predefined field sets only. 
(See below) --id VALUE - An erratum identifier --name VALUE - Name to search by --organization VALUE - Organization name to search by --organization-id NUMBER - Organization identifier --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --repository VALUE - Repository name to search by --repository-id NUMBER - Repository identifier -h , --help - Print help Table 3.81. Predefined field sets FIELDS ALL DEFAULT Title x x Version x x Description x x Status x x Id x x Errata id x x Reboot suggested x x Updated x x Issued x x Release x x Solution x x Packages x x Module streams/name x x Module streams/stream x x Module streams/packages x x 3.28.2. erratum list List errata Usage Options --available-for VALUE - Return errata that can be added to the specified object. The values content_view_version and content_view_filter are supported. --content-view VALUE - Content view name to search by --content-view-filter VALUE - Name to search by --content-view-filter-id NUMBER - Content View Filter identifier --content-view-id NUMBER - Content view numeric identifier --content-view-version VALUE - Content view version number --content-view-version-id NUMBER - Content View Version identifier --cve VALUE - CVE identifier --environment VALUE - Lifecycle environment name to search by (--environment is deprecated: Use --lifecycle-environment instead) --environment-id NUMBER - (--environment-id is deprecated: Use --lifecycle-environment-id instead) --errata-restrict-applicable BOOLEAN Return errata that are applicable to one or more hosts (defaults to true if host_id is specified) --errata-restrict-installable BOOLEAN Return errata that are upgradable on one or more hosts --fields LIST - Show specified fields or predefined field sets only. (See below) --full-result BOOLEAN - Whether or not to show all results --host VALUE - Host name --host-id NUMBER - Host id to list applicable errata for --lifecycle-environment VALUE - Lifecycle environment name to search by --lifecycle-environment-id NUMBER - Environment identifier --order VALUE - Sort field and order, eg. id DESC --organization VALUE - Organization name to search by --organization-id NUMBER - Organization identifier --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --page NUMBER - Page number, starting at 1 --per-page NUMBER - Number of results per page to return --product VALUE - Product name to search by --product-id NUMBER - Product numeric identifier --repository VALUE - Repository name to search by --repository-id NUMBER - Repository identifier --search VALUE - Search string -h , --help - Print help Table 3.82. Predefined field sets FIELDS ALL DEFAULT Id x x Errata id x x Type x x Title x x Issued x x Updated x x Search / Order fields bug - string cve - string db_id - integer errata_id - string errata_type - string id - string issued - date modular - Values: true, false package - string package_name - string reboot_suggested - boolean repository - string severity - string synopsis - string title - string type - string updated - date
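For example, the following illustrative query lists security errata that are applicable to one or more hosts in an organization (the organization name is a placeholder):

hammer erratum list --organization "Default Organization" \
  --search "type = security" --errata-restrict-applicable true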
3.29. export-templates Export templates to a git repo or a directory on the server Usage Options --branch VALUE - Branch in Git repo. --commit-msg VALUE - Custom commit message for templates export --dirname VALUE - The directory within Git repo containing the templates --filter VALUE - Export templates with names matching this regex (case-insensitive; snippets are not filtered). --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-ids LIST - REPLACE locations with given ids --location-title VALUE - Set the current location context for the request --location-titles LIST --locations LIST --metadata-export-mode ENUM - Specify how to handle metadata Possible value(s): refresh , keep , remove --negate BOOLEAN - Negate the prefix (for purging). --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST - REPLACE organizations with given ids. --organization-title VALUE - Set the current organization context for the request --organization-titles LIST --organizations LIST --repo VALUE - Override the default repo from settings. --verbose BOOLEAN - Be verbose -h , --help - Print help 3.30. fact Search facts Usage Options -h , --help - Print help 3.30.1. fact list List all fact values Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --order VALUE - Sort and order by a searchable field, e.g. <field> DESC --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --search VALUE - Filter results -h , --help - Print help Table 3.83. Predefined field sets FIELDS ALL DEFAULT Host x x Fact x x Value x x Search / Order fields fact - string fact_short_name - string facts - string host - string host.hostgroup - string host_id - integer location - string location_id - integer name - string organization - string organization_id - integer origin - string reported_at - datetime short_name - string type - string value - string 3.31. file Manipulate files Usage Options -h , --help - Print help 3.31.1. file info Show a file Usage Options --content-view VALUE - Content view name to search by --content-view-id NUMBER - Content view numeric identifier --content-view-version VALUE - Content view version number --content-view-version-id NUMBER Content view version identifier --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE - A file identifier --name VALUE - File name to search by --organization VALUE - Organization name to search by --organization-id NUMBER - Organization identifier --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --product VALUE - Product name to search by --product-id NUMBER - Product numeric identifier --repository VALUE - Repository name to search by --repository-id NUMBER - Repository identifier -h , --help - Print help Table 3.84. Predefined field sets FIELDS ALL DEFAULT THIN Id x x Name x x x Path x x Uuid x x Checksum x x 3.31.2.
file list List files Usage Options --content-view VALUE - Content view name to search by --content-view-filter VALUE - Name to search by --content-view-filter-id NUMBER - Content view filter identifier --content-view-filter-rule VALUE - Name to search by --content-view-filter-rule-id NUMBER Content view filter rule identifier --content-view-id NUMBER - Content view numeric identifier --content-view-version VALUE - Content view version number --content-view-version-id NUMBER - Content view version identifier --fields LIST - Show specified fields or predefined field sets only. (See below) --full-result BOOLEAN - Whether or not to show all results --ids LIST - Ids to filter content by --include-filter-ids BOOLEAN - Includes associated content view filter ids in response --lifecycle-environment-id NUMBER - Environment identifier --order VALUE - Sort field and order, eg. id DESC --organization VALUE - Organization name to search by --organization-id NUMBER - Organization identifier --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --page NUMBER - Page number, starting at 1 --per-page NUMBER - Number of results per page to return --product VALUE - Product name to search by --product-id NUMBER - Product numeric identifier --repository VALUE - Repository name to search by --repository-id NUMBER - Repository identifier --search VALUE - Search string -h , --help - Print help Table 3.85. Predefined field sets FIELDS ALL DEFAULT THIN Id x x Name x x x Path x x 3.32. filter Manage permission filters Usage Options -h , --help - Print help 3.32.1. filter available-permissions List all permissions Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --order VALUE - Sort and order by a searchable field, e.g. <field> DESC --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --search VALUE - Filter results -h , --help - Print help Table 3.86. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Resource x x Search / Order fields id - integer name - string resource_type - string 3.32.2. filter available-resources List available resource types Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.87. Predefined field sets FIELDS ALL DEFAULT THIN Name x x x 3.32.3. 
filter create Create a filter Usage Options --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-ids LIST --location-title VALUE - Set the current location context for the request --location-titles LIST --locations LIST --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST --organization-title VALUE - Set the current organization context for the request --organization-titles LIST --organizations LIST --override BOOLEAN --permission-ids LIST --permissions LIST --role VALUE - User role name --role-id VALUE --search VALUE -h , --help - Print help Overriding organizations and locations: Filters inherit organizations and locations from their roles by default. This behavior can be changed by setting --override=true . Therefore options --organization[s|-ids] and --location[s|-ids] are applicable only when the override flag is set.
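For example, the following invocation (the role name and search string are illustrative) restricts a role's host permissions to a single domain:

hammer filter create --role "Web Admins" \
  --permissions view_hosts,edit_hosts --search "domain = example.com"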
3.32.4. filter delete Delete a filter Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.32.5. filter info Show a filter Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.88. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Resource type x x Search x x Unlimited? x x Override? x x Role x x Permissions x x Locations/ x x Organizations/ x x Created at x x Updated at x x 3.32.6. filter list List all filters Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --order VALUE - Sort and order by a searchable field, e.g. <field> DESC --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --search VALUE - Filter results -h , --help - Print help Table 3.89. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Resource type x x Search x x Unlimited? x x Override? x x Role x x Permissions x x Search / Order fields id - integer limited - Values: true, false location - string location_id - integer organization - string organization_id - integer override - Values: true, false permission - string resource - string role - string role_id - integer search - text unlimited - Values: true, false 3.32.7. filter update Update a filter Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-ids LIST --location-title VALUE - Set the current location context for the request --location-titles LIST --locations LIST --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST --organization-title VALUE - Set the current organization context for the request --organization-titles LIST --organizations LIST --override BOOLEAN --permission-ids LIST --permissions LIST --role VALUE - User role name --role-id VALUE --search VALUE -h , --help - Print help Overriding organizations and locations: Filters inherit organizations and locations from their roles by default. This behavior can be changed by setting --override=true . Therefore options --organization[s|-ids] and --location[s|-ids] are applicable only when the override flag is set. 3.33. foreign-input-set Manage foreign input sets Usage Options -h , --help - Print help 3.33.1. foreign-input-set create Create a foreign input set Usage Options --description VALUE - Input set description --exclude VALUE - A comma separated list of input names to be excluded from the foreign template. --include VALUE - A comma separated list of input names to be included from the foreign template. --include-all BOOLEAN - Include all inputs from the foreign template --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --target-template-id VALUE - Target template ID --template-id VALUE -h , --help - Print help 3.33.2. foreign-input-set delete Delete a foreign input set Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --template-id VALUE -h , --help - Print help
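For instance, the create command above can pull every input except one from a foreign template into a target template; all IDs and the input name here are hypothetical:

hammer foreign-input-set create --template-id 42 \
  --target-template-id 7 --include-all true --exclude "hostname"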
3.33.3. foreign-input-set info Show foreign input set details Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --template-id VALUE -h , --help - Print help Table 3.90. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Target template id x x Target template name x x Include all x x Include x x Exclude x x 3.33.4. foreign-input-set list List foreign input sets Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --order VALUE - Sort and order by a searchable field, e.g. <field> DESC --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --search VALUE - Filter results --template-id VALUE -h , --help - Print help Table 3.91. Predefined field sets FIELDS ALL DEFAULT Id x x Target template id x x Target template name x x 3.33.5. foreign-input-set update Update a foreign input set Usage Options --description VALUE - Input set description --exclude VALUE - A comma separated list of input names to be excluded from the foreign template. --id VALUE --include VALUE - A comma separated list of input names to be included from the foreign template. --include-all BOOLEAN - Include all inputs from the foreign template --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --new-name VALUE --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --target-template-id VALUE - Target template ID --template-id VALUE -h , --help - Print help 3.34. full-help Print help for all hammer commands Usage Options --md - Format output in markdown -h , --help - Print help 3.35. global-parameter Manipulate global parameters Usage Options -h , --help - Print help 3.35.1. global-parameter delete Delete a global parameter Usage Options --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Common parameter name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help
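For example, a global parameter can be created with the set subcommand (described below) and removed again with delete; the parameter name and value here are illustrative:

hammer global-parameter set --name http-proxy --value "http://proxy.example.com:3128"
hammer global-parameter delete --name http-proxy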
3.35.2. global-parameter list List all global parameters Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --order VALUE - Sort and order by a searchable field, e.g. <field> DESC --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --search VALUE - Filter results --show-hidden BOOLEAN - Display hidden values -h , --help - Print help Table 3.92. Predefined field sets FIELDS ALL DEFAULT THIN Name x x x Value x x Type x x Search / Order fields domain_name - string host_group_name - string host_name - string id - integer key_type - string location_name - string name - string organization_name - string os_name - string parameter_type - string subnet_name - text type - string value - text 3.35.3. global-parameter set Set a global parameter Usage Options --hidden-value BOOLEAN - Should the value be hidden --name VALUE - Parameter name --parameter-type ENUM - Type of the parameter Possible value(s): string , boolean , integer , real , array , hash , yaml , json Default: "string" --value VALUE - Parameter value -h , --help - Print help 3.36. host Manipulate hosts Usage Options -h , --help - Print help 3.36.1. host ansible-roles Manage Ansible roles on a host Usage Options -h , --help - Print help 3.36.1.1. host ansible-roles add Associate an Ansible role Usage Options --ansible-role VALUE - Name to search by --ansible-role-id NUMBER --force - Associate the Ansible role even if it already is associated indirectly --id VALUE --name VALUE - Host name -h , --help - Print help 3.36.1.2. host ansible-roles assign Assigns Ansible roles to a host Usage Options --ansible-role-ids LIST - Ansible roles to assign to a host --ansible-roles LIST --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Host name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.36.1.3. host ansible-roles list List all Ansible roles for a host Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Host name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.93. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Imported at x x Inherited x x Directly assigned x x 3.36.1.4.
host ansible-roles play Runs all Ansible roles on a host Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Host name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.36.1.5. host ansible-roles remove Disassociate an Ansible role Usage Options --ansible-role VALUE - Name to search by --ansible-role-id NUMBER --id VALUE --name VALUE - Host name -h , --help - Print help 3.36.2. host boot Boot host from specified device Usage Options --device VALUE - Boot device, valid devices are disk, cdrom, pxe, bios --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Host name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.36.3. host config-reports List all reports Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE - Host id --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Host name --order VALUE - Sort and order by a searchable field, e.g. <field> DESC --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --search VALUE - Filter results -h , --help - Print help Table 3.94. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Host x x Last report x x Origin x x Applied x x Restarted x x Failed x x Restart failures x x Skipped x x Pending x x Search / Order fields applied - integer eventful - Values: true, false failed - integer failed_restarts - integer host - string host_id - integer host_owner_id - integer hostgroup - string hostgroup_fullname - string hostgroup_title - string id - integer last_report - datetime location - string log - text organization - string origin - string pending - integer reported - datetime resource - text restarted - integer skipped - integer 3.36.4. 
host create Create a host Usage Options --ansible-role-ids LIST - IDs of associated ansible roles --ansible-roles LIST --architecture VALUE - Architecture name --architecture-id NUMBER - Required if host is managed and value is not inherited from host group --ask-root-password BOOLEAN --autoheal BOOLEAN - Sets whether the Host will autoheal subscriptions upon checkin --build BOOLEAN --comment VALUE - Additional information about this host --compute-attributes KEY_VALUE_LIST - Compute resource attributes --compute-profile VALUE - Compute profile name --compute-profile-id NUMBER --compute-resource VALUE - Compute resource name --compute-resource-id NUMBER - Nil means host is bare metal --content-source VALUE - Content Source name --content-source-id NUMBER --content-view VALUE - Name to search by --content-view-id NUMBER --domain VALUE - Domain name --domain-id NUMBER - Required if host is managed and value is not inherited from host group --enabled BOOLEAN - Include this host within Satellite reporting --hostgroup VALUE - Hostgroup name --hostgroup-id NUMBER --hostgroup-title VALUE - Hostgroup title --hypervisor-guest-uuids LIST - List of hypervisor guest uuids --image VALUE - Name to search by --image-id NUMBER --installed-products-attributes SCHEMA List of products installed on the host --interface KEY_VALUE_LIST - Interface parameters Can be specified multiple times. --ip VALUE - Not required if using a subnet with DHCP Capsule --kickstart-repository VALUE - Kickstart repository name --kickstart-repository-id NUMBER - Repository Id associated with the kickstart repo used for provisioning --lifecycle-environment VALUE - Name to search by --lifecycle-environment-id NUMBER --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --mac VALUE - Required for managed host that is bare metal, not required if it`s a virtual machine --managed BOOLEAN - True/False flag whether a host is managed or unmanaged. 
Note: this value also determines whether several parameters are required or not --medium VALUE - Medium name --medium-id VALUE - Required if not imaged based provisioning and host is managed and value is not inherited from host group --model VALUE - Model name --model-id NUMBER --name VALUE --openscap-proxy-id NUMBER - ID of OpenSCAP Capsule --operatingsystem VALUE - Operating system title --operatingsystem-id NUMBER - Required if host is managed and value is not inherited from host group --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --overwrite BOOLEAN - Default: "true" --owner VALUE - Login of the owner --owner-id VALUE - ID of the owner --owner-type ENUM - Host`s owner type Possible value(s): User , Usergroup --parameters KEY_VALUE_LIST - Replaces with new host parameters --partition-table VALUE - Partition table name --partition-table-id NUMBER - Required if host is managed and custom partition has not been defined --product VALUE - Name to search by --product-id NUMBER - Product id as listed from a host`s installed products, this is not the same product id as the products api returns --progress-report-id VALUE - UUID to track orchestration tasks status, GET /api/orchestration/:UUID/tasks --provision-method ENUM - The method used to provision the host. Possible value(s): build , image , bootdisk --puppet-ca-proxy-id NUMBER - Puppet CA Capsule ID --puppet-proxy-id NUMBER - Puppet Capsule ID --purpose-addons LIST - Sets the system add-ons --purpose-role VALUE - Sets the system purpose role --purpose-usage VALUE - Sets the system purpose usage --pxe-loader ENUM - DHCP filename option (Grub2/PXELinux by default) Possible value(s): None , PXELinux BIOS , PXELinux UEFI , Grub UEFI , Grub2 BIOS , Grub2 ELF , Grub2 UEFI , Grub2 UEFI SecureBoot , Grub2 UEFI HTTP , Grub2 UEFI HTTPS , Grub2 UEFI HTTPS SecureBoot , iPXE Embedded , iPXE UEFI HTTP , iPXE Chain BIOS , iPXE Chain UEFI --realm VALUE - Name to search by --realm-id NUMBER --release-version VALUE - Release version for this Host to use (7Server, 7.1, etc) --root-password VALUE - Required if host is managed and value is not inherited from host group or default password in settings --service-level VALUE - Service level to be used for autoheal --subnet VALUE - Subnet name --subnet-id NUMBER - Required if host is managed and value is not inherited from host group --typed-parameters SCHEMA - Replaces with new host parameters (with type support) --volume KEY_VALUE_LIST - Volume parameters Can be specified multiple times. -h , --help - Print help Following parameters accept format defined by its schema (bold are required; <> contains acceptable type; [] contains acceptable value): --typed-parameters " name =<string>, value =<string>,parameter_type=[string|boolean|integer|real|array|hash|yaml|json],hidden_value=[true|false|1|0], ... " --installed-products-attributes "product_id=<string>,product_name=<string>,arch=<string>,version=<string>, ... " Available keys for --interface: mac ip type Possible values: interface, bmc, bond, bridge name subnet_id domain_id identifier managed true/false primary true/false, each managed host needs to have one primary interface. provision true/false virtual true/false For virtual=true: tag VLAN tag, this attribute has precedence over the subnet VLAN ID. Only for virtual interfaces.
attached_to Identifier of the interface to which this interface belongs, e.g. eth1. For type=bond: mode Possible values: balance-rr, active-backup, balance-xor, broadcast, 802.3ad, balance-tlb, balance-alb attached_devices Identifiers of slave interfaces, e.g. [eth1,eth2] bond_options For type=bmc: provider always IPMI username password Provider specific options Bold attributes are required. EC2: --volume : --interface : --compute-attributes : availability_zone - flavor_id - groups - security_group_ids - managed_ip - Libvirt: --volume : pool_name - One of available storage pools capacity - String value, e.g. 10G allocation - Initial allocation, e.g. 0G format_type - Possible values: raw, qcow2 --interface : compute_type - Possible values: bridge, network compute_bridge - Name of interface according to type compute_model - Possible values: virtio, rtl8139, ne2k_pci, pcnet, e1000 compute_network - Libvirt instance network, e.g. default --compute-attributes : cpus - Number of CPUs memory - String, amount of memory, value in bytes cpu_mode - Possible values: default, host-model, host-passthrough boot_order - Device names to specify the boot order start - Boolean (expressed as 0 or 1), whether to start the machine or not OpenStack: --volume : --interface : --compute-attributes : availability_zone - boot_from_volume - flavor_ref - image_ref - tenant_id - security_groups - network - Red Hat Virtualization: --volume : size_gb - Volume size in GB, integer value storage_domain - ID or name of storage domain bootable - Boolean, set 1 for bootable, only one volume can be bootable preallocate - Boolean, set 1 to preallocate wipe_after_delete - Boolean, set 1 to wipe disk after delete interface - Disk interface name, must be ide, virtio or virtio_scsi --interface : compute_name - Compute name, e.g. eth0 compute_network - Select one of available networks for a cluster, must be an ID or a name compute_interface - Interface type compute_vnic_profile - Vnic Profile --compute-attributes : cluster - ID or name of cluster to use template - Hardware profile to use cores - Integer value, number of cores sockets - Integer value, number of sockets memory - Amount of memory, integer value in bytes ha - Boolean, set 1 to high availability display_type - Possible values: VNC, SPICE keyboard_layout - Possible values: ar, de-ch, es, fo, fr-ca, hu, ja, mk, no, pt-br, sv, da, en-gb, et, fr, fr-ch, is, lt, nl, pl, ru, th, de, en-us, fi, fr-be, hr, it, lv, nl-be, pt, sl, tr. Not usable if display type is SPICE. 
start - Boolean, set 1 to start the vm Rackspace: --volume : --interface : --compute-attributes : flavor_id - VMware: --volume : name - storage_pod - Storage Pod ID from VMware datastore - Datastore ID from VMware mode - persistent/independent_persistent/independent_nonpersistent size_gb - Integer number, volume size in GB thin - true/false eager_zero - true/false controller_key - Associated SCSI controller key --interface : compute_type - Type of the network adapter, for example one of: VirtualVmxnet3, VirtualE1000, See documentation center for your version of vSphere to find more details about available adapter types: https://www.vmware.com/support/pubs/ compute_network - Network ID or Network Name from VMware --compute-attributes : cluster - Cluster ID from VMware corespersocket - Number of cores per socket (applicable to hardware versions < 10 only) cpus - CPU count memory_mb - Integer number, amount of memory in MB path - Path to folder resource_pool - Resource Pool ID from VMware firmware - automatic/bios/efi guest_id - Guest OS ID from VMware hardware_version - Hardware version ID from VMware memoryHotAddEnabled - Must be a 1 or 0, lets you add memory resources while the machine is on cpuHotAddEnabled - Must be a 1 or 0, lets you add CPU resources while the machine is on add_cdrom - Must be a 1 or 0, Add a CD-ROM drive to the virtual machine annotation - Annotation Notes scsi_controllers - List with SCSI controllers definitions type - ID of the controller from VMware key - Key of the controller (e.g. 1000) boot_order - Device names to specify the boot order start - Must be a 1 or 0, whether to start the machine or not AzureRM: --volume : disk_size_gb - Volume Size in GB (integer value) data_disk_caching - Data Disk Caching (None, ReadOnly, ReadWrite) --interface : compute_network - Select one of available Azure Subnets, must be an ID compute_public_ip - Public IP (None, Static, Dynamic) compute_private_ip - Static Private IP (expressed as true or false) --compute-attributes : resource_group - Existing Azure Resource Group of user vm_size - VM Size, eg. Standard_A0 etc. username - The Admin username password - The Admin password platform - OS type eg. Linux ssh_key_data - SSH key for passwordless authentication os_disk_caching - OS disk caching premium_os_disk - Premium OS Disk, Boolean as 0 or 1 script_command - Custom Script Command script_uris - Comma separated file URIs GCE: --volume : size_gb - Volume size in GB, integer value --interface : --compute-attributes : machine_type - network - associate_external_ip - 3.36.5. host deb-package Manage deb packages on your hosts Usage Options -h , --help - Print help 3.36.5.1. host deb-package list List deb packages installed on the host Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --full-result BOOLEAN - Whether or not to show all results --host VALUE - Host name --host-id NUMBER - ID of the host --order VALUE - Sort field and order, eg. id DESC --page NUMBER - Page number, starting at 1 --per-page NUMBER - Number of results per page to return --search VALUE - Search string -h , --help - Print help Table 3.95. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Version x x Arch x x
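For illustration, the deb packages reported for a single host can be listed with a command along these lines (the hostname is a hypothetical example):
hammer host deb-package list --host client.example.com
3.36.6.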
host delete Delete a host Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Host name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.36.7. host delete-parameter Delete parameter for a host Usage Options --host VALUE - Host name --host-id NUMBER --name VALUE - Parameter name -h , --help - Print help 3.36.8. host disassociate Disassociate a host Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Host name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.36.9. host enc-dump Dump host's ENC YAML Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Host name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.36.10. host errata Manage errata on your hosts Usage Options -h , --help - Print help 3.36.10.1. host errata apply Not supported. Use the remote execution equivalent hammer job-invocation create --feature katello_errata_install . Usage Options -h , --help - Unsupported Operation - Use the remote execution equivalent hammer job-invocation create `--feature katello_errata_install`. Unfortunately the server does not support such operation. 3.36.10.2. host errata info Retrieve a single errata for a host Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --host VALUE - Host name --host-id NUMBER - Host ID --id VALUE - Errata id of the erratum (RHSA-2012:108) --name VALUE - Name to search by -h , --help - Print help Table 3.96. Predefined field sets FIELDS ALL DEFAULT Title x x Version x x Description x x Status x x Id x x Errata id x x Reboot suggested x x Updated x x Issued x x Release x x Solution x x Packages x x Module streams/name x x Module streams/stream x x Module streams/packages x x 3.36.10.3. host errata list List errata available for the content host Usage Options --content-view VALUE - Content view name to search by --content-view-id NUMBER - Calculate Applicable Errata based on a particular Content View --environment VALUE - Lifecycle environment name to search by (--environment is deprecated: Use --lifecycle-environment instead) --environment-id NUMBER - (--environment-id is deprecated: Use --lifecycle-environment-id instead) --fields LIST - Show specified fields or predefined field sets only. 
(See below) --full-result BOOLEAN - Whether or not to show all results --host VALUE - Host name --host-id NUMBER - UUID of the content host --include-applicable BOOLEAN - Return errata that are applicable to this host. Defaults to false) --lifecycle-environment VALUE - Lifecycle environment name to search by --lifecycle-environment-id NUMBER Calculate Applicable Errata based on a particular Environment --order VALUE - Sort field and order, eg. id DESC --organization VALUE - Organization name to search by --organization-id VALUE - Organization ID to search by --organization-label VALUE - Organization label to search by --page NUMBER - Page number, starting at 1 --per-page NUMBER - Number of results per page to return --search VALUE - Search string --severity VALUE - Return only errata of a particular severity (None, Low, Moderate, Important, Critical) --type VALUE - Return only errata of a particular type (security, bugfix, enhancement) -h , --help - Print help Table 3.97. Predefined field sets FIELDS ALL DEFAULT Id x x Erratum id x x Type x x Title x x Installable x x 3.36.10.4. host errata recalculate Force regenerate applicability. Usage Options --host VALUE - Host name --host-id NUMBER - Host ID -h , --help - Print help 3.36.11. host facts List all fact values Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Host name --order VALUE - Sort and order by a searchable field, e.g. <field> DESC --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --search VALUE - Filter results -h , --help - Print help Table 3.98. Predefined field sets FIELDS ALL DEFAULT Fact x x Value x x Search / Order fields fact - string fact_short_name - string facts - string host - string host.hostgroup - string host_id - integer location - string location_id - integer name - string organization - string organization_id - integer origin - string reported_at - datetime short_name - string type - string value - string 3.36.12. host info Show a host Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Host name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --show-hidden-parameters BOOLEAN Display hidden parameter values -h , --help - Print help Table 3.99. 
Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Uuid x x Name x x x Organization x x Location x x Host group x x Compute resource x x Compute profile x x Cert name x x Token x x Managed x x Installed at x x Last report x x Uptime (seconds) x x Status/global status x x Status/build status x x Network/ipv4 address x x Network/ipv6 address x x Network/mac x x Network/subnet ipv4 x x Network/subnet ipv6 x x Network/domain x x Network/service provider/sp name x x Network/service provider/sp ip x x Network/service provider/sp mac x x Network/service provider/sp subnet x x Network interfaces/id x x Network interfaces/identifier x x Network interfaces/type x x Network interfaces/mac address x x Network interfaces/ipv4 address x x Network interfaces/ipv6 address x x Network interfaces/fqdn x x Operating system/architecture x x Operating system/operating system x x Operating system/build x x Operating system/medium x x Operating system/partition table x x Operating system/pxe loader x x Operating system/custom partition table x x Operating system/image x x Operating system/image file x x Operating system/use image x x Parameters/ x x All parameters/ x x Additional info/owner x x Additional info/owner id x x Additional info/owner type x x Additional info/enabled x x Additional info/model x x Additional info/comment x x Openscap proxy x x Content information/content view environments/content view/id x x Content information/content view environments/content view/name x x Content information/content view environments/content view/composite x x Content information/content view environments/lifecycle environment/id x x Content information/content view environments/lifecycle environment/name x x Content information/content source/id x x Content information/content source/name x x Content information/kickstart repository/id x x Content information/kickstart repository/name x x Content information/applicable packages x x Content information/upgradable packages x x Content information/applicable errata/enhancement x x Content information/applicable errata/bug fix x x Content information/applicable errata/security x x Subscription information/uuid x x Subscription information/last checkin x x Subscription information/release version x x Subscription information/autoheal x x Subscription information/registered to x x Subscription information/registered at x x Subscription information/registered by activation keys/ x x Subscription information/system purpose/service level x x Subscription information/system purpose/purpose usage x x Subscription information/system purpose/purpose role x x Subscription information/system purpose/purpose addons x x Trace status x x Host collections/id x x Host collections/name x x 3.36.13. host interface View and manage host's network interfaces Usage Options -h , --help - Print help 3.36.13.1. host interface create Create an interface on a host Usage Options --attached-devices LIST - Identifiers of attached interfaces, e.g. [`eth1 , eth2 ]`. For bond interfaces those are the slaves. Only for bond and bridges interfaces. --attached-to VALUE - Identifier of the interface to which this interface belongs, e.g. eth1. Only for virtual interfaces. --bond-options VALUE - Space separated options, e.g. miimon=100. Only for bond interfaces. --compute-attributes KEY_VALUE_LIST Compute resource specific attributes --domain VALUE - Domain name --domain-id NUMBER - Satellite domain ID of interface. Required for primary interfaces on managed hosts. 
--execution BOOLEAN - Should this interface be used for remote execution? --host VALUE - Host name --host-id VALUE - ID or name of host --identifier VALUE - Device identifier, e.g. eth0 or eth1.1 --ip VALUE - IPv4 address of interface --ip6 VALUE - IPv6 address of interface --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --mac VALUE - MAC address of interface. Required for managed interfaces on bare metal. --managed BOOLEAN - Should this interface be managed via DHCP and DNS capsule and should it be configured during provisioning? --mode ENUM - Bond mode of the interface, e.g. balance-rr. Only for bond interfaces. Possible value(s): balance-rr , active-backup , balance-xor , broadcast , 802.3ad , balance-tlb , balance-alb --mtu NUMBER - MTU, this attribute has precedence over the subnet MTU. --name VALUE - Interface's DNS name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --password VALUE - Only for BMC interfaces. --primary - Should this interface be used for constructing the FQDN of the host? Each managed host needs to have one primary interface --provider ENUM - Interface provider, e.g. IPMI. Only for BMC interfaces. Possible value(s): IPMI , Redfish , SSH --provision - Should this interface be used for TFTP of PXELinux (or SSH for image-based hosts)? Each managed host needs to have one provision interface --subnet VALUE - Subnet name --subnet-id NUMBER - Satellite subnet ID of IPv4 interface --subnet6-id NUMBER - Satellite subnet ID of IPv6 interface --tag VALUE - VLAN tag, this attribute has precedence over the subnet VLAN ID. Only for virtual interfaces. --type ENUM - Interface type, e.g. bmc. Default is interface Possible value(s): interface , bmc , bond , bridge --username VALUE - Only for BMC interfaces. --virtual BOOLEAN - Alias or VLAN device -h , --help - Print help 3.36.13.2. host interface delete Delete a host's interface Usage Options --host VALUE - Host name --host-id VALUE - ID or name of host --id VALUE - ID of interface --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.36.13.3. host interface info Show an interface for host Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --host VALUE - Host name --host-id VALUE - ID or name of host --id VALUE - ID or name of interface --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help
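As an illustration of the interface subcommands above, a new managed interface might be created as follows (the hostname, identifier, MAC, and IP address are hypothetical examples):
hammer host interface create --host client.example.com --identifier eth1 --mac 52:54:00:ab:cd:ef --ip 192.168.122.10 --type interface --managed true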
Table 3.100. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Identifier x x Type x x Mac address x x Ip address x x Dns name x x Subnet x x Domain x x Managed x x Primary x x Provision x x Virtual x x Tag x x Attached to x x Bmc/username x x Bmc/provider x x Bond/mode x x Bond/attached devices x x Bond/bond options x x Execution x x 3.36.13.4. host interface list List all interfaces for host Usage Options --domain VALUE - Domain name --domain-id VALUE - ID or name of domain --fields LIST - Show specified fields or predefined field sets only. (See below) --host VALUE - Host name --host-id VALUE - ID or name of host --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --subnet VALUE - Subnet name --subnet-id VALUE - ID or name of subnet -h , --help - Print help Table 3.101. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Identifier x x Type x x Mac address x x Ip address x x Dns name x x 3.36.13.5. host interface update Update a host's interface Usage Options --attached-devices LIST - Identifiers of attached interfaces, e.g. [eth1, eth2]. For bond interfaces those are the slaves. Only for bond and bridge interfaces. --attached-to VALUE - Identifier of the interface to which this interface belongs, e.g. eth1. Only for virtual interfaces. --bond-options VALUE - Space separated options, e.g. miimon=100. Only for bond interfaces. --compute-attributes KEY_VALUE_LIST Compute resource specific attributes --domain VALUE - Domain name --domain-id NUMBER - Satellite domain ID of interface. Required for primary interfaces on managed hosts. --execution BOOLEAN - Should this interface be used for remote execution? --host VALUE - Host name --host-id VALUE - ID or name of host --id VALUE - ID of interface --identifier VALUE - Device identifier, e.g. eth0 or eth1.1 --ip VALUE - IPv4 address of interface --ip6 VALUE - IPv6 address of interface --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --mac VALUE - MAC address of interface. Required for managed interfaces on bare metal. --managed BOOLEAN - Should this interface be managed via DHCP and DNS capsule and should it be configured during provisioning? --mode ENUM - Bond mode of the interface, e.g. balance-rr. Only for bond interfaces. Possible value(s): balance-rr , active-backup , balance-xor , broadcast , 802.3ad , balance-tlb , balance-alb --mtu NUMBER - MTU, this attribute has precedence over the subnet MTU. --name VALUE - Interface's DNS name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --password VALUE - Only for BMC interfaces. --primary - Should this interface be used for constructing the FQDN of the host?
Each managed hosts needs to have one primary interface --provider ENUM - Interface provider, e.g. IPMI. Only for BMC interfaces. Possible value(s): IPMI , Redfish , SSH --provision - Should this interface be used for TFTP of PXELinux (or SSH for image-based hosts)? Each managed hosts needs to have one provision interface --subnet VALUE - Subnet name --subnet-id NUMBER - Satellite subnet ID of IPv4 interface --subnet6-id NUMBER - Satellite subnet ID of IPv6 interface --tag VALUE - VLAN tag, this attribute has precedence over the subnet VLAN ID. Only for virtual interfaces. --type ENUM - Interface type, e.g. bmc. Default is interface Possible value(s): interface , bmc , bond , bridge --username VALUE - Only for BMC interfaces. --virtual BOOLEAN - Alias or VLAN device -h , --help - Print help 3.36.14. host list List all hosts Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --hostgroup VALUE - Hostgroup name --hostgroup-id VALUE - ID of host group --hostgroup-title VALUE - Hostgroup title --location VALUE - Set the current location context for the request --location-id VALUE - ID of location --location-title VALUE - Set the current location context for the request --order VALUE - Sort and order by a searchable field, e.g. <field> DESC --organization VALUE - Set the current organization context for the request --organization-id VALUE - ID of organization --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --search VALUE - Filter results --thin BOOLEAN - Only list ID and name of hosts -h , --help - Print help Table 3.102. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Operating system x x Host group x x Ip x x Mac x x Global status x x Organization x Location x Additional information x Content view x x Lifecycle environment x x Security x Bugfix x Enhancement x Trace status x x Search / Order fields activation_key - string activation_key_id - string addon - string addons_status - Values: mismatched, matched, not_specified ansible_role - string applicable_debs - string applicable_errata - string applicable_errata_issued - date applicable_rpms - string architecture - string autoheal - boolean boot_time build - Values: true, false build_status - Values: built, pending, token_expired, build_failed comment - text compute_resource - string compute_resource_id - integer configuration_status.applied - integer configuration_status.enabled - Values: true, false configuration_status.failed - integer configuration_status.failed_restarts - integer configuration_status.interesting - Values: true, false configuration_status.pending - integer configuration_status.restarted - integer configuration_status.skipped - integer content_source - string content_views - string created_at - datetime cve_id - integer domain - string domain_id - integer errata_status - Values: security_needed, errata_needed, updated, unknown execution_status - Values: ok, error facts - string global_status - Values: ok, warning, error has_ip - string has_ip6 string has_mac - string hostgroup - string hostgroup_fullname - string hostgroup_id - integer hostgroup_name - string hostgroup_title - string hypervisor - boolean hypervisor_host - string id - integer image - string infrastructure_facet.foreman - Values: true, false infrastructure_facet.smart_proxy_id insights_client_report_status - Values: reporting, no_report 
insights_inventory_sync_status - Values: disconnect, sync insights_recommendations_count - integer installable_errata - string installed_at - datetime ip - string ip6 string job_invocation.id - string job_invocation.result - Values: cancelled, failed, pending, success last_checkin - datetime last_report - datetime lifecycle_environments - string location - string location_id - integer mac - string managed - Values: true, false model - string name - string organization - string organization_id - integer origin - string os - string os_description - string os_id - integer os_major - string os_minor - string os_title - string owner - string owner_id - integer owner_type - string params - string params_name - string parent_hostgroup - string puppet_ca - string puppet_proxy_id - integer puppetmaster - string purpose_status - Values: mismatched, matched, not_specified pxe_loader - Values: PXELinux_BIOS, PXELinux_UEFI, Grub_UEFI, Grub2_BIOS, Grub2_ELF, Grub2_UEFI, Grub2_UEFI_SecureBoot, Grub2_UEFI_HTTP, Grub2_UEFI_HTTPS, Grub2_UEFI_HTTPS_SecureBoot, iPXE_Embedded, iPXE_UEFI_HTTP, iPXE_Chain_BIOS, iPXE_Chain_UEFI realm - string realm_id - integer registered_at - datetime registered_through - string release_version - string reported.bios_release_date reported.bios_vendor reported.bios_version reported.boot_time reported.cores reported.disks_total reported.kernel_version reported.ram reported.sockets reported.virtual - Values: true, false repository - string repository_content_label - string rhel_lifecycle_status - Values: full_support, maintenance_support, approaching_end_of_maintenance, extended_support, approaching_end_of_support, support_ended role - text role_status - Values: mismatched, matched, not_specified service_level - string sla_status - Values: mismatched, matched, not_specified smart_proxy - string status.applied - integer status.enabled - Values: true, false status.failed - integer status.failed_restarts - integer status.interesting - Values: true, false status.pending - integer status.restarted - integer status.skipped - integer subnet - string subnet.name - text subnet6 string subnet6.name text subscription_id - string subscription_name - string subscription_status - Values: valid, partial, invalid, unknown, disabled, unsubscribed_hypervisor subscription_uuid - string trace_status - Values: reboot_needed, process_restart_needed, updated upgradable_debs - string upgradable_rpms - string usage - text usage_status - Values: mismatched, matched, not_specified user.firstname - string user.lastname - string user.login - string user.mail - string usergroup - string usergroup.name - string uuid - string 3.36.15. host package Manage packages on your hosts Usage Options -h , --help - Print help 3.36.15.1. host package install Not supported. Use the remote execution equivalent hammer job-invocation create --feature katello_package_install . Usage Options -h , --help - Unsupported Operation - Use the remote execution equivalent hammer job-invocation create `--feature katello_package_install`. Unfortunately the server does not support such operation. 3.36.15.2. host package list List packages installed on the host Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --full-result BOOLEAN - Whether or not to show all results --host VALUE - Host name --host-id NUMBER - ID of the host --include-latest-upgradable BOOLEAN Also include the latest upgradable package version for each host package --order VALUE - Sort field and order, eg. 
id DESC --page NUMBER - Page number, starting at 1 --per-page NUMBER - Number of results per page to return --search VALUE - Search string --status VALUE - Return only packages of a particular status (upgradable or up-to-date) -h , --help - Print help Table 3.103. Predefined field sets FIELDS ALL DEFAULT Nvra x x Search / Order fields arch - string epoch - string id - integer name - string nvra - string nvrea - string release - string vendor - string version - string 3.36.15.3. host package remove Not supported. Use the remote execution equivalent hammer job-invocation create --feature katello_package_remove . Usage Options -h , --help - Unsupported Operation - Use the remote execution equivalent hammer job-invocation create `--feature katello_package_remove`. Unfortunately the server does not support such operation. 3.36.15.4. host package upgrade Not supported. Use the remote execution equivalent hammer job-invocation create --feature katello_package_update . Usage Options -h , --help - Unsupported Operation - Use the remote execution equivalent hammer job-invocation create `--feature katello_package_update`. Unfortunately the server does not support such operation. 3.36.15.5. host package upgrade-all Not supported. Use the remote execution equivalent hammer job-invocation create --feature katello_package_update . Usage Options -h , --help - Unsupported Operation - Use the remote execution equivalent hammer job-invocation create `--feature katello_package_update`. Unfortunately the server does not support such operation. 3.36.16. host package-group Manage package-groups on your hosts. These commands are no longer available. Use the remote execution equivalent Usage Options -h , --help - Print help 3.36.16.1. host package-group install Not supported. Use the remote execution equivalent hammer job-invocation create --feature katello_group_install . Usage Options -h , --help - Unsupported Operation - Use the remote execution equivalent hammer job-invocation create `--feature katello_group_install`. Unfortunately the server does not support such operation. 3.36.16.2. host package-group remove Not supported. Use the remote execution equivalent hammer job-invocation create --feature katello_group_remove . Usage Options -h , --help - Unsupported Operation - Use the remote execution equivalent hammer job-invocation create `--feature katello_group_remove`. Unfortunately the server does not support such operation. 3.36.17. host policies-enc View policies ENC for host Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE - The identifier of the host --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Host name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.104. Predefined field sets FIELDS ALL DEFAULT Id x x Profile id x x Content path x x Content download path x x Tailoring path x x Tailoring download path x x Day of month x x Hour x x Minute x x Month x x Week x x
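The remote execution equivalents referenced above select their target hosts with a search query. A package install on a single host might therefore look like the following sketch (the hostname and package name are hypothetical, and the exact --inputs keys depend on the job template in use):
hammer job-invocation create --feature katello_package_install --search-query "name = client.example.com" --inputs package=vim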
3.36.18. host reboot Reboot a host Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Host name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.36.19. host rebuild-config Rebuild orchestration related configurations for host Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Host name --only LIST - Limit rebuild steps, valid steps are DHCP, DNS, TFTP, Content_Host_Status, Refresh_Content_Host_Status --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.36.20. host reports List all reports Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE - Host id --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Host name --order VALUE - Sort and order by a searchable field, e.g. <field> DESC --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --search VALUE - Filter results -h , --help - Print help Table 3.105. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Host x x Last report x x Origin x x Applied x x Restarted x x Failed x x Restart failures x x Skipped x x Pending x x Search / Order fields applied - integer eventful - Values: true, false failed - integer failed_restarts - integer host - string host_id - integer host_owner_id - integer hostgroup - string hostgroup_fullname - string hostgroup_title - string id - integer last_report - datetime location - string log - text organization - string origin - string pending - integer reported - datetime resource - text restarted - integer skipped - integer 3.36.21. host reset Reset a host Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Host name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help
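For example, the DNS and TFTP records of one host could be regenerated with a command such as the following (the hostname is a hypothetical example):
hammer host rebuild-config --name client.example.com --only DNS,TFTP
3.36.22.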
host set-parameter Create or append a parameter for a host Usage Options --hidden-value BOOLEAN - Should the value be hidden --host VALUE - Host name --host-id NUMBER --name VALUE - Parameter name --parameter-type ENUM - Type of the parameter Possible value(s): string , boolean , integer , real , array , hash , yaml , json Default: "string" --value VALUE - Parameter value -h , --help - Print help 3.36.23. host start Power a host on Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Host name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.36.24. host status Get status of host Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Host name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --type ENUM - Status type, can be one of Global Configuration Build Possible value(s): HostStatus::Global , configuration , build -h , --help - Print help 3.36.25. host stop Power a host off Usage Options --force - Force turning off a host --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Host name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.36.26. host subscription Manage subscription information on your hosts Usage Options -h , --help - Print help 3.36.26.1. host subscription attach Add a subscription to a host Usage Options --host VALUE - Host name --host-id NUMBER - Id of the host --quantity VALUE - Quantity of this subscription to add. Defaults to 1 --subscription-id VALUE - ID of subscription -h , --help - Print help 3.36.26.2. host subscription auto-attach Trigger an auto-attach of subscriptions Usage Options --host VALUE - Host name --host-id NUMBER - Id of the host -h , --help - Print help 3.36.26.3. host subscription content-override Override product content defaults Usage Options --content-label VALUE - Label of the content --enabled BOOLEAN - Set true to override to enabled; Set false to override to disabled. --full-result BOOLEAN - Whether or not to show all results --host VALUE - Host name --host-id VALUE - Id of the content host --order VALUE - Sort field and order, eg. id DESC --override-name VALUE - Override parameter key or name. To enable or disable a repo select enabled.
Default value: enabled Default: "enabled" --page NUMBER - Page number, starting at 1 --per-page NUMBER - Number of results per page to return --remove - Remove a content override --search VALUE - Search string --sort-by VALUE - Field to sort the results on --sort-order VALUE - How to order the sorted results (e.g. ASC for ascending) --value VALUE - Override value. Note for repo enablement you can use a boolean value -h , --help - Print help 3.36.26.4. host subscription enabled-repositories Show repositories enabled on the host that are known to Katello Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --host VALUE - Host name --host-id VALUE - Id of host -h , --help - Print help Table 3.106. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Label x x Content type x x Checksum x x Content view id x x Content view name x x Content view version x x Environment name x x Product name x x 3.36.26.5. host subscription product-content List associated products Usage Options --content-access-mode-all BOOLEAN Get all content available, not just that provided by subscriptions --content-access-mode-env BOOLEAN Limit content to just that available in the host`s content view version --fields LIST - Show specified fields or predefined field sets only. (See below) --full-result BOOLEAN - Whether or not to show all results --host VALUE - Host name --host-id VALUE - Id of the host --order VALUE - Sort field and order, eg. id DESC --page NUMBER - Page number, starting at 1 --per-page NUMBER - Number of results per page to return --search VALUE - Search string -h , --help - Print help Table 3.107. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Type x x Url x x Gpg key x x Label x x Default enabled? x x Override x x 3.36.26.6. host subscription register Register a host with subscription and information Usage Options --content-view VALUE - Content view name to search by --content-view-id NUMBER - Content View ID --environment VALUE - Lifecycle environment name to search by (--environment is deprecated: Use --lifecycle-environment instead) --environment-id NUMBER - (--environment-id is deprecated: Use --lifecycle-environment-id instead) --hypervisor-guest-uuids LIST - UUIDs of the virtual guests from the host`s hypervisor --lifecycle-environment VALUE - Lifecycle environment name to search by --lifecycle-environment-id NUMBER Lifecycle Environment ID --name VALUE - Name of the host --organization VALUE - Organization name to search by --organization-id VALUE - Organization ID to search by --organization-label VALUE - Organization label to search by --release-version VALUE - Release version of the content host --service-level VALUE - A service level for auto-healing process, e.g. SELF-SUPPORT --uuid VALUE - UUID to use for registered host, random uuid is generated if not provided -h , --help - Print help 3.36.26.7. host subscription remove Usage Options --host VALUE - Host name --host-id NUMBER - Id of the host --quantity VALUE - Remove the first instance of a subscription with matching id and quantity --subscription-id VALUE - ID of subscription -h , --help - Print help 3.36.26.8. host subscription unregister Unregister the host as a subscription consumer Usage Options --host VALUE - Host name --host-id NUMBER - Id of the host -h , --help - Print help 3.36.27. host traces List traces on your hosts Usage Options -h , --help - Print help 3.36.27.1. 
host traces list List services that need restarting on the host Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --host VALUE - Host name --host-id NUMBER - ID of the host -h , --help - Print help Table 3.108. Predefined field sets FIELDS ALL DEFAULT Trace id x x Application x x Helper x x Type x x 3.36.27.2. host traces resolve Resolve traces Usage Options --async - Do not wait for the task --host VALUE - Host name --host-id NUMBER - ID of the host --trace-ids LIST - Array of Trace IDs -h , --help - Print help 3.36.28. host update Update a host Usage Options --ansible-role-ids LIST - IDs of associated ansible roles --ansible-roles LIST --architecture VALUE - Architecture name --architecture-id NUMBER - Required if host is managed and value is not inherited from host group --ask-root-password BOOLEAN --autoheal BOOLEAN - Sets whether the Host will autoheal subscriptions upon checkin --build BOOLEAN --comment VALUE - Additional information about this host --compute-attributes KEY_VALUE_LIST - Compute resource attributes --compute-profile VALUE - Compute profile name --compute-profile-id NUMBER --compute-resource VALUE - Compute resource name --compute-resource-id NUMBER - Nil means host is bare metal --content-source VALUE - Content Source name --content-source-id NUMBER --content-view VALUE - Name to search by --content-view-id NUMBER --domain VALUE - Domain name --domain-id NUMBER - Required if host is managed and value is not inherited from host group --enabled BOOLEAN - Include this host within Satellite reporting --hostgroup VALUE - Hostgroup name --hostgroup-id NUMBER --hostgroup-title VALUE - Hostgroup title --hypervisor-guest-uuids LIST - List of hypervisor guest uuids --id VALUE --image VALUE - Name to search by --image-id NUMBER --installed-products-attributes SCHEMA List of products installed on the host --interface KEY_VALUE_LIST - Interface parameters Can be specified multiple times. --ip VALUE - Not required if using a subnet with DHCP Capsule --kickstart-repository VALUE - Kickstart repository name --kickstart-repository-id NUMBER - Repository Id associated with the kickstart repo used for provisioning --lifecycle-environment VALUE - Name to search by --lifecycle-environment-id NUMBER --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --mac VALUE - Required for managed host that is bare metal, not required if it`s a virtual machine --managed BOOLEAN - True/False flag whether a host is managed or unmanaged. 
Note: this value also determines whether several parameters are required or not --medium VALUE - Medium name --medium-id VALUE - Required if not image-based provisioning and host is managed and value is not inherited from host group --model VALUE - Model name --model-id NUMBER --name VALUE --new-location VALUE - Use to update associated location --new-location-id NUMBER - Use to update associated location --new-location-title VALUE - Use to update associated location --new-name VALUE --new-organization VALUE - Use to update associated organization --new-organization-id NUMBER - Use to update associated organization --new-organization-title VALUE - Use to update associated organization --openscap-proxy-id NUMBER - ID of OpenSCAP Capsule --operatingsystem VALUE - Operating system title --operatingsystem-id NUMBER - Required if host is managed and value is not inherited from host group --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --overwrite BOOLEAN --owner VALUE - Login of the owner --owner-id VALUE - ID of the owner --owner-type ENUM - Host's owner type Possible value(s): User , Usergroup --parameters KEY_VALUE_LIST - Replaces with new host parameters --partition-table VALUE - Partition table name --partition-table-id NUMBER - Required if host is managed and custom partition has not been defined --product VALUE - Name to search by --product-id NUMBER - Product id as listed from a host's installed products, this is not the same product id as the products api returns --progress-report-id VALUE - UUID to track orchestration tasks status, GET /api/orchestration/:UUID/tasks --provision-method ENUM - The method used to provision the host. Possible value(s): build , image , bootdisk --puppet-ca-proxy-id NUMBER - Puppet CA Capsule ID --puppet-proxy-id NUMBER - Puppet Capsule ID --purpose-addons LIST - Sets the system add-ons --purpose-role VALUE - Sets the system purpose role --purpose-usage VALUE - Sets the system purpose usage --pxe-loader ENUM - DHCP filename option (Grub2/PXELinux by default) Possible value(s): None , PXELinux BIOS , PXELinux UEFI , Grub UEFI , Grub2 BIOS , Grub2 ELF , Grub2 UEFI , Grub2 UEFI SecureBoot , Grub2 UEFI HTTP , Grub2 UEFI HTTPS , Grub2 UEFI HTTPS SecureBoot , iPXE Embedded , iPXE UEFI HTTP , iPXE Chain BIOS , iPXE Chain UEFI --realm VALUE - Name to search by --realm-id NUMBER --release-version VALUE - Release version for this Host to use (7Server, 7.1, etc) --root-password VALUE - Required if host is managed and value is not inherited from host group or default password in settings --service-level VALUE - Service level to be used for autoheal --subnet VALUE - Subnet name --subnet-id NUMBER - Required if host is managed and value is not inherited from host group --typed-parameters SCHEMA - Replaces with new host parameters (with type support) --volume KEY_VALUE_LIST - Volume parameters Can be specified multiple times. -h , --help - Print help The following parameters accept a format defined by their schema (bold are required; <> contains acceptable type; [] contains acceptable value): --typed-parameters "name=<string>,value=<string>,parameter_type=[string|boolean|integer|real|array|hash|yaml|json],hidden_value=[true|false|1|0], ... " --installed-products-attributes "product_id=<string>,product_name=<string>,arch=<string>,version=<string>, ...
" Available keys for --interface: mac ip type Possible values: interface, bmc, bond, bridge name subnet_id domain_id identifier managed true/false primary true/false, each managed hosts needs to have one primary interface. provision true/false virtual true/false For virtual=true: tag VLAN tag, this attribute has precedence over the subnet VLAN ID. Only for virtual interfaces. attached_to Identifier of the interface to which this interface belongs, e.g. eth1. For type=bond: mode Possible values: balance-rr, active-backup, balance-xor, broadcast, 802.3ad, balance-tlb, balance-alb attached_devices Identifiers of slave interfaces, e.g. [eth1,eth2] bond_options For type=bmc: provider always IPMI username password Provider specific options Bold attributes are required. EC2: --volume : --interface : --compute-attributes : availability_zone - flavor_id - groups - security_group_ids - managed_ip - Libvirt: --volume : pool_name - One of available storage pools capacity - String value, e.g. 10G allocation - Initial allocation, e.g. 0G format_type - Possible values: raw, qcow2 --interface : compute_type - Possible values: bridge, network compute_bridge - Name of interface according to type compute_model - Possible values: virtio, rtl8139, ne2k_pci, pcnet, e1000 compute_network - Libvirt instance network, e.g. default --compute-attributes : cpus - Number of CPUs memory - String, amount of memory, value in bytes cpu_mode - Possible values: default, host-model, host-passthrough boot_order - Device names to specify the boot order start - Boolean (expressed as 0 or 1), whether to start the machine or not OpenStack: --volume : --interface : --compute-attributes : availability_zone - boot_from_volume - flavor_ref - image_ref - tenant_id - security_groups - network - Red Hat Virtualization: --volume : size_gb - Volume size in GB, integer value storage_domain - ID or name of storage domain bootable - Boolean, set 1 for bootable, only one volume can be bootable preallocate - Boolean, set 1 to preallocate wipe_after_delete - Boolean, set 1 to wipe disk after delete interface - Disk interface name, must be ide, virtio or virtio_scsi --interface : compute_name - Compute name, e.g. eth0 compute_network - Select one of available networks for a cluster, must be an ID or a name compute_interface - Interface type compute_vnic_profile - Vnic Profile --compute-attributes : cluster - ID or name of cluster to use template - Hardware profile to use cores - Integer value, number of cores sockets - Integer value, number of sockets memory - Amount of memory, integer value in bytes ha - Boolean, set 1 to high availability display_type - Possible values: VNC, SPICE keyboard_layout - Possible values: ar, de-ch, es, fo, fr-ca, hu, ja, mk, no, pt-br, sv, da, en-gb, et, fr, fr-ch, is, lt, nl, pl, ru, th, de, en-us, fi, fr-be, hr, it, lv, nl-be, pt, sl, tr. Not usable if display type is SPICE. 
start - Boolean, set 1 to start the vm Rackspace: --volume : --interface : --compute-attributes : flavor_id - VMware: --volume : name - storage_pod - Storage Pod ID from VMware datastore - Datastore ID from VMware mode - persistent/independent_persistent/independent_nonpersistent size_gb - Integer number, volume size in GB thin - true/false eager_zero - true/false controller_key - Associated SCSI controller key --interface : compute_type - Type of the network adapter, for example one of: VirtualVmxnet3, VirtualE1000, See documentation center for your version of vSphere to find more details about available adapter types: https://www.vmware.com/support/pubs/ compute_network - Network ID or Network Name from VMware --compute-attributes : cluster - Cluster ID from VMware corespersocket - Number of cores per socket (applicable to hardware versions < 10 only) cpus - CPU count memory_mb - Integer number, amount of memory in MB path - Path to folder resource_pool - Resource Pool ID from VMware firmware - automatic/bios/efi guest_id - Guest OS ID from VMware hardware_version - Hardware version ID from VMware memoryHotAddEnabled - Must be a 1 or 0, lets you add memory resources while the machine is on cpuHotAddEnabled - Must be a 1 or 0, lets you add CPU resources while the machine is on add_cdrom - Must be a 1 or 0, Add a CD-ROM drive to the virtual machine annotation - Annotation Notes scsi_controllers - List with SCSI controllers definitions type - ID of the controller from VMware key - Key of the controller (e.g. 1000) boot_order - Device names to specify the boot order start - Must be a 1 or 0, whether to start the machine or not AzureRM: --volume : disk_size_gb - Volume Size in GB (integer value) data_disk_caching - Data Disk Caching (None, ReadOnly, ReadWrite) --interface : compute_network - Select one of available Azure Subnets, must be an ID compute_public_ip - Public IP (None, Static, Dynamic) compute_private_ip - Static Private IP (expressed as true or false) --compute-attributes : resource_group - Existing Azure Resource Group of user vm_size - VM Size, eg. Standard_A0 etc. username - The Admin username password - The Admin password platform - OS type eg. Linux ssh_key_data - SSH key for passwordless authentication os_disk_caching - OS disk caching premium_os_disk - Premium OS Disk, Boolean as 0 or 1 script_command - Custom Script Command script_uris - Comma separated file URIs GCE: --volume : size_gb - Volume size in GB, integer value --interface : --compute-attributes : machine_type - network - associate_external_ip - 3.37. host-collection Manipulate host collections Usage Options -h , --help - Print help 3.37.1. host-collection add-host Add host to the host collection Usage Options --host-ids LIST - Array of host ids --hosts LIST --id NUMBER - Id of the host collection --name VALUE - Host collection name to search by --organization VALUE - Organization name to search by --organization-id NUMBER - Organization ID --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title -h , --help - Print help 3.37.2. host-collection copy Copy a host collection Usage Options --id NUMBER - ID of the host collection --name VALUE - New host collection name --new-name VALUE - New host collection name --organization VALUE - Organization name to search by --organization-id NUMBER - Organization ID --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title -h , --help - Print help
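For example, existing hosts could be added to a collection by ID (the IDs shown are hypothetical examples):
hammer host-collection add-host --id 2 --host-ids 5,6
3.37.3.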
host-collection create Create a host collection Usage Options --description VALUE --host-ids LIST - List of host ids to replace the hosts in host collection --hosts LIST --max-hosts NUMBER - Maximum number of hosts in the host collection --name VALUE - Host Collection name --organization VALUE - Organization name to search by --organization-id NUMBER - Organization identifier --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --unlimited-hosts - Set hosts max to unlimited -h , --help - Print help 3.37.4. host-collection delete Destroy a host collection Usage Options --id NUMBER - Id of the host collection --name VALUE - Host collection name to search by --organization VALUE - Organization name to search by --organization-id NUMBER - Organization ID --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title -h , --help - Print help 3.37.5. host-collection erratum Manage errata on your host collections. These commands are no longer available. Use the remote execution equivalent Usage Options -h , --help - Print help 3.37.5.1. host-collection erratum install Not supported. Use the remote execution equivalent hammer job-invocation create --feature katello_errata_install . Specify the host collection with the --search-query parameter, e.g. --search-query "host_collection = MyCollection" or --search-query "host_collection_id=6" Usage Options -h , --help - Unsupported Operation - Use the remote execution equivalent hammer job-invocation create `--feature katello_errata_install`. Specify the host collection with the --search-query parameter, e.g. --search-query "host_collection = MyCollection" or --search-query "host_collection_id=6" Unfortunately the server does not support such operation. 3.37.6. host-collection hosts List all hosts Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --hostgroup VALUE - Hostgroup name --hostgroup-id VALUE - ID of host group --hostgroup-title VALUE - Hostgroup title --id VALUE - Host Collection ID --include ENUM - Array of extra information types to include Possible value(s): parameters , all_parameters --location VALUE - Set the current location context for the request --location-id VALUE - ID of location --location-title VALUE - Set the current location context for the request --name VALUE - Host Collection Name --order VALUE - Sort and order by a searchable field, e.g. <field> DESC --organization VALUE - Organization name to search by --organization-id VALUE - ID of organization --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --search VALUE - Filter results --thin BOOLEAN - Only list ID and name of hosts -h , --help - Print help Table 3.109. 
Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Security x Bugfix x Enhancement x Search / Order fields activation_key - string activation_key_id - string addon - string addons_status - Values: mismatched, matched, not_specified ansible_role - string applicable_debs - string applicable_errata - string applicable_errata_issued - date applicable_rpms - string architecture - string autoheal - boolean boot_time build - Values: true, false build_status - Values: built, pending, token_expired, build_failed comment - text compute_resource - string compute_resource_id - integer configuration_status.applied - integer configuration_status.enabled - Values: true, false configuration_status.failed - integer configuration_status.failed_restarts - integer configuration_status.interesting - Values: true, false configuration_status.pending - integer configuration_status.restarted - integer configuration_status.skipped - integer content_source - string content_views - string created_at - datetime cve_id - integer domain - string domain_id - integer errata_status - Values: security_needed, errata_needed, updated, unknown execution_status - Values: ok, error facts - string global_status - Values: ok, warning, error has_ip - string has_ip6 string has_mac - string hostgroup - string hostgroup_fullname - string hostgroup_id - integer hostgroup_name - string hostgroup_title - string hypervisor - boolean hypervisor_host - string id - integer image - string infrastructure_facet.foreman - Values: true, false infrastructure_facet.smart_proxy_id insights_client_report_status - Values: reporting, no_report insights_inventory_sync_status - Values: disconnect, sync insights_recommendations_count - integer installable_errata - string installed_at - datetime ip - string ip6 string job_invocation.id - string job_invocation.result - Values: cancelled, failed, pending, success last_checkin - datetime last_report - datetime lifecycle_environments - string location - string location_id - integer mac - string managed - Values: true, false model - string name - string organization - string organization_id - integer origin - string os - string os_description - string os_id - integer os_major - string os_minor - string os_title - string owner - string owner_id - integer owner_type - string params - string params_name - string parent_hostgroup - string puppet_ca - string puppet_proxy_id - integer puppetmaster - string purpose_status - Values: mismatched, matched, not_specified pxe_loader - Values: PXELinux_BIOS, PXELinux_UEFI, Grub_UEFI, Grub2_BIOS, Grub2_ELF, Grub2_UEFI, Grub2_UEFI_SecureBoot, Grub2_UEFI_HTTP, Grub2_UEFI_HTTPS, Grub2_UEFI_HTTPS_SecureBoot, iPXE_Embedded, iPXE_UEFI_HTTP, iPXE_Chain_BIOS, iPXE_Chain_UEFI realm - string realm_id - integer registered_at - datetime registered_through - string release_version - string reported.bios_release_date reported.bios_vendor reported.bios_version reported.boot_time reported.cores reported.disks_total reported.kernel_version reported.ram reported.sockets reported.virtual - Values: true, false repository - string repository_content_label - string rhel_lifecycle_status - Values: full_support, maintenance_support, approaching_end_of_maintenance, extended_support, approaching_end_of_support, support_ended role - text role_status - Values: mismatched, matched, not_specified service_level - string sla_status - Values: mismatched, matched, not_specified smart_proxy - string status.applied - integer status.enabled - Values: true, false status.failed - integer 
status.failed_restarts - integer status.interesting - Values: true, false status.pending - integer status.restarted - integer status.skipped - integer subnet - string subnet.name - text subnet6 - string subnet6.name - text subscription_id - string subscription_name - string subscription_status - Values: valid, partial, invalid, unknown, disabled, unsubscribed_hypervisor subscription_uuid - string trace_status - Values: reboot_needed, process_restart_needed, updated upgradable_debs - string upgradable_rpms - string usage - text usage_status - Values: mismatched, matched, not_specified user.firstname - string user.lastname - string user.login - string user.mail - string usergroup - string usergroup.name - string uuid - string
3.37.7. host-collection info Show a host collection Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id NUMBER - Id of the host collection --name VALUE - Host collection name to search by --organization VALUE - Organization name to search by --organization-id NUMBER - Organization identifier --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title -h , --help - Print help
Table 3.110. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Limit x x Description x x Total hosts x x
3.37.8. host-collection list List host collections Usage Options --activation-key VALUE - Activation key name to search by --activation-key-id VALUE - Activation key identifier --available-for VALUE - Interpret specified object to return only Host Collections that can be associated with specified object. The value host is supported. --fields LIST - Show specified fields or predefined field sets only. (See below) --full-result BOOLEAN - Whether or not to show all results --host VALUE - Host name --host-id NUMBER - Filter products by host id --name VALUE - Host collection name to filter by --order VALUE - Sort field and order, eg. id DESC --organization VALUE - Organization name to search by --organization-id NUMBER - Organization identifier --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --page NUMBER - Page number, starting at 1 --per-page NUMBER - Number of results per page to return --search VALUE - Search string -h , --help - Print help
Table 3.111. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Limit x x Description x x
Search / Order fields host - string name - string organization_id - integer
3.37.9. host-collection package Manage packages on your host collections. These commands are no longer available. Use the remote execution equivalent. Usage Options -h , --help - Print help
3.37.9.1. host-collection package install Not supported. Use the remote execution equivalent hammer job-invocation create --feature katello_package_install . Specify the host collection with the --search-query parameter, e.g. --search-query "host_collection = MyCollection" or --search-query "host_collection_id=6" Usage Options -h , --help - Unsupported Operation - Use the remote execution equivalent hammer job-invocation create `--feature katello_package_install`. Specify the host collection with the --search-query parameter, e.g. --search-query "host_collection = MyCollection" or --search-query "host_collection_id=6" Unfortunately, the server does not support such an operation.
3.37.9.2. host-collection package remove Not supported. Use the remote execution equivalent hammer job-invocation create --feature katello_package_remove .
Specify the host collection with the --search-query parameter, e.g. --search-query "host_collection = MyCollection" or --search-query "host_collection_id=6" Usage Options -h , --help - Unsupported Operation - Use the remote execution equivalent hammer job-invocation create `--feature katello_package_remove`. Specify the host collection with the --search-query parameter, e.g. --search-query "host_collection = MyCollection" or --search-query "host_collection_id=6" Unfortunately, the server does not support such an operation.
3.37.9.3. host-collection package update Not supported. Use the remote execution equivalent hammer job-invocation create --feature katello_package_update . Specify the host collection with the --search-query parameter, e.g. --search-query "host_collection = MyCollection" or --search-query "host_collection_id=6" Usage Options -h , --help - Unsupported Operation - Use the remote execution equivalent hammer job-invocation create `--feature katello_package_update`. Specify the host collection with the --search-query parameter, e.g. --search-query "host_collection = MyCollection" or --search-query "host_collection_id=6" Unfortunately, the server does not support such an operation.
3.37.10. host-collection package-group Manage package-groups on your host collections. These commands are no longer available. Use the remote execution equivalent. Usage Options -h , --help - Print help
3.37.10.1. host-collection package-group install Not supported. Use the remote execution equivalent hammer job-invocation create --feature katello_group_install . Specify the host collection with the --search-query parameter, e.g. --search-query "host_collection = MyCollection" or --search-query "host_collection_id=6" Usage Options -h , --help - Unsupported Operation - Use the remote execution equivalent hammer job-invocation create `--feature katello_group_install`. Specify the host collection with the --search-query parameter, e.g. --search-query "host_collection = MyCollection" or --search-query "host_collection_id=6" Unfortunately, the server does not support such an operation.
3.37.10.2. host-collection package-group remove Not supported. Use the remote execution equivalent hammer job-invocation create --feature katello_group_remove . Specify the host collection with the --search-query parameter, e.g. --search-query "host_collection = MyCollection" or --search-query "host_collection_id=6" Usage Options -h , --help - Unsupported Operation - Use the remote execution equivalent hammer job-invocation create `--feature katello_group_remove`. Specify the host collection with the --search-query parameter, e.g. --search-query "host_collection = MyCollection" or --search-query "host_collection_id=6" Unfortunately, the server does not support such an operation.
3.37.10.3. host-collection package-group update Not supported. Use the remote execution equivalent hammer job-invocation create --feature katello_group_update . Specify the host collection with the --search-query parameter, e.g. --search-query "host_collection = MyCollection" or --search-query "host_collection_id=6" Usage Options -h , --help - Unsupported Operation - Use the remote execution equivalent hammer job-invocation create `--feature katello_group_update`. Specify the host collection with the --search-query parameter, e.g. --search-query "host_collection = MyCollection" or --search-query "host_collection_id=6" Unfortunately, the server does not support such an operation.
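For all of these deprecated subcommands the pattern is the same. For example, to install a package on every host in a collection through remote execution (a minimal sketch; the collection name and package are illustrative, and the package input name assumes the default Katello job template, so verify it against your template's inputs):

$ hammer job-invocation create --feature katello_package_install --inputs package="tmux" --search-query "host_collection = MyCollection"

3.37.11.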
host-collection remove-host Remove hosts from the host collection Usage Options --host-ids LIST - Array of host ids --hosts LIST --id NUMBER - Id of the host collection --name VALUE - Host collection name to search by --organization VALUE - Organization name to search by --organization-id NUMBER - Organization ID --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title -h , --help - Print help
3.37.12. host-collection update Update a host collection Usage Options --description VALUE --host-ids LIST - List of host ids to replace the hosts in host collection --hosts LIST --id NUMBER - Id of the host collection --max-hosts NUMBER - Maximum number of hosts in the host collection --name VALUE - Host Collection name --new-name VALUE - Host Collection name --organization VALUE - Organization name to search by --organization-id NUMBER - Organization ID --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --unlimited-hosts - Set hosts max to unlimited -h , --help - Print help
3.38. host-registration Host Registration Usage Options -h , --help - Print help
3.38.1. host-registration generate-command Generate global registration command Usage Options --activation-key VALUE - Activation key for subscription-manager client, required for CentOS and Red Hat Enterprise Linux. For multiple keys use activation_keys param instead. --activation-keys LIST - Activation keys for subscription-manager client, required for CentOS and Red Hat Enterprise Linux. Required only if host group has no activation keys. --force BOOLEAN - Clear any registration and run subscription-manager with --force. --hostgroup VALUE - Hostgroup name --hostgroup-id NUMBER - ID of the Host group to register the host in --hostgroup-title VALUE - Hostgroup title --ignore-subman-errors BOOLEAN - Ignore subscription-manager errors for subscription-manager register command --insecure BOOLEAN - Enable insecure argument for the initial curl --jwt-expiration NUMBER - Expiration of the authorization token (in hours) --lifecycle-environment VALUE - Name to search by --lifecycle-environment-id NUMBER - Lifecycle environment for the host. --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --operatingsystem VALUE - Operating system title --operatingsystem-id NUMBER - ID of the Operating System to register the host in. Operating system must have a host_init_config template assigned --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --packages VALUE - Packages to install on the host when registered. Can be set by host_packages parameter, example: pkg1 pkg2 --remote-execution-interface VALUE - Identifier of the Host interface for Remote execution --repo VALUE - Repository URL / details, for example for Debian OS family: deb http://deb.example.com/ buster 1.0 , for Red Hat and SUSE OS family: http://yum.theforeman.org/client/latest/el8/x86_64/ --repo-gpg-key-url VALUE - URL of the GPG key for the repository --setup-insights BOOLEAN - Set host_registration_insights parameter for the host.
If it is set to true, insights client will be installed and registered on Red Hat family operating systems --setup-remote-execution BOOLEAN - Set host_registration_remote_execution parameter for the host. If it is set to true, SSH keys will be installed on the host --setup-remote-execution-pull BOOLEAN - Set host_registration_remote_execution_pull parameter for the host. If it is set to true, pull provider client will be deployed on the host --smart-proxy VALUE - Name to search by --smart-proxy-id NUMBER - ID of the Capsule. This Capsule must have enabled both the Templates and Registration features --update-packages BOOLEAN - Update all packages on the host -h , --help - Print help
3.39. hostgroup Manipulate hostgroups Usage Options -h , --help - Print help
3.39.1. hostgroup ansible-roles Manage Ansible roles on a hostgroup Usage Options -h , --help - Print help
3.39.1.1. hostgroup ansible-roles add Associate an Ansible role Usage Options --ansible-role VALUE - Name to search by --ansible-role-id NUMBER --force - Associate the Ansible role even if it already is associated indirectly --id VALUE --name VALUE - Hostgroup name --title VALUE - Hostgroup title -h , --help - Print help
3.39.1.2. hostgroup ansible-roles assign Assigns Ansible roles to a hostgroup Usage Options --ansible-role-ids LIST - Ansible roles to assign to a hostgroup --ansible-roles LIST --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Hostgroup name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --title VALUE - Hostgroup title -h , --help - Print help
3.39.1.3. hostgroup ansible-roles list List all Ansible roles for a hostgroup Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Hostgroup name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --title VALUE - Hostgroup title -h , --help - Print help
Table 3.112. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Imported at x x Inherited x x Directly assigned x x
3.39.1.4. hostgroup ansible-roles play Runs all Ansible roles on a hostgroup Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Hostgroup name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --title VALUE - Hostgroup title -h , --help - Print help
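For example, to assign an Ansible role to a host group and then run all of its roles (a minimal sketch; the host group and role names are illustrative):

$ hammer hostgroup ansible-roles assign --name "web-servers" --ansible-roles theforeman.foreman_scap_client
$ hammer hostgroup ansible-roles play --name "web-servers"

3.39.1.5.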
hostgroup ansible-roles remove Disassociate an Ansible role Usage Options --ansible-role VALUE - Name to search by --ansible-role-id NUMBER --id VALUE --name VALUE - Hostgroup name --title VALUE - Hostgroup title -h , --help - Print help 3.39.2. hostgroup create Create a host group Usage Options --ansible-role-ids LIST - IDs of associated ansible roles --ansible-roles LIST --architecture VALUE - Architecture name --architecture-id NUMBER - Architecture ID --ask-root-password BOOLEAN --compute-profile VALUE - Compute profile name --compute-profile-id NUMBER - Compute profile ID --compute-resource VALUE - Compute resource name --compute-resource-id NUMBER - Compute resource ID --content-source VALUE - Content Source name --content-source-id NUMBER - Content source ID --content-view VALUE - Name to search by --content-view-id NUMBER - Content view ID --description VALUE - Host group description --domain VALUE - Domain name --domain-id NUMBER - Domain ID --group-parameters-attributes SCHEMA Array of parameters --kickstart-repository VALUE - Kickstart repository name --kickstart-repository-id NUMBER - Kickstart repository ID --lifecycle-environment VALUE - Name to search by --lifecycle-environment-id NUMBER - Lifecycle environment ID --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-ids LIST - REPLACE locations with given ids --location-title VALUE - Set the current location context for the request --location-titles LIST --locations LIST --medium VALUE - Medium name --medium-id NUMBER - Media ID --name VALUE - Name of the host group --openscap-proxy-id NUMBER - ID of OpenSCAP Capsule --operatingsystem VALUE - Operating system title --operatingsystem-id NUMBER - Operating system ID --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST - REPLACE organizations with given ids. 
--organization-title VALUE - Set the current organization context for the request --organization-titles LIST --organizations LIST --parent VALUE - Name of parent hostgroup --parent-id NUMBER - Parent ID of the host group --parent-title VALUE - Title of parent hostgroup --partition-table VALUE - Partition table name --partition-table-id NUMBER - Partition table ID --puppet-ca-proxy-id NUMBER - Puppet CA Capsule ID --puppet-proxy-id NUMBER - Puppet Capsule ID --pxe-loader ENUM - DHCP filename option (Grub2/PXELinux by default) Possible value(s): None , PXELinux BIOS , PXELinux UEFI , Grub UEFI , Grub2 BIOS , Grub2 ELF , Grub2 UEFI , Grub2 UEFI SecureBoot , Grub2 UEFI HTTP , Grub2 UEFI HTTPS , Grub2 UEFI HTTPS SecureBoot , iPXE Embedded , iPXE UEFI HTTP , iPXE Chain BIOS , iPXE Chain UEFI --query-organization VALUE - Organization name to search by --query-organization-id VALUE - Organization ID to search by --query-organization-label VALUE - Organization label to search by --realm VALUE - Name to search by --realm-id NUMBER - Realm ID --root-password VALUE - Root password --subnet VALUE - Subnet name --subnet-id NUMBER - Subnet ID --subnet6 VALUE - Subnet IPv6 name --subnet6-id NUMBER - Subnet IPv6 ID -h , --help - Print help Following parameters accept format defined by its schema (bold are required; <> contains acceptable type; [] contains acceptable value): --group-parameters-attributes " name =<string>, value =<string>,parameter_type=[string|boolean|integer|real|array|hash|yaml|json],hidden_value=[true|false|1|0], ... " 3.39.3. hostgroup delete Delete a host group Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Hostgroup name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --title VALUE - Hostgroup title -h , --help - Print help 3.39.4. hostgroup delete-parameter Delete parameter for a hostgroup Usage Options --hostgroup VALUE - Hostgroup name --hostgroup-id NUMBER --hostgroup-title VALUE - Hostgroup title --name VALUE - Parameter name -h , --help - Print help 3.39.5. hostgroup info Show a host group Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Hostgroup name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --show-hidden-parameters BOOLEAN Display hidden parameter values --title VALUE - Hostgroup title -h , --help - Print help Table 3.113. 
Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Title x x x Model x x Description x x Parent x x Compute profile x x Compute resource x x Network/subnet ipv4 x x Network/subnet ipv6 x x Network/realm x x Network/domain x x Operating system/architecture x x Operating system/operating system x x Operating system/medium x x Operating system/partition table x x Operating system/pxe loader x x Parameters/ x x Locations/ x x Organizations/ x x Openscap proxy x x Content view/id x x Content view/name x x Lifecycle environment/id x x Lifecycle environment/name x x Content source/id x x Content source/name x x Kickstart repository/id x x Kickstart repository/name x x
3.39.6. hostgroup list List all host groups Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Scope by locations --location-title VALUE - Set the current location context for the request --order VALUE - Sort and order by a searchable field, e.g. <field> DESC --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Scope by organizations --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --search VALUE - Filter results -h , --help - Print help
Table 3.114. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Title x x x Operating system x x Model x x
Search / Order fields ansible_role - string architecture - string host - string id - integer label - string location - string location_id - integer medium - string name - string organization - string organization_id - integer os - string os_description - string os_id - integer os_major - string os_minor - string os_title - string oval_policy_id - string params - string template - string title - string
3.39.7. hostgroup rebuild-config Rebuild orchestration config Usage Options --children-hosts BOOLEAN - Operate on child hostgroup hosts --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Hostgroup name --only LIST - Limit rebuild steps, valid steps are DHCP, DNS, TFTP, Content_Host_Status, Refresh_Content_Host_Status --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --title VALUE - Hostgroup title -h , --help - Print help
3.39.8. hostgroup set-parameter Create or update parameter for a hostgroup Usage Options --hidden-value BOOLEAN - Should the value be hidden --hostgroup VALUE - Hostgroup name --hostgroup-id NUMBER --hostgroup-title VALUE - Hostgroup title --name VALUE - Parameter name --parameter-type ENUM - Type of the parameter Possible value(s): string , boolean , integer , real , array , hash , yaml , json Default: "string" --value VALUE - Parameter value -h , --help - Print help
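For example, to set a parameter on a host group (a minimal sketch; the host group name, parameter name, and value are illustrative):

$ hammer hostgroup set-parameter --hostgroup "web-servers" --name ntp_server --value ntp.example.com

3.39.9.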
hostgroup update Update a host group Usage Options --ansible-role-ids LIST - IDs of associated ansible roles --ansible-roles LIST --architecture VALUE - Architecture name --architecture-id NUMBER - Architecture ID --ask-root-password BOOLEAN --compute-profile VALUE - Compute profile name --compute-profile-id NUMBER - Compute profile ID --compute-resource VALUE - Compute resource name --compute-resource-id NUMBER - Compute resource ID --content-source VALUE - Content Source name --content-source-id NUMBER - Content source ID --content-view VALUE - Name to search by --content-view-id NUMBER - Content view ID --description VALUE - Host group description --domain VALUE - Domain name --domain-id NUMBER - Domain ID --group-parameters-attributes SCHEMA Array of parameters --id VALUE --kickstart-repository VALUE - Kickstart repository name --kickstart-repository-id NUMBER - Kickstart repository ID --lifecycle-environment VALUE - Name to search by --lifecycle-environment-id NUMBER - Lifecycle environment ID --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-ids LIST - REPLACE locations with given ids --location-title VALUE - Set the current location context for the request --location-titles LIST --locations LIST --medium VALUE - Medium name --medium-id NUMBER - Media ID --name VALUE - Name of the host group --new-name VALUE - Name of the host group --openscap-proxy-id NUMBER - ID of OpenSCAP Capsule --operatingsystem VALUE - Operating system title --operatingsystem-id NUMBER - Operating system ID --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST - REPLACE organizations with given ids. --organization-title VALUE - Set the current organization context for the request --organization-titles LIST --organizations LIST --parent VALUE - Name of parent hostgroup --parent-id NUMBER - Parent ID of the host group --parent-title VALUE - Title of parent hostgroup --partition-table VALUE - Partition table name --partition-table-id NUMBER - Partition table ID --puppet-ca-proxy-id NUMBER - Puppet CA Capsule ID --puppet-proxy-id NUMBER - Puppet Capsule ID --pxe-loader ENUM - DHCP filename option (Grub2/PXELinux by default) Possible value(s): None , PXELinux BIOS , PXELinux UEFI , Grub UEFI , Grub2 BIOS , Grub2 ELF , Grub2 UEFI , Grub2 UEFI SecureBoot , Grub2 UEFI HTTP , Grub2 UEFI HTTPS , Grub2 UEFI HTTPS SecureBoot , iPXE Embedded , iPXE UEFI HTTP , iPXE Chain BIOS , iPXE Chain UEFI --query-organization VALUE - Organization name to search by --query-organization-id VALUE - Organization ID to search by --query-organization-label VALUE - Organization label to search by --realm VALUE - Name to search by --realm-id NUMBER - Realm ID --root-password VALUE - Root password --subnet VALUE - Subnet name --subnet-id NUMBER - Subnet ID --subnet6 VALUE - Subnet IPv6 name --subnet6-id NUMBER - Subnet IPv6 ID --title VALUE - Hostgroup title -h , --help - Print help Following parameters accept format defined by its schema (bold are required; <> contains acceptable type; [] contains acceptable value): --group-parameters-attributes "name=<string>,value=<string>,parameter_type=[string|boolean|integer|real|array|hash|yaml|json],hidden_value=[true|false|1|0], ... " 3.40. http-proxy Manipulate http proxies Usage Options -h , --help - Print help 3.40.1. 
http-proxy create Create an HTTP Proxy Usage Options --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-ids LIST - REPLACE locations with given ids --location-title VALUE - Set the current location context for the request --location-titles LIST --locations LIST --name VALUE - The HTTP Proxy name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST - REPLACE organizations with given ids. --organization-title VALUE - Set the current organization context for the request --organization-titles LIST --organizations LIST --password VALUE - Password used to authenticate with the HTTP Proxy --url VALUE - URL of the HTTP Proxy --username VALUE - Username used to authenticate with the HTTP Proxy -h , --help - Print help
3.40.2. http-proxy delete Delete an HTTP Proxy Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help
3.40.3. http-proxy info Show an HTTP Proxy Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE - Identifier of the HTTP Proxy --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help
Table 3.115. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Username x x Url x x Locations/ x x Organizations/ x x
3.40.4. http-proxy list List of HTTP Proxies Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Scope by locations --location-title VALUE - Set the current location context for the request --order VALUE - Sort and order by a searchable field, e.g. <field> DESC --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Scope by organizations --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --search VALUE - Filter results -h , --help - Print help
Table 3.116. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x
Search / Order fields id - integer location - string location_id - integer name - string organization - string organization_id - integer url - string
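For example, to register an HTTP proxy and confirm it was created (a minimal sketch; the proxy name and URL are illustrative):

$ hammer http-proxy create --name "corp-proxy" --url "http://proxy.example.com:3128"
$ hammer http-proxy info --name "corp-proxy"

3.40.5.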
http-proxy update Update an HTTP Proxy Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-ids LIST - REPLACE locations with given ids --location-title VALUE - Set the current location context for the request --location-titles LIST --locations LIST --name VALUE - The HTTP Proxy name --new-name VALUE - The HTTP Proxy name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST - REPLACE organizations with given ids. --organization-title VALUE - Set the current organization context for the request --organization-titles LIST --organizations LIST --password VALUE - Password used to authenticate with the HTTP Proxy --url VALUE - URL of the HTTP Proxy --username VALUE - Username used to authenticate with the HTTP Proxy -h , --help - Print help
3.41. import-templates Import templates from a git repo or a directory on the server Usage Options --associate ENUM - Associate to OS`s, Locations & Organizations. Options are: always, new or never. Possible value(s): always , new , never --branch VALUE - Branch in Git repo. --dirname VALUE - The directory within Git repo containing the templates --filter VALUE - Import templates with names matching this regex (case-insensitive; snippets are not filtered). --force BOOLEAN - Update templates that are locked --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-ids LIST - REPLACE locations with given ids --location-title VALUE - Set the current location context for the request --location-titles LIST --locations LIST --lock ENUM - Lock imported templates Possible value(s): lock , keep_lock_new , keep , unlock , true , false , 0 , 1 --negate BOOLEAN - Negate the prefix (for purging). --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST - REPLACE organizations with given ids. --organization-title VALUE - Set the current organization context for the request --organization-titles LIST --organizations LIST --prefix VALUE - The string all imported templates should begin with. --repo VALUE - Override the default repo from settings. --verbose BOOLEAN - Show template diff in response -h , --help - Print help
3.42. job-invocation Manage job invocations Usage Options -h , --help - Print help
3.42.1. job-invocation cancel Cancel the job Usage Options --force BOOLEAN --id VALUE --location-id NUMBER - Set the current location context for the request --organization-id NUMBER - Set the current organization context for the request -h , --help - Print help
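For example, to import templates from a Git repository, prefixing their names and associating only newly imported ones (a minimal sketch; the repository URL, branch, and prefix are illustrative):

$ hammer import-templates --repo https://github.com/example/my-templates.git --branch main --prefix "Custom: " --associate new

3.42.2.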
job-invocation create Create a job invocation Usage Options --async - Do not wait for the task --bookmark VALUE - Name to search by --bookmark-id NUMBER --concurrency-level NUMBER - Run at most N tasks at a time --cron-line VALUE - Create a recurring execution. Cron line format a b c d e , where: a is minute (range: 0-59), b is hour (range: 0-23), c is day of month (range: 1-31), d is month (range: 1-12), e is day of week (range: 0-6) --description-format VALUE - Override the description format from the template for this invocation only --dynamic - Dynamic search queries are evaluated at run time --effective-user VALUE - What user should be used to run the script (using sudo-like mechanisms). Defaults to a template parameter or global setting. --effective-user-password VALUE - Set password for effective user (using sudo-like mechanisms) --end-time DATETIME - Perform no more executions after this time, used with --cron-line --execution-timeout-interval NUMBER - Override the timeout interval from the template for this invocation only --feature VALUE - Remote execution feature label that should be triggered, job template assigned to this feature will be used --input-files KEY_VALUE_LIST - Read input values from files Comma-separated list of key=file, where file is a path to a text file to be read --inputs KEY_VALUE_LIST - Specify inputs from command line --job-template VALUE - Name to search by --job-template-id VALUE - The job template to use, parameter is required unless feature was specified --key-passphrase VALUE - Set SSH key passphrase --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --max-iteration NUMBER - Repeat a maximum of N times --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --password VALUE - Set SSH password --purpose VALUE - Designation of a special purpose --randomized-ordering BOOLEAN - Execute the jobs on hosts in randomized order --search-query VALUE --ssh-user VALUE - Set SSH user --start-at DATETIME - Schedule the execution for a later time --start-before DATETIME - Execution should be cancelled if it cannot be started before --start-at --tags VALUE - A comma separated list of tags to use for Ansible run --tags-flag ENUM - Include/Exclude tags for Ansible run Possible value(s): include , exclude --time-to-pickup NUMBER - Override the global time to pickup interval for this invocation only -h , --help - Print help
3.42.3. job-invocation info Show job invocation Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --location-id NUMBER - Set the current location context for the request --organization-id NUMBER - Set the current organization context for the request --show-host-status - Show job status for the hosts --show-inputs - Show the complete input of the job -h , --help - Print help
Table 3.117. Predefined field sets FIELDS ALL DEFAULT Id x x Description x x Status x x Success x x Failed x x Pending x x Missing x x Total x x Start x x Randomized ordering x x Inputs x x Job category x x Mode x x Cron line x x Recurring logic id x x Time to pickup x x Hosts x x
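Putting these options together, for example, to run a shell command on all matching hosts (a minimal sketch; the template name assumes the default Script provider template shipped with recent versions, and the search query is illustrative):

$ hammer job-invocation create --job-template "Run Command - Script Default" --inputs command="uptime" --search-query "os = RedHat"

3.42.4.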
job-invocation list List job invocations Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --order VALUE - Sort and order by a searchable field, e.g. <field> DESC --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --search VALUE - Filter results -h , --help - Print help Table 3.118. Predefined field sets FIELDS ALL DEFAULT Id x x Description x x Status x x Success x x Failed x x Pending x x Total x x Start x x Randomized ordering x x Inputs x x 3.42.5. job-invocation output View the output for a host Usage Options --async - Do not wait for job to complete, shows current output only --host VALUE - Host name --host-id VALUE --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.42.6. job-invocation rerun Rerun the job Usage Options --failed-only BOOLEAN --id VALUE --location-id NUMBER - Set the current location context for the request --organization-id NUMBER - Set the current organization context for the request -h , --help - Print help 3.43. job-template Manage job templates Usage Options -h , --help - Print help 3.43.1. job-template create Create a job template Usage Options --ansible-callback-enabled BOOLEAN Enable the callback plugin for this template --audit-comment VALUE --current-user BOOLEAN - Whether the current user login should be used as the effective user --description VALUE --description-format VALUE - This template is used to generate the description. Input values can be used using the syntax %{package}. You may also include the job category and template name using %{job_category} and %{template_name}. --file FILE - Path to a file that contains the template --job-category VALUE - Job category --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-ids LIST - REPLACE locations with given ids --location-title VALUE - Set the current location context for the request --location-titles LIST --locations LIST --locked BOOLEAN - Whether or not the template is locked for editing --name VALUE - Template name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST - REPLACE organizations with given ids. --organization-title VALUE - Set the current organization context for the request --organization-titles LIST --organizations LIST --overridable BOOLEAN - Whether it should be allowed to override the effective user from the invocation form. 
--provider-type ENUM - Provider type Possible value(s): SSH , script , Ansible --snippet BOOLEAN --value VALUE - What user should be used to run the script (using sudo-like mechanisms) -h , --help - Print help 3.43.2. job-template delete Delete a job template Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.43.3. job-template dump View job template content Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.119. Predefined field sets FIELDS 3.43.4. job-template export Export a template including all metadata Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.120. Predefined field sets FIELDS 3.43.5. job-template import Import a job template from ERB Usage Options --file FILE - Path to a file that contains the template - must include ERB metadata --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --overwrite BOOLEAN - Overwrite template if it already exists -h , --help - Print help 3.43.6. job-template info Show job template details Usage Options --fields LIST - Show specified fields or predefined field sets only. 
(See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.121. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Job category x x Provider x x Type x x Ansible callback enabled x x Description x x Inputs x x Locations/ x x Organizations/ x x 3.43.7. job-template list List job templates Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Scope by locations --location-title VALUE - Set the current location context for the request --order VALUE - Sort and order by a searchable field, e.g. <field> DESC --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Scope by organizations --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --search VALUE - Filter results -h , --help - Print help Table 3.122. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Job category x x Provider x x Type x x 3.43.8. job-template update Update a job template Usage Options --ansible-callback-enabled BOOLEAN Enable the callback plugin for this template --audit-comment VALUE --current-user BOOLEAN - Whether the current user login should be used as the effective user --description VALUE --description-format VALUE - This template is used to generate the description. Input values can be used using the syntax %{package}. You may also include the job category and template name using %{job_category} and %{template_name}. --file FILE - Path to a file that contains the template --id VALUE --job-category VALUE - Job category --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-ids LIST - REPLACE locations with given ids --location-title VALUE - Set the current location context for the request --location-titles LIST --locations LIST --locked BOOLEAN - Whether or not the template is locked for editing --name VALUE - Template name --new-name VALUE - Template name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST - REPLACE organizations with given ids. --organization-title VALUE - Set the current organization context for the request --organization-titles LIST --organizations LIST --overridable BOOLEAN - Whether it should be allowed to override the effective user from the invocation form. --provider-type ENUM - Provider type Possible value(s): SSH , script , Ansible --snippet BOOLEAN --value VALUE - What user should be used to run the script (using sudo-like mechanisms) -h , --help - Print help 3.44. lifecycle-environment Manipulate lifecycle_environments on the server Usage Options -h , --help - Print help 3.44.1. 
lifecycle-environment create Create an environment Usage Options --description VALUE - Description of the environment --label VALUE - Label of the environment --name VALUE - Name of the environment --organization VALUE - Organization name to search by --organization-id NUMBER - Name of organization --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --prior VALUE - Name of the prior environment --prior-id NUMBER - ID of an environment that is prior to the new environment in the chain. It has to be either the ID of Library or the ID of an environment at the end of a chain. --registry-name-pattern VALUE - Pattern for container image names --registry-unauthenticated-pull BOOLEAN - Allow unauthenticated pull of container images -h , --help - Print help
3.44.2. lifecycle-environment delete Destroy an environment Usage Options --id NUMBER - ID of the environment --name VALUE - Lifecycle environment name to search by --organization VALUE - Organization name to search by --organization-id NUMBER - Organization identifier --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title -h , --help - Print help
3.44.3. lifecycle-environment info Show an environment Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id NUMBER - ID of the environment --name VALUE - Lifecycle environment name to search by --organization VALUE - Organization name to search by --organization-id NUMBER - ID of the organization --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title -h , --help - Print help
Table 3.123. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Label x x Description x x Organization x x Library x x Prior lifecycle environment x x Unauthenticated pull x x Registry name pattern x x
3.44.4. lifecycle-environment list List environments in an organization Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --full-result BOOLEAN - Whether or not to show all results --label VALUE - Filter only environments containing this label --library BOOLEAN - Set true if you want to see only library environments --name VALUE - Filter only environments containing this name --order VALUE - Sort field and order, eg. id DESC --organization VALUE - Organization name to search by --organization-id NUMBER - Organization identifier --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --page NUMBER - Page number, starting at 1 --per-page NUMBER - Number of results per page to return --search VALUE - Search string -h , --help - Print help
Table 3.124. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Prior x x
Search / Order fields id - integer label - string name - string organization_id - integer
3.44.5. lifecycle-environment paths List environment paths Usage Options --content-source-id NUMBER - Show whether each lifecycle environment is associated with the given Capsule id. --fields LIST - Show specified fields or predefined field sets only. (See below) --organization VALUE - Organization name to search by --organization-id NUMBER - Organization identifier --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --permission-type VALUE - The associated permission type. One of (readable | promotable) Default: readable -h , --help - Print help
Table 3.125. Predefined field sets FIELDS ALL DEFAULT Lifecycle path x x
3.44.6. lifecycle-environment update Update an environment Usage Options --async BOOLEAN - Do not wait for the update action to finish. Default: true --description VALUE - Description of the environment --id NUMBER - ID of the environment --name VALUE - Lifecycle environment name to search by --new-name VALUE - New name to be given to the environment --organization VALUE - Organization name to search by --organization-id NUMBER - Name of the organization --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --registry-name-pattern VALUE - Pattern for container image names --registry-unauthenticated-pull BOOLEAN - Allow unauthenticated pull of container images -h , --help - Print help
3.45. location Manipulate locations Usage Options -h , --help - Print help
3.45.1. location add-compute-resource Associate a compute resource Usage Options --compute-resource VALUE - Compute resource name --compute-resource-id NUMBER --id VALUE --name VALUE - Set the current location context for the request --title VALUE - Set the current location context for the request -h , --help - Print help
3.45.2. location add-domain Associate a domain Usage Options --domain VALUE - Domain name --domain-id NUMBER - Numerical ID or domain name --id VALUE --name VALUE - Set the current location context for the request --title VALUE - Set the current location context for the request -h , --help - Print help
3.45.3. location add-hostgroup Associate a hostgroup Usage Options --hostgroup VALUE - Hostgroup name --hostgroup-id NUMBER --hostgroup-title VALUE - Hostgroup title --id VALUE --name VALUE - Set the current location context for the request --title VALUE - Set the current location context for the request -h , --help - Print help
3.45.4. location add-medium Associate a medium Usage Options --id VALUE --medium VALUE - Medium name --medium-id NUMBER --name VALUE - Set the current location context for the request --title VALUE - Set the current location context for the request -h , --help - Print help
3.45.5. location add-organization Associate an organization Usage Options --id VALUE --name VALUE - Set the current location context for the request --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Organization ID --organization-title VALUE - Set the current organization context for the request --title VALUE - Set the current location context for the request -h , --help - Print help
3.45.6. location add-provisioning-template Associate provisioning templates Usage Options --id VALUE --name VALUE - Set the current location context for the request --provisioning-template VALUE - Name to search by --provisioning-template-id NUMBER --provisioning-template-ids LIST - List of provisioning template ids --provisioning-template-search VALUE - Provisioning template name regex to search, all matching templates will be associated --provisioning-templates LIST - List of provisioning template names --title VALUE - Set the current location context for the request -h , --help - Print help
3.45.7. location add-smart-proxy Associate a smart proxy Usage Options --id VALUE --name VALUE - Set the current location context for the request --smart-proxy VALUE - Name to search by --smart-proxy-id NUMBER --title VALUE - Set the current location context for the request -h , --help - Print help
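For example, to make a domain and a Capsule available in a location (a minimal sketch; the location, domain, and Capsule names are illustrative):

$ hammer location add-domain --name "Boston" --domain example.com
$ hammer location add-smart-proxy --name "Boston" --smart-proxy capsule.example.com

3.45.8.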
location add-subnet Associate a subnet Usage Options --id VALUE --name VALUE - Set the current location context for the request --subnet VALUE - Subnet name --subnet-id NUMBER --title VALUE - Set the current location context for the request -h , --help - Print help
3.45.9. location add-user Associate a user Usage Options --id VALUE --name VALUE - Set the current location context for the request --title VALUE - Set the current location context for the request --user VALUE - User`s login to search by --user-id NUMBER -h , --help - Print help
3.45.10. location create Create a location Usage Options --compute-resource-ids LIST - Compute resource IDs --compute-resources LIST --description VALUE --domain-ids LIST - Domain IDs --domains LIST --environment-ids LIST - Environment IDs --hostgroup-ids LIST - Host group IDs --hostgroup-titles LIST --hostgroups LIST --ignore-types LIST - List of resource types that will be automatically associated --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --media LIST --medium-ids LIST - Medium IDs --name VALUE --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST - Associated organization IDs --organization-title VALUE - Set the current organization context for the request --organization-titles LIST --organizations LIST --parent-id NUMBER - Parent ID --partition-table-ids LIST - Partition template IDs --partition-tables LIST --provisioning-template-ids LIST - Provisioning template IDs --provisioning-templates LIST --realm-ids LIST - Realm IDs --realms LIST --smart-proxies LIST --smart-proxy-ids LIST - Capsule IDs --subnet-ids LIST - Subnet IDs --subnets LIST --user-ids LIST - User IDs --users LIST -h , --help - Print help
3.45.11. location delete Delete a location Usage Options --id VALUE - Location numeric id to search by --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Set the current organization context for the request --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --title VALUE - Set the current organization context for the request -h , --help - Print help
3.45.12. location delete-parameter Delete parameter for a location Usage Options --location VALUE - Set the current location context for the request --location-id NUMBER --location-title VALUE - Set the current location context for the request --name VALUE - Parameter name -h , --help - Print help
3.45.13. location info Show a location Usage Options --fields LIST - Show specified fields or predefined field sets only.
(See below) --id VALUE - Location numeric id to search by --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Set the current organization context for the request --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --show-hidden-parameters BOOLEAN Display hidden parameter values --title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.126. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Title x x x Name x x x Description x x Parent x x Users/ x x Smart proxies/ x x Subnets/ x x Compute resources/ x x Installation media/ x x Templates/ x x Partition tables/ x x Domains/ x x Realms/ x x Hostgroups/ x x Parameters/ x x Organizations/ x x Created at x x Updated at x x 3.45.14. location list List all locations Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --order VALUE - Sort and order by a searchable field, e.g. <field> DESC --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --search VALUE - Filter results -h , --help - Print help Table 3.127. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Title x x x Name x x x Description x x Search / Order fields description - text id - integer location_id - integer name - string title - string 3.45.15. location remove-compute-resource Disassociate a compute resource Usage Options --compute-resource VALUE - Compute resource name --compute-resource-id NUMBER --id VALUE --name VALUE - Set the current location context for the request --title VALUE - Set the current location context for the request -h , --help - Print help 3.45.16. location remove-domain Disassociate a domain Usage Options --domain VALUE - Domain name --domain-id NUMBER - Numerical ID or domain name --id VALUE --name VALUE - Set the current location context for the request --title VALUE - Set the current location context for the request -h , --help - Print help 3.45.17. location remove-hostgroup Disassociate a hostgroup Usage Options --hostgroup VALUE - Hostgroup name --hostgroup-id NUMBER --hostgroup-title VALUE - Hostgroup title --id VALUE --name VALUE - Set the current location context for the request --title VALUE - Set the current location context for the request -h , --help - Print help 3.45.18. location remove-medium Disassociate a medium Usage Options --id VALUE --medium VALUE - Medium name --medium-id NUMBER --name VALUE - Set the current location context for the request --title VALUE - Set the current location context for the request -h , --help - Print help 3.45.19. 
location remove-organization Disassociate an organization Usage Options --id VALUE --name VALUE - Set the current location context for the request --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Organization ID --organization-title VALUE - Set the current organization context for the request --title VALUE - Set the current location context for the request -h , --help - Print help 3.45.20. location remove-provisioning-template Disassociate provisioning templates Usage Options --id VALUE --name VALUE - Set the current location context for the request --provisioning-template VALUE - Name to search by --provisioning-template-id NUMBER --provisioning-template-ids LIST - List of provisioning template ids --provisioning-template-search VALUE - Provisioning template name regex to search, all matching templates will be disassociated --provisioning-templates LIST - List of provisioning template names --title VALUE - Set the current location context for the request -h , --help - Print help 3.45.21. location remove-smart-proxy Disassociate a smart proxy Usage Options --id VALUE --name VALUE - Set the current location context for the request --smart-proxy VALUE - Name to search by --smart-proxy-id NUMBER --title VALUE - Set the current location context for the request -h , --help - Print help 3.45.22. location remove-subnet Disassociate a subnet Usage Options --id VALUE --name VALUE - Set the current location context for the request --subnet VALUE - Subnet name --subnet-id NUMBER --title VALUE - Set the current location context for the request -h , --help - Print help 3.45.23. location remove-user Disassociate a user Usage Options --id VALUE --name VALUE - Set the current location context for the request --title VALUE - Set the current location context for the request --user VALUE - User's login to search by --user-id NUMBER -h , --help - Print help 3.45.24. location set-parameter Create or update parameter for a location Usage Options --hidden-value BOOLEAN - Should the value be hidden --location VALUE - Set the current location context for the request --location-id NUMBER --location-title VALUE - Set the current location context for the request --name VALUE - Parameter name --parameter-type ENUM - Type of the parameter Possible value(s): string , boolean , integer , real , array , hash , yaml , json Default: "string" --value VALUE - Parameter value -h , --help - Print help 3.45.25.
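For example, to set a boolean parameter on a location with hammer location set-parameter (the parameter name and value are example values):

hammer location set-parameter --location "Boston" --name "use_local_mirror" --parameter-type boolean --value true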
location update Update a location Usage Options --compute-resource-ids LIST - Compute resource IDs --compute-resources LIST --description VALUE --domain-ids LIST - Domain IDs --domains LIST --environment-ids LIST - Environment IDs --hostgroup-ids LIST - Host group IDs --hostgroup-titles LIST --hostgroups LIST --id VALUE - Location numeric id to search by --ignore-types LIST - List of resources types that will be automatically associated --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --media LIST --medium-ids LIST - Medium IDs --name VALUE --new-name VALUE --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST - Associated organization IDs --organization-title VALUE - Set the current organization context for the request --organization-titles LIST --organizations LIST --parent-id NUMBER - Parent ID --partition-table-ids LIST - Partition template IDs --partition-tables LIST --provisioning-template-ids LIST Provisioning template IDs --provisioning-templates LIST --realm-ids LIST - Realm IDs --realms LIST --smart-proxies LIST --smart-proxy-ids LIST - Capsule IDs --subnet-ids LIST - Subnet IDs --subnets LIST --title VALUE - Set the current location context for the request --user-ids LIST - User IDs --users LIST -h , --help - Print help 3.46. mail-notification Manage mail notifications Usage Options -h , --help - Print help 3.46.1. mail-notification info Show an email notification Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE - Numerical ID or email notification name --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.128. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Description x x Subscription type x x 3.46.2. mail-notification list List of email notifications Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --order VALUE - Sort and order by a searchable field, e.g. <field> DESC --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --search VALUE - Filter results -h , --help - Print help Table 3.129. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Search / Order fields description - text id - integer name - string user - string 3.47. 
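For example, to filter the list of email notifications by name with hammer mail-notification list (the search term is illustrative):

hammer mail-notification list --search "name ~ sync"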
medium Manipulate installation media Usage Options -h , --help - Print help 3.47.1. medium add-operatingsystem Associate an operating system Usage Options --id VALUE --name VALUE - Medium name --operatingsystem VALUE - Operating system title --operatingsystem-id NUMBER -h , --help - Print help 3.47.2. medium create Create a medium Usage Options --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-ids LIST - REPLACE locations with given ids --location-title VALUE - Set the current location context for the request --location-titles LIST --locations LIST --name VALUE - Name of media --operatingsystem-ids LIST --operatingsystems LIST --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST - REPLACE organizations with given ids. --organization-title VALUE - Set the current organization context for the request --organization-titles LIST --organizations LIST --os-family VALUE - Operating system family, available values: AIX, Altlinux, Archlinux, Coreos, Debian, Fcos, Freebsd, Gentoo, Junos, NXOS, Rancheros, Redhat, Rhcos, Solaris, Suse, VRP, Windows, Xenserver --path VALUE - The path to the medium, can be a URL or a valid NFS server (exclusive of the architecture). For example mirror.centos.org/centos/USDversion/os/USDarch where USDarch will be substituted for the host`s actual OS architecture and USDversion, USDmajor and USDminor will be substituted for the version of the operating system. Solaris and Debian media may also use USDrelease. * -h , --help - Print help 3.47.3. medium delete Delete a medium Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Medium name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.47.4. medium info Show a medium Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Medium name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.130. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Path x x Os family x x Operating systems/ x x Locations/ x x Organizations/ x x Created at x x Updated at x x 3.47.5. medium list List all installation media Usage Options --fields LIST - Show specified fields or predefined field sets only. 
(See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Scope by locations --location-title VALUE - Set the current location context for the request --operatingsystem VALUE - Operating system title --operatingsystem-id NUMBER - ID of operating system --order VALUE - Sort and order by a searchable field, e.g. <field> DESC --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Scope by organizations --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --search VALUE - Filter results -h , --help - Print help Table 3.131. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Path x x Search / Order fields family - string id - integer location - string location_id - integer name - string organization - string organization_id - integer path - string 3.47.6. medium remove-operatingsystem Disassociate an operating system Usage Options --id VALUE --name VALUE - Medium name --operatingsystem VALUE - Operating system title --operatingsystem-id NUMBER -h , --help - Print help 3.47.7. medium update Update a medium Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-ids LIST - REPLACE locations with given ids --location-title VALUE - Set the current location context for the request --location-titles LIST --locations LIST --name VALUE - Name of media --new-name VALUE - Name of media --operatingsystem-ids LIST --operatingsystems LIST --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST - REPLACE organizations with given ids. --organization-title VALUE - Set the current organization context for the request --organization-titles LIST --organizations LIST --os-family VALUE - Operating system family, available values: AIX, Altlinux, Archlinux, Coreos, Debian, Fcos, Freebsd, Gentoo, Junos, NXOS, Rancheros, Redhat, Rhcos, Solaris, Suse, VRP, Windows, Xenserver --path VALUE - The path to the medium, can be a URL or a valid NFS server (exclusive of the architecture). For example mirror.centos.org/centos/USDversion/os/USDarch where USDarch will be substituted for the host`s actual OS architecture and USDversion, USDmajor and USDminor will be substituted for the version of the operating system. Solaris and Debian media may also use USDrelease. * -h , --help - Print help 3.48. model Manipulate hardware models Usage Options -h , --help - Print help 3.48.1. model create Create a hardware model Usage Options --hardware-model VALUE --info VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --vendor-class VALUE -h , --help - Print help 3.48.2. 
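For example, to register a hardware model with hammer model create (the model and vendor names are example values):

hammer model create --name "PowerEdge R740" --hardware-model "R740" --vendor-class "Dell"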
model delete Delete a hardware model Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Model name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.48.3. model info Show a hardware model Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Model name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.132. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Vendor class x x Hw model x x Info x x Created at x x Updated at x x 3.48.4. model list List all hardware models Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --order VALUE - Sort and order by a searchable field, e.g. <field> DESC --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --search VALUE - Filter results -h , --help - Print help Table 3.133. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Vendor class x x Hw model x x Search / Order fields hardware_model - string id - integer info - text name - string vendor_class - string 3.48.5. model update Update a hardware model Usage Options --hardware-model VALUE --id VALUE --info VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE --new-name VALUE --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --vendor-class VALUE -h , --help - Print help 3.49. module-stream View Module Streams Usage Options -h , --help - Print help 3.49.1. module-stream info Show a module stream Usage Options --fields LIST - Show specified fields or predefined field sets only. 
(See below) --id VALUE - A module stream identifier --name VALUE - Module stream name to search by --organization VALUE - Organization name to search by --organization-id NUMBER - Organization identifier --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --product VALUE - Product name to search by --product-id NUMBER - Product numeric identifier --repository VALUE - Repository name to search by --repository-id NUMBER - Repository identifier -h , --help - Print help Table 3.134. Predefined field sets FIELDS ALL DEFAULT THIN Id x x Module stream name x x x Stream x x Uuid x x Version x x Architecture x x Context x x Repositories/id x x Repositories/name x x Repositories/label x x Artifacts/id x x Artifacts/name x x Profiles/id x x Profiles/name x x Profiles/rpms/id x x Profiles/rpms/name x x 3.49.2. module-stream list List module streams Usage Options --content-view-filter VALUE - Name to search by --content-view-filter-id NUMBER - Content view filter identifier --content-view-filter-rule VALUE - Name to search by --content-view-filter-rule-id NUMBER Content view filter rule identifier --content-view-version VALUE - Content view version number --content-view-version-id NUMBER - Content view version identifier --fields LIST - Show specified fields or predefined field sets only. (See below) --full-result BOOLEAN - Whether or not to show all results --host-ids LIST - List of host id to list available module streams for --hosts LIST --ids LIST - Ids to filter content by --include-filter-ids BOOLEAN - Includes associated content view filter ids in response --lifecycle-environment-id NUMBER - Environment identifier --name-stream-only BOOLEAN - Return name and stream information only) --order VALUE - Sort field and order, eg. id DESC --organization VALUE - Organization name to search by --organization-id NUMBER - Organization identifier --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --page NUMBER - Page number, starting at 1 --per-page NUMBER - Number of results per page to return --product VALUE - Product name to search by --product-id NUMBER - Product numeric identifier --repository VALUE - Repository name to search by --repository-id NUMBER - Repository identifier --search VALUE - Search string -h , --help - Print help Table 3.135. Predefined field sets FIELDS ALL DEFAULT THIN Id x x Module stream name x x x Stream x x Uuid x x Version x x Architecture x x Context x x 3.50. organization Manipulate organizations Usage Options -h , --help - Print help 3.50.1. organization add-compute-resource Associate a compute resource Usage Options --compute-resource VALUE - Compute resource name --compute-resource-id NUMBER --id VALUE - Organization ID --name VALUE - Set the current organization context for the request --title VALUE - Set the current organization context for the request -h , --help - Print help 3.50.2. organization add-domain Associate a domain Usage Options --domain VALUE - Domain name --domain-id NUMBER - Numerical ID or domain name --id VALUE - Organization ID --name VALUE - Set the current organization context for the request --title VALUE - Set the current organization context for the request -h , --help - Print help 3.50.3. 
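For example, to associate a domain with an organization using hammer organization add-domain (the organization and domain names are example values):

hammer organization add-domain --name "ACME" --domain "example.com"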
organization add-hostgroup Associate a hostgroup Usage Options --hostgroup VALUE - Hostgroup name --hostgroup-id NUMBER --hostgroup-title VALUE - Hostgroup title --id VALUE - Organization ID --name VALUE - Set the current organization context for the request --title VALUE - Set the current organization context for the request -h , --help - Print help 3.50.4. organization add-location Associate a location Usage Options --id VALUE - Organization ID --location VALUE - Set the current location context for the request --location-id NUMBER --location-title VALUE - Set the current location context for the request --name VALUE - Set the current organization context for the request --title VALUE - Set the current organization context for the request -h , --help - Print help 3.50.5. organization add-medium Associate a medium Usage Options --id VALUE - Organization ID --medium VALUE - Medium name --medium-id NUMBER --name VALUE - Set the current organization context for the request --title VALUE - Set the current organization context for the request -h , --help - Print help 3.50.6. organization add-provisioning-template Associate provisioning templates Usage Options --id VALUE - Organization ID --name VALUE - Set the current organization context for the request --provisioning-template VALUE - Name to search by --provisioning-template-id NUMBER --provisioning-template-ids LIST - List of provisioning template ids --provisioning-template-search VALUE - Provisioning template name regex to search, all matching templates will be associated --provisioning-templates LIST - List of provisioning template names --title VALUE - Set the current organization context for the request -h , --help - Print help 3.50.7. organization add-smart-proxy Associate a smart proxy Usage Options --id VALUE - Organization ID --name VALUE - Set the current organization context for the request --smart-proxy VALUE - Name to search by --smart-proxy-id NUMBER --title VALUE - Set the current organization context for the request -h , --help - Print help 3.50.8. organization add-subnet Associate a subnet Usage Options --id VALUE - Organization ID --name VALUE - Set the current organization context for the request --subnet VALUE - Subnet name --subnet-id NUMBER --title VALUE - Set the current organization context for the request -h , --help - Print help 3.50.9. organization add-user Associate a user Usage Options --id VALUE - Organization ID --name VALUE - Set the current organization context for the request --title VALUE - Set the current organization context for the request --user VALUE - User's login to search by --user-id NUMBER -h , --help - Print help 3.50.10. organization configure-cdn Update the CDN configuration Usage Options --custom-cdn-auth-enabled BOOLEAN - If product certificates should be used to authenticate to a custom CDN. --id VALUE - ID of the Organization --label VALUE - Organization label to search by --name VALUE - Organization name to search by --password VALUE - Password for authentication. Relevant only for upstream_server type. --ssl-ca-credential-id NUMBER - Content Credential to use for SSL CA. Relevant only for upstream_server type. --title VALUE - Organization title --type VALUE - CDN configuration type. One of redhat_cdn, network_sync, export_sync, custom_cdn. --upstream-content-view-label VALUE - Upstream Content View Label, default: Default_Organization_View. Relevant only for upstream_server type. --upstream-lifecycle-environment-label VALUE - Upstream Lifecycle Environment, default: Library.
Relevant only for upstream_server type. --upstream-organization-label VALUE - Upstream organization to sync CDN content from. Relevant only for upstream_server type. --url VALUE - Upstream satellite server to sync CDN content from. Relevant only for upstream_server type. --username VALUE - Username for authentication. Relevant only for upstream_server type. -h , --help - Print help 3.50.11. organization create Create organization Usage Options --compute-resource-ids LIST - Compute resource IDs --compute-resources LIST --description VALUE --domain-ids LIST - Domain IDs --domains LIST --environment-ids LIST - Environment IDs --hostgroup-ids LIST - Host group IDs --hostgroup-titles LIST --hostgroups LIST --ignore-types LIST - List of resources types that will be automatically associated --label VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-ids LIST - Associated location IDs --location-title VALUE - Set the current location context for the request --location-titles LIST --locations LIST --media LIST --medium-ids LIST - Medium IDs --name VALUE --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-label VALUE - Organization label to search by --organization-title VALUE - Set the current organization context for the request --partition-table-ids LIST - Partition template IDs --partition-tables LIST --provisioning-template-ids LIST Provisioning template IDs --provisioning-templates LIST --realm-ids LIST - Realm IDs --realms LIST --simple-content-access BOOLEAN Whether to turn on Simple Content Access for the organization. --smart-proxies LIST --smart-proxy-ids LIST - Capsule IDs --subnet-ids LIST - Subnet IDs --subnets LIST --user-ids LIST - User IDs --users LIST -h , --help - Print help 3.50.12. organization delete Delete an organization Usage Options --async - Do not wait for the task --id VALUE --label VALUE - Organization label to search by --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Set the current organization context for the request --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-label VALUE - Organization label to search by --organization-title VALUE - Set the current organization context for the request --title VALUE - Set the current organization context for the request -h , --help - Print help 3.50.13. organization delete-parameter Delete parameter for an organization Usage Options --name VALUE - Parameter name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Organization ID --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.50.14. organization info Show organization Usage Options --fields LIST - Show specified fields or predefined field sets only. 
(See below) --id VALUE --label VALUE - Organization label to search by --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Set the current organization context for the request --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-label VALUE - Organization label to search by --organization-title VALUE - Set the current organization context for the request --title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.136. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Title x x x Name x x x Description x x Parent x x Users/ x x Smart proxies/ x x Subnets/ x x Compute resources/ x x Installation media/ x x Templates/ x x Partition tables/ x x Domains/ x x Realms/ x x Hostgroups/ x x Parameters/ x x Locations/ x x Created at x x Updated at x x Label x x x Description x x Simple content access x x Service levels x x Cdn configuration/type x x Cdn configuration/url x x Cdn configuration/upstream organization x x Cdn configuration/upstream lifecycle environment x x Cdn configuration/upstream content view x x Cdn configuration/username x x Cdn configuration/ssl ca credential id x x 3.50.15. organization list List all organizations Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --full-result BOOLEAN - Whether or not to show all results --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --order VALUE - Sort field and order, eg. id DESC --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-label VALUE - Organization label to search by --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page NUMBER - Number of results per page to return --search VALUE - Search string --sort-by VALUE - Field to sort the results on --sort-order VALUE - How to order the sorted results (e.g. ASC for ascending) -h , --help - Print help Table 3.137. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Title x x x Name x x x Description x x Label x x x Search / Order fields description - text id - integer label - string name - string organization_id - integer title - string 3.50.16. organization remove-compute-resource Disassociate a compute resource Usage Options --compute-resource VALUE - Compute resource name --compute-resource-id NUMBER --id VALUE - Organization ID --name VALUE - Set the current organization context for the request --title VALUE - Set the current organization context for the request -h , --help - Print help 3.50.17. organization remove-domain Disassociate a domain Usage Options --domain VALUE - Domain name --domain-id NUMBER - Numerical ID or domain name --id VALUE - Organization ID --name VALUE - Set the current organization context for the request --title VALUE - Set the current organization context for the request -h , --help - Print help 3.50.18. 
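For example, to list organizations and show only selected fields with hammer organization list (an illustrative invocation; the field names follow the predefined field sets shown above):

hammer organization list --fields Id,Name,Label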
organization remove-hostgroup Disassociate a hostgroup Usage Options --hostgroup VALUE - Hostgroup name --hostgroup-id NUMBER --hostgroup-title VALUE - Hostgroup title --id VALUE - Organization ID --name VALUE - Set the current organization context for the request --title VALUE - Set the current organization context for the request -h , --help - Print help 3.50.19. organization remove-location Disassociate a location Usage Options --id VALUE - Organization ID --location VALUE - Set the current location context for the request --location-id NUMBER --location-title VALUE - Set the current location context for the request --name VALUE - Set the current organization context for the request --title VALUE - Set the current organization context for the request -h , --help - Print help 3.50.20. organization remove-medium Disassociate a medium Usage Options --id VALUE - Organization ID --medium VALUE - Medium name --medium-id NUMBER --name VALUE - Set the current organization context for the request --title VALUE - Set the current organization context for the request -h , --help - Print help 3.50.21. organization remove-provisioning-template Disassociate provisioning templates Usage Options --id VALUE - Organization ID --name VALUE - Set the current organization context for the request --provisioning-template VALUE - Name to search by --provisioning-template-id NUMBER --provisioning-template-ids LIST - List of provisioning template ids --provisioning-template-search VALUE - Provisioning template name regex to search, all matching templates will be disassociated --provisioning-templates LIST - List of provisioning template names --title VALUE - Set the current organization context for the request -h , --help - Print help 3.50.22. organization remove-smart-proxy Disassociate a smart proxy Usage Options --id VALUE - Organization ID --name VALUE - Set the current organization context for the request --smart-proxy VALUE - Name to search by --smart-proxy-id NUMBER --title VALUE - Set the current organization context for the request -h , --help - Print help 3.50.23. organization remove-subnet Disassociate a subnet Usage Options --id VALUE - Organization ID --name VALUE - Set the current organization context for the request --subnet VALUE - Subnet name --subnet-id NUMBER --title VALUE - Set the current organization context for the request -h , --help - Print help 3.50.24. organization remove-user Disassociate a user Usage Options --id VALUE - Organization ID --name VALUE - Set the current organization context for the request --title VALUE - Set the current organization context for the request --user VALUE - User's login to search by --user-id NUMBER -h , --help - Print help 3.50.25. organization set-parameter Create or update parameter for an organization Usage Options --hidden-value BOOLEAN - Should the value be hidden --name VALUE - Parameter name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Organization ID --organization-title VALUE - Set the current organization context for the request --parameter-type ENUM - Type of the parameter Possible value(s): string , boolean , integer , real , array , hash , yaml , json Default: "string" --value VALUE - Parameter value -h , --help - Print help 3.50.26.
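For example, to set a hidden parameter on an organization with hammer organization set-parameter (the parameter name and value are example values):

hammer organization set-parameter --organization "ACME" --name "security_contact" --value "secops@example.com" --hidden-value true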
organization update Update organization Usage Options --compute-resource-ids LIST - Compute resource IDs --compute-resources LIST --description VALUE --domain-ids LIST - Domain IDs --domains LIST --environment-ids LIST - Environment IDs --hostgroup-ids LIST - Host group IDs --hostgroup-titles LIST --hostgroups LIST --id VALUE --ignore-types LIST - List of resources types that will be automatically associated --label VALUE - Organization label to search by --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-ids LIST - Associated location IDs --location-title VALUE - Set the current location context for the request --location-titles LIST --locations LIST --media LIST --medium-ids LIST - Medium IDs --name VALUE --new-name VALUE --new-title VALUE --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-label VALUE - Organization label to search by --organization-title VALUE - Set the current organization context for the request --partition-table-ids LIST - Partition template IDs --partition-tables LIST --provisioning-template-ids LIST Provisioning template IDs --provisioning-templates LIST --realm-ids LIST - Realm IDs --realms LIST --redhat-repository-url VALUE - Red Hat CDN URL --simple-content-access BOOLEAN Whether Simple Content Access should be enabled for the organization. --smart-proxies LIST --smart-proxy-ids LIST - Capsule IDs --subnet-ids LIST - Subnet IDs --subnets LIST --title VALUE - Set the current organization context for the request --user-ids LIST - User IDs --users LIST -h , --help - Print help 3.51. os Manipulate operating system Usage Options -h , --help - Print help 3.51.1. os add-architecture Associate an architecture Usage Options --architecture VALUE - Architecture name --architecture-id NUMBER --id VALUE --title VALUE - Operating system title -h , --help - Print help 3.51.2. os add-provisioning-template Associate provisioning templates Usage Options --id VALUE --provisioning-template VALUE - Name to search by --provisioning-template-id NUMBER --provisioning-template-ids LIST - List of provisioning template ids --provisioning-template-search VALUE Provisioning template name regex to search, all matching templates will be associated --provisioning-templates LIST - List of provisioning template names --title VALUE - Operating system title -h , --help - Print help 3.51.3. os add-ptable Associate a partition table Usage Options --id VALUE --partition-table VALUE - Partition table name --partition-table-id NUMBER --title VALUE - Operating system title -h , --help - Print help 3.51.4. 
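For example, to associate a partition table with an operating system using hammer os add-ptable (the titles are example values):

hammer os add-ptable --title "RedHat 8.6" --partition-table "Kickstart default"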
os create Create an operating system Usage Options --architecture-ids LIST - IDs of associated architectures --architectures LIST --description VALUE --family VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --major VALUE --media LIST --medium-ids LIST - IDs of associated media --minor VALUE --name VALUE --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --os-parameters-attributes SCHEMA Array of parameters --partition-table-ids LIST - IDs of associated partition tables --partition-tables LIST --password-hash ENUM - Root password hash function to use Possible value(s): SHA256 , SHA512 , Base64 , Base64-Windows , MD5 --provisioning-template-ids LIST IDs of associated provisioning templates --provisioning-templates LIST --release-name VALUE -h , --help - Print help Following parameters accept format defined by its schema (bold are required; <> contains acceptable type; [] contains acceptable value): --os-parameters-attributes " name =<string>, value =<string>, ... " 3.51.5. os delete Delete an operating system Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --title VALUE - Operating system title -h , --help - Print help 3.51.6. os delete-default-template Usage Options --id VALUE - Operatingsystem id --type VALUE - Type of the provisioning template -h , --help - Print help 3.51.7. os delete-parameter Delete parameter for an operating system Usage Options --name VALUE - Parameter name --operatingsystem VALUE - Operating system title --operatingsystem-id NUMBER -h , --help - Print help 3.51.8. os info Show an operating system Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --show-hidden-parameters BOOLEAN Display hidden parameter values --title VALUE - Operating system title -h , --help - Print help Table 3.138. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Title x x x Release name x x Family x x Name x x Major version x x Minor version x x Partition tables/ x x Default templates/ x x Architectures/ x x Installation media/ x x Templates/ x x Parameters/ x x 3.51.9. os list List all operating systems Usage Options --architecture VALUE - Architecture name --architecture-id VALUE - ID of architecture --fields LIST - Show specified fields or predefined field sets only. 
(See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --medium VALUE - Medium name --medium-id VALUE - ID of medium --order VALUE - Sort and order by a searchable field, e.g. <field> DESC --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --os-parameters-attributes SCHEMA Array of parameters --page NUMBER - Page number, starting at 1 --partition-table VALUE - Partition table name --partition-table-id VALUE - ID of partition table --per-page VALUE - Number of results per page to return, all to return all results --provisioning-template VALUE - Name to search by --provisioning-template-id VALUE ID of template --search VALUE - Filter results -h , --help - Print help Table 3.139. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Title x x x Release name x x Family x x Following parameters accept format defined by its schema (bold are required; <> contains acceptable type; [] contains acceptable value): --os-parameters-attributes " name =<string>, value =<string>, ... " Search / Order fields architecture - string description - string family - string id - integer major - string medium - string minor - string name - string params - string template - string title - string 3.51.10. os remove-architecture Disassociate an architecture Usage Options --architecture VALUE - Architecture name --architecture-id NUMBER --id VALUE --title VALUE - Operating system title -h , --help - Print help 3.51.11. os remove-provisioning-template Disassociate provisioning templates Usage Options --id VALUE --provisioning-template VALUE - Name to search by --provisioning-template-id NUMBER --provisioning-template-ids LIST - List of provisioning template ids --provisioning-template-search VALUE Provisioning template name regex to search, all matching templates will be disassociated --provisioning-templates LIST - List of provisioning template names --title VALUE - Operating system title -h , --help - Print help 3.51.12. os remove-ptable Disassociate a partition table Usage Options --id VALUE --partition-table VALUE - Partition table name --partition-table-id NUMBER --title VALUE - Operating system title -h , --help - Print help 3.51.13. os set-default-template Usage Options --id VALUE - Operatingsystem id --provisioning-template-id VALUE Provisioning template id to be set -h , --help - Print help 3.51.14. os set-parameter Create or update parameter for an operating system Usage Options --hidden-value BOOLEAN - Should the value be hidden --name VALUE - Parameter name --operatingsystem VALUE - Operating system title --operatingsystem-id NUMBER --parameter-type ENUM - Type of the parameter Possible value(s): string , boolean , integer , real , array , hash , yaml , json Default: "string" --value VALUE - Parameter value -h , --help - Print help 3.51.15. 
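For example, to set the default provisioning template for an operating system with hammer os set-default-template (the IDs are example values):

hammer os set-default-template --id 1 --provisioning-template-id 5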
os update Update an operating system Usage Options --architecture-ids LIST - IDs of associated architectures --architectures LIST --description VALUE --family VALUE --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --major VALUE --media LIST --medium-ids LIST - IDs of associated media --minor VALUE --name VALUE --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --os-parameters-attributes SCHEMA Array of parameters --partition-table-ids LIST - IDs of associated partition tables --partition-tables LIST --password-hash ENUM - Root password hash function to use Possible value(s): SHA256 , SHA512 , Base64 , Base64-Windows , MD5 --provisioning-template-ids LIST IDs of associated provisioning templates --provisioning-templates LIST --release-name VALUE --title VALUE - Operating system title -h , --help - Print help Following parameters accept format defined by its schema (bold are required; <> contains acceptable type; [] contains acceptable value): --os-parameters-attributes "name=<string>,value=<string>, ... " 3.52. package Manipulate packages Usage Options -h , --help - Print help 3.52.1. package info Show a package Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE - A package identifier --name VALUE - Name to search by --organization VALUE - Organization name to search by --organization-id NUMBER - Organization identifier --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --repository VALUE - Repository name to search by --repository-id NUMBER - Repository identifier -h , --help - Print help Table 3.140. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Pulp id x x Uuid x x Name x x x Version x x Architecture x x Epoch x x Release x x Author x x Filename x x Source rpm x x Nvrea x x Build host x x Available host count x x Applicable host count x x Children x x Vendor x x License x x Relative path x x Description x x Summary x x Url x x Build time x x Group x x Requires x x Provides x x Files x x Size x x Modular x x 3.52.2. package list List packages Usage Options --available-for VALUE - Return packages that can be added to the specified object. Only the value content_view_version is supported. --content-view VALUE - Content view name to search by --content-view-filter VALUE - Name to search by --content-view-filter-id NUMBER - Content View Filter identifier --content-view-id NUMBER - Content view numeric identifier --content-view-version VALUE - Content view version number --content-view-version-id NUMBER - Content View Version identifier --environment VALUE - Lifecycle environment name to search by (--environment is deprecated: Use --lifecycle-environment instead) --environment-id NUMBER - (--environment-id is deprecated: Use --lifecycle-environment-id instead) --fields LIST - Show specified fields or predefined field sets only. 
(See below) --full-result BOOLEAN - Whether or not to show all results --host VALUE - Host name --host-id NUMBER - Host id to list applicable packages for --ids LIST - Package identifiers to filter content by --lifecycle-environment VALUE - Lifecycle environment name to search by --lifecycle-environment-id NUMBER - Environment identifier --order VALUE - Sort field and order, eg. id DESC --organization VALUE - Organization name to search by --organization-id NUMBER - Organization identifier --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --packages-restrict-applicable BOOLEAN Return packages that are applicable to one or more hosts (defaults to true if host_id is specified) --packages-restrict-latest BOOLEAN - Return only the latest version of each package --packages-restrict-upgradable BOOLEAN Return packages that are upgradable on one or more hosts --page NUMBER - Page number, starting at 1 --per-page NUMBER - Number of results per page to return --product VALUE - Product name to search by --product-id NUMBER - Product numeric identifier --repository VALUE - Repository name to search by --repository-id NUMBER - Repository identifier --search VALUE - Search string -h , --help - Print help Table 3.141. Predefined field sets FIELDS ALL DEFAULT Id x x Filename x x Source rpm x x 3.53. package-group Manipulate package groups Usage Options -h , --help - Print help 3.53.1. package-group info Show a package group Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE - A package group identifier --name VALUE - Name to search by --organization VALUE - Organization name to search by --organization-id NUMBER - Organization identifier --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --repository VALUE - Repository name to search by --repository-id NUMBER - Repository identifier -h , --help - Print help Table 3.142. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Package group name x x x Repository name x x Uuid x x Description x x Default packages x x Mandatory packages x x Conditional packages x x Optional packages x x 3.53.2. package-group list List package_groups Usage Options --content-view VALUE - Content view name to search by --content-view-filter VALUE - Name to search by --content-view-filter-id NUMBER - Content view filter identifier --content-view-filter-rule VALUE - Name to search by --content-view-filter-rule-id NUMBER Content view filter rule identifier --content-view-id NUMBER - Content view numeric identifier --content-view-version VALUE - Content view version number --content-view-version-id NUMBER - Content view version identifier --fields LIST - Show specified fields or predefined field sets only. (See below) --full-result BOOLEAN - Whether or not to show all results --ids LIST - Ids to filter content by --include-filter-ids BOOLEAN - Includes associated content view filter ids in response --lifecycle-environment-id NUMBER - Environment identifier --order VALUE - Sort field and order, eg. 
id DESC --organization VALUE - Organization name to search by --organization-id NUMBER - Organization identifier --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --page NUMBER - Page number, starting at 1 --per-page NUMBER - Number of results per page to return --product VALUE - Product name to search by --product-id NUMBER - Product numeric identifier --repository VALUE - Repository name to search by --repository-id NUMBER - Repository identifier --search VALUE - Search string -h , --help - Print help Table 3.143. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Package group name x x x Repository name x x Uuid x x 3.54. partition-table Manipulate partition tables Usage Options -h , --help - Print help 3.54.1. partition-table add-operatingsystem Associate an operating system Usage Options --id VALUE --name VALUE - Partition table name --operatingsystem VALUE - Operating system title --operatingsystem-id NUMBER -h , --help - Print help 3.54.2. partition-table create Create a partition table Usage Options --audit-comment VALUE --description VALUE --file FILE - Path to a file that contains the partition layout --host-ids LIST - Array of host IDs to associate with the partition table --hostgroup-ids LIST - Array of host group IDs to associate with the partition table --hostgroup-titles LIST --hostgroups LIST --hosts LIST --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-ids LIST - REPLACE locations with given ids --location-title VALUE - Set the current location context for the request --location-titles LIST --locations LIST --locked BOOLEAN - Whether or not the template is locked for editing --name VALUE --operatingsystem-ids LIST - Array of operating system IDs to associate with the partition table --operatingsystems LIST --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST - REPLACE organizations with given ids. --organization-title VALUE - Set the current organization context for the request --organization-titles LIST --organizations LIST --os-family VALUE --snippet BOOLEAN -h , --help - Print help 3.54.3. partition-table delete Delete a partition table Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Partition table name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.54.4. partition-table dump View partition table content Usage Options --fields LIST - Show specified fields or predefined field sets only. 
(See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Partition table name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.144. Predefined field sets FIELDS 3.54.5. partition-table export Export a partition template to ERB Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Partition table name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --path VALUE - Path to directory where downloaded content will be saved -h , --help - Print help 3.54.6. partition-table import Import a partition table Usage Options --associate ENUM - Determines when the template should associate objects based on metadata, new means only when new template is being created, always means both for new and existing template which is only being updated, never ignores metadata Possible value(s): new , always , never --default BOOLEAN - Makes the template default meaning it will be automatically associated with newly created organizations and locations (false by default) --file FILE - Path to a file that contains the template content including metadata --force BOOLEAN - Use if you want update locked templates --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-ids LIST - REPLACE locations with given ids --location-title VALUE - Set the current location context for the request --location-titles LIST --locations LIST --lock BOOLEAN - Lock imported templates (false by default) --name VALUE - Template name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST - REPLACE organizations with given ids. --organization-title VALUE - Set the current organization context for the request --organization-titles LIST --organizations LIST -h , --help - Print help 3.54.7. partition-table info Show a partition table Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Partition table name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.145. 
Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Os family x x Description x x Locked x x Operating systems/ x x Created at x x Updated at x x Locations/ x x Organizations/ x x 3.54.8. partition-table list List all partition tables Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Scope by locations --location-title VALUE - Set the current location context for the request --operatingsystem VALUE - Operating system title --operatingsystem-id NUMBER - ID of operating system --order VALUE - Sort and order by a searchable field, e.g. <field> DESC --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Scope by organizations --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --search VALUE - Filter results -h , --help - Print help Table 3.146. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Os family x x Search / Order fields default - Values: true, false family - string id - integer layout - text location - string location_id - integer locked - Values: true, false name - string organization - string organization_id - integer snippet - Values: true, false template - text vendor - string 3.54.9. partition-table remove-operatingsystem Disassociate an operating system Usage Options --id VALUE --name VALUE - Partition table name --operatingsystem VALUE - Operating system title --operatingsystem-id NUMBER -h , --help - Print help 3.54.10. partition-table update Update a partition table Usage Options --audit-comment VALUE --description VALUE --file FILE - Path to a file that contains the partition layout --host-ids LIST - Array of host IDs to associate with the partition table --hostgroup-ids LIST - Array of host group IDs to associate with the partition table --hostgroup-titles LIST --hostgroups LIST --hosts LIST --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-ids LIST - REPLACE locations with given ids --location-title VALUE - Set the current location context for the request --location-titles LIST --locations LIST --locked BOOLEAN - Whether or not the template is locked for editing --name VALUE --new-name VALUE --operatingsystem-ids LIST - Array of operating system IDs to associate with the partition table --operatingsystems LIST --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST - REPLACE organizations with given ids. --organization-title VALUE - Set the current organization context for the request --organization-titles LIST --organizations LIST --os-family VALUE --snippet BOOLEAN -h , --help - Print help 3.55. ping Get the status of the server and/or its subcomponents Usage Options -h , --help - Print help 3.55.1. ping foreman Shows the status of the Satellite system and its subcomponents Usage Options -h , --help - Print help 3.55.2. ping katello Shows the status of the Katello system and its subcomponents Usage Options -h , --help - Print help 3.56. policy Manipulate policies Usage Options -h , --help - Print help 3.56.1.
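For example, to check the status of the server and its subcomponents with hammer ping, or of the Katello services only:

hammer ping
hammer ping katello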
3.56. policy Manipulate policies Usage Options -h , --help - Print help 3.56.1. policy create Create a Policy Usage Options --cron-line VALUE - Policy schedule cron line (only if period == "custom") --day-of-month NUMBER - Policy schedule day of month (only if period == "monthly") --deploy-by ENUM - How the policy should be deployed Possible value(s): puppet , ansible , manual --description VALUE - Policy description --host-ids LIST - Apply policy to hosts --hostgroup-ids LIST - Apply policy to host groups --hostgroups LIST --hosts LIST --location VALUE - Name to search by --location-id NUMBER - Set the current location context for the request --location-ids LIST - REPLACE locations with given ids --locations LIST --name VALUE - Policy name --organization VALUE - Name to search by --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST - REPLACE organizations with given ids. --organizations LIST --period VALUE - Policy schedule period (weekly, monthly, custom) --scap-content VALUE - SCAP content title --scap-content-id NUMBER - Policy SCAP content ID --scap-content-profile VALUE - Name to search by --scap-content-profile-id NUMBER - Policy SCAP content profile ID --tailoring-file VALUE - Tailoring file name --tailoring-file-id NUMBER - Tailoring file ID --tailoring-file-profile-id NUMBER - Tailoring file profile ID --weekday VALUE - Policy schedule weekday (only if period == "weekly") -h , --help - Print help 3.56.2. policy delete Delete a Policy Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.56.3. policy hosts List all hosts Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --hostgroup VALUE - Hostgroup name --hostgroup-id VALUE - ID of host group --hostgroup-title VALUE - Hostgroup title --id VALUE - Policy Id --include ENUM - Array of extra information types to include Possible value(s): parameters , all_parameters --location VALUE - Set the current location context for the request --location-id VALUE - ID of location --location-title VALUE - Set the current location context for the request --name VALUE - Policy name --order VALUE - Sort and order by a searchable field, e.g. <field> DESC --organization VALUE - Set the current organization context for the request --organization-id VALUE - ID of organization --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --search VALUE - Filter results --thin BOOLEAN - Only list ID and name of hosts -h , --help - Print help Table 3.147.
Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Operating system x x Host group x x Ip x x Mac x x Global status x x Organization x Location x Additional information x Content view x x Lifecycle environment x x Security x Bugfix x Enhancement x Trace status x x Search / Order fields activation_key - string activation_key_id - string addon - string addons_status - Values: mismatched, matched, not_specified ansible_role - string applicable_debs - string applicable_errata - string applicable_errata_issued - date applicable_rpms - string architecture - string autoheal - boolean boot_time build - Values: true, false build_status - Values: built, pending, token_expired, build_failed comment - text compute_resource - string compute_resource_id - integer configuration_status.applied - integer configuration_status.enabled - Values: true, false configuration_status.failed - integer configuration_status.failed_restarts - integer configuration_status.interesting - Values: true, false configuration_status.pending - integer configuration_status.restarted - integer configuration_status.skipped - integer content_source - string content_views - string created_at - datetime cve_id - integer domain - string domain_id - integer errata_status - Values: security_needed, errata_needed, updated, unknown execution_status - Values: ok, error facts - string global_status - Values: ok, warning, error has_ip - string has_ip6 - string has_mac - string hostgroup - string hostgroup_fullname - string hostgroup_id - integer hostgroup_name - string hostgroup_title - string hypervisor - boolean hypervisor_host - string id - integer image - string infrastructure_facet.foreman - Values: true, false infrastructure_facet.smart_proxy_id insights_client_report_status - Values: reporting, no_report insights_inventory_sync_status - Values: disconnect, sync insights_recommendations_count - integer installable_errata - string installed_at - datetime ip - string ip6 - string job_invocation.id - string job_invocation.result - Values: cancelled, failed, pending, success last_checkin - datetime last_report - datetime lifecycle_environments - string location - string location_id - integer mac - string managed - Values: true, false model - string name - string organization - string organization_id - integer origin - string os - string os_description - string os_id - integer os_major - string os_minor - string os_title - string owner - string owner_id - integer owner_type - string params - string params_name - string parent_hostgroup - string puppet_ca - string puppet_proxy_id - integer puppetmaster - string purpose_status - Values: mismatched, matched, not_specified pxe_loader - Values: PXELinux_BIOS, PXELinux_UEFI, Grub_UEFI, Grub2_BIOS, Grub2_ELF, Grub2_UEFI, Grub2_UEFI_SecureBoot, Grub2_UEFI_HTTP, Grub2_UEFI_HTTPS, Grub2_UEFI_HTTPS_SecureBoot, iPXE_Embedded, iPXE_UEFI_HTTP, iPXE_Chain_BIOS, iPXE_Chain_UEFI realm - string realm_id - integer registered_at - datetime registered_through - string release_version - string reported.bios_release_date reported.bios_vendor reported.bios_version reported.boot_time reported.cores reported.disks_total reported.kernel_version reported.ram reported.sockets reported.virtual - Values: true, false repository - string repository_content_label - string rhel_lifecycle_status - Values: full_support, maintenance_support, approaching_end_of_maintenance, extended_support, approaching_end_of_support, support_ended role - text role_status - Values: mismatched, matched, not_specified service_level - string sla_status - Values: mismatched, matched, not_specified smart_proxy - string status.applied - integer status.enabled - Values: true, false status.failed - integer status.failed_restarts - integer status.interesting - Values: true, false status.pending - integer status.restarted - integer status.skipped - integer subnet - string subnet.name - text subnet6 - string subnet6.name - text subscription_id - string subscription_name - string subscription_status - Values: valid, partial, invalid, unknown, disabled, unsubscribed_hypervisor subscription_uuid - string trace_status - Values: reboot_needed, process_restart_needed, updated upgradable_debs - string upgradable_rpms - string usage - text usage_status - Values: mismatched, matched, not_specified user.firstname - string user.lastname - string user.login - string user.mail - string usergroup - string usergroup.name - string uuid - string 3.56.4. policy info Show a Policy Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.148. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Created at x x Period x x Weekday x x Day of month x x Cron line x x Scap content id x x Scap content profile id x x Tailoring file id x x Tailoring file profile id x x Deployment option x x Locations/ x x Organizations/ x x Hostgroups/ x x 3.56.5. policy list List Policies Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --order VALUE - Sort and order by a searchable field, e.g. <field> DESC --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --search VALUE - Filter results -h , --help - Print help Table 3.149. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Created at x x Search / Order fields content - string location - string location_id - integer name - string organization - string organization_id - integer profile - string tailoring_file - string tailoring_file_profile - string
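As a sketch of how the list options combine in practice (the organization name and search string below are hypothetical):

hammer policy list --organization "Default Organization" --search "name ~ rhel" --order "name ASC"

The searchable fields listed under Table 3.149 are what the --search expression may reference.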
3.56.6. policy update Update a Policy Usage Options --cron-line VALUE - Policy schedule cron line (only if period == "custom") --day-of-month NUMBER - Policy schedule day of month (only if period == "monthly") --deploy-by ENUM - How the policy should be deployed Possible value(s): puppet , ansible , manual --description VALUE - Policy description --host-ids LIST - Apply policy to hosts --hostgroup-ids LIST - Apply policy to host groups --hostgroups LIST --hosts LIST --id VALUE --location VALUE - Name to search by --location-id NUMBER - Set the current location context for the request --location-ids LIST - REPLACE locations with given ids --locations LIST --name VALUE - Policy name --new-name VALUE - Policy name --organization VALUE - Name to search by --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST - REPLACE organizations with given ids. --organizations LIST --period VALUE - Policy schedule period (weekly, monthly, custom) --scap-content VALUE - SCAP content title --scap-content-id NUMBER - Policy SCAP content ID --scap-content-profile VALUE - Name to search by --scap-content-profile-id NUMBER - Policy SCAP content profile ID --tailoring-file VALUE - Tailoring file name --tailoring-file-id NUMBER - Tailoring file ID --tailoring-file-profile-id NUMBER - Tailoring file profile ID --weekday VALUE - Policy schedule weekday (only if period == "weekly") -h , --help - Print help 3.57. prebuild-bash-completion Prepare map of options and subcommands for Bash completion Usage Options -h , --help - Print help 3.58. product Manipulate products Usage Options -h , --help - Print help 3.58.1. product create Create a product Usage Options --description VALUE - Product description --gpg-key-id NUMBER - Identifier of the GPG key --label VALUE --name VALUE - Product name --organization VALUE - Organization name to search by --organization-id NUMBER - ID of the organization --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --ssl-ca-cert-id NUMBER - Identifier of the SSL CA Cert --ssl-client-cert-id NUMBER - Identifier of the SSL Client Cert --ssl-client-key-id NUMBER - Identifier of the SSL Client Key --sync-plan VALUE - Sync plan name to search by --sync-plan-id NUMBER - Plan numeric identifier -h , --help - Print help 3.58.2. product delete Destroy a product Usage Options --id NUMBER - Product numeric identifier --name VALUE - Product name to search by --organization VALUE - Organization name to search by --organization-id NUMBER --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title -h , --help - Print help 3.58.3. product info Show a product Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id NUMBER - Product numeric identifier --name VALUE - Product name to search by --organization VALUE - Organization name to search by --organization-id NUMBER - Organization ID --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title -h , --help - Print help Table 3.150. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Label x x Description x x Sync state (all) x x Sync state (last) x x Sync plan id x x Gpg/gpg key id x x Gpg/gpg key x x Organization x x Readonly x x Deletable x x Content/repo name x x Content/url x x Content/content type x x
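A minimal illustration of the create/info pair (the product name, description, and organization ID are hypothetical):

hammer product create --name "Example Product" --description "Custom content" --organization-id 1
hammer product info --name "Example Product" --organization-id 1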
3.58.4. product list List products Usage Options --available-for VALUE - Interpret specified object to return only Products that can be associated with specified object. Only sync_plan is supported. --custom BOOLEAN - Return custom products only --enabled BOOLEAN - Return enabled products only --fields LIST - Show specified fields or predefined field sets only. (See below) --full-result BOOLEAN - Whether or not to show all results --include-available-content BOOLEAN - Whether to include available content attribute in results --name VALUE - Filter products by name --order VALUE - Sort field and order, e.g. id DESC --organization VALUE - Organization name to search by --organization-id NUMBER - Filter products by organization --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --page NUMBER - Page number, starting at 1 --per-page NUMBER - Number of results per page to return --redhat-only BOOLEAN - Return Red Hat (non-custom) products only --search VALUE - Search string --subscription VALUE - Subscription name to search by --subscription-id NUMBER - Filter products by subscription --sync-plan VALUE - Sync plan name to search by --sync-plan-id NUMBER - Filter products by sync plan id -h , --help - Print help Table 3.151. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Description x x Organization x x Repositories x x Sync state x x Search / Order fields description - text label - string name - string organization_id - integer redhat - Values: true, false 3.58.5. product remove-sync-plan Delete assignment sync plan and product Usage Options --description VALUE - Product description --gpg-key-id NUMBER - Identifier of the GPG key --id NUMBER - Product numeric identifier --name VALUE - Product name --new-name VALUE - Product name --organization VALUE - Organization name to search by --organization-id NUMBER --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --ssl-ca-cert-id NUMBER - Identifier of the SSL CA Cert --ssl-client-cert-id NUMBER - Identifier of the SSL Client Cert --ssl-client-key-id NUMBER - Identifier of the SSL Client Key -h , --help - Print help 3.58.6. product set-sync-plan Assign sync plan to product Usage Options --id NUMBER - Product numeric identifier --name VALUE - Product name to search by --new-name VALUE - Product name --organization VALUE - Organization name to search by --organization-id NUMBER --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --ssl-ca-cert-id NUMBER - Identifier of the SSL CA Cert --ssl-client-cert-id NUMBER - Identifier of the SSL Client Cert --ssl-client-key-id NUMBER - Identifier of the SSL Client Key --sync-plan VALUE - Sync plan name to search by --sync-plan-id NUMBER - Plan numeric identifier -h , --help - Print help 3.58.7. product synchronize Sync all repositories for a product Usage Options --async - Do not wait for the task --id NUMBER - Product ID --name VALUE - Product name to search by --organization VALUE - Organization name to search by --organization-id NUMBER --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title -h , --help - Print help
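Synchronization accepts the same product selectors; a hedged example that starts a sync without blocking on the task (the name and ID are hypothetical):

hammer product synchronize --name "Example Product" --organization-id 1 --async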
3.58.8. product update Updates a product Usage Options --description VALUE - Product description --gpg-key-id NUMBER - Identifier of the GPG key --id NUMBER - Product numeric identifier --name VALUE - Product name --new-name VALUE - Product name --organization VALUE - Organization name to search by --organization-id NUMBER --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --ssl-ca-cert-id NUMBER - Identifier of the SSL CA Cert --ssl-client-cert-id NUMBER - Identifier of the SSL Client Cert --ssl-client-key-id NUMBER - Identifier of the SSL Client Key --sync-plan VALUE - Sync plan name to search by --sync-plan-id NUMBER - Plan numeric identifier -h , --help - Print help 3.58.9. product update-proxy Updates an HTTP Proxy for a product Usage Options --http-proxy VALUE - Name to search by --http-proxy-id NUMBER - HTTP Proxy identifier to associate --http-proxy-policy ENUM - Policy for HTTP Proxy for content sync Possible value(s): global_default_http_proxy , none , use_selected_http_proxy --ids LIST - List of product ids -h , --help - Print help 3.59. proxy Manipulate smart proxies Usage Options -h , --help - Print help 3.59.1. proxy content Manage the capsule content Usage Options -h , --help - Print help 3.59.1.1. proxy content add-lifecycle-environment Add lifecycle environments to the capsule Usage Options --environment VALUE - Lifecycle environment name to search by (--environment is deprecated: Use --lifecycle-environment instead) --environment-id NUMBER - (--environment-id is deprecated: Use --lifecycle-environment-id instead) --id NUMBER - Id of the capsule --lifecycle-environment VALUE - Lifecycle environment name to search by --lifecycle-environment-id NUMBER - Id of the lifecycle environment --name VALUE - Name to search by --organization VALUE - Organization name --organization-id VALUE - Organization ID -h , --help - Print help 3.59.1.2. proxy content available-lifecycle-environments List the lifecycle environments not attached to the capsule Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id NUMBER - Id of the capsule --name VALUE - Name to search by --organization VALUE - Organization name to search by --organization-id NUMBER - Id of the organization to limit environments on --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title -h , --help - Print help Table 3.152. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Organization x x 3.59.1.3. proxy content cancel-synchronization Cancel running capsule synchronization Usage Options --id NUMBER - Id of the capsule --name VALUE - Name to search by -h , --help - Print help 3.59.1.4. proxy content info Get current capsule synchronization status Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id NUMBER - Id of the capsule --name VALUE - Name to search by --organization VALUE - Organization name to search by --organization-id NUMBER - Id of the organization to get the status for --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title -h , --help - Print help Table 3.153.
Predefined field sets FIELDS ALL DEFAULT Lifecycle environments/name x x Lifecycle environments/organization x x Lifecycle environments/content views/name x x Lifecycle environments/content views/composite x x Lifecycle environments/content views/last published x x Lifecycle environments/content views/repositories/repository id x x Lifecycle environments/content views/repositories/repository name x x Lifecycle environments/content views/repositories/content counts/warning x x Lifecycle environments/content views/repositories/content counts/packages x x Lifecycle environments/content views/repositories/content counts/srpms x x Lifecycle environments/content views/repositories/content counts/module streams x x Lifecycle environments/content views/repositories/content counts/package groups x x Lifecycle environments/content views/repositories/content counts/errata x x Lifecycle environments/content views/repositories/content counts/debian packages x x Lifecycle environments/content views/repositories/content counts/container tags x x Lifecycle environments/content views/repositories/content counts/container ma... x x Lifecycle environments/content views/repositories/content counts/container ma... x x Lifecycle environments/content views/repositories/content counts/files x x Lifecycle environments/content views/repositories/content counts/ansible coll... x x Lifecycle environments/content views/repositories/content counts/ostree refs x x Lifecycle environments/content views/repositories/content counts/python packages x x 3.59.1.5. proxy content lifecycle-environments List the lifecycle environments attached to the capsule Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id NUMBER - Id of the capsule --name VALUE - Name to search by --organization VALUE - Organization name to search by --organization-id NUMBER - Id of the organization to limit environments on --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title -h , --help - Print help Table 3.154. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Organization x x 3.59.1.6. proxy content reclaim-space Reclaim space from all On Demand repositories on a capsule Usage Options --async - Do not wait for the task --id NUMBER - Id of the capsule --name VALUE - Name to search by -h , --help - Print help 3.59.1.7. proxy content remove-lifecycle-environment Remove lifecycle environments from the capsule Usage Options --environment VALUE - Lifecycle environment name to search by (--environment is deprecated: Use --lifecycle-environment instead) --environment-id NUMBER - (--environment-id is deprecated: Use --lifecycle-environment-id instead) --id NUMBER - Id of the capsule --lifecycle-environment VALUE - Lifecycle environment name to search by --lifecycle-environment-id NUMBER Id of the lifecycle environment --name VALUE - Name to search by --organization VALUE - Organization name --organization-id VALUE - Organization ID -h , --help - Print help 3.59.1.8. proxy content synchronization-status Get current capsule synchronization status Usage Options --fields LIST - Show specified fields or predefined field sets only. 
(See below) --id NUMBER - Id of the capsule --name VALUE - Name to search by --organization VALUE - Organization name to search by --organization-id NUMBER - Id of the organization to get the status for --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title -h , --help - Print help Table 3.155. Predefined field sets FIELDS ALL DEFAULT Last sync x x Status x x Currently running sync tasks/task id x x Currently running sync tasks/progress x x Last failure/task id x x Last failure/messages x x 3.59.1.9. proxy content synchronize Synchronize the content to the capsule Usage Options --async - Do not wait for the task --content-view VALUE - Content view name to search by --content-view-id NUMBER - Id of the content view to limit the synchronization on --environment VALUE - Lifecycle environment name to search by (--environment is deprecated: Use --lifecycle-environment instead) --environment-id NUMBER - (--environment-id is deprecated: Use --lifecycle-environment-id instead) --id NUMBER - Id of the capsule --lifecycle-environment VALUE - Lifecycle environment name to search by --lifecycle-environment-id NUMBER Id of the environment to limit the synchronization on --name VALUE - Name to search by --organization VALUE - Organization name --organization-id VALUE - Organization ID --repository VALUE - Repository name to search by --repository-id NUMBER - Id of the repository to limit the synchronization on --skip-metadata-check BOOLEAN - Skip metadata check on each repository on the capsule -h , --help - Print help 3.59.1.10. proxy content update-counts Update content counts for the capsule Usage Options --async - Do not wait for the task --id NUMBER - Id of the capsule --name VALUE - Name to search by --organization VALUE - Organization name --organization-id VALUE - Organization ID -h , --help - Print help 3.59.2. proxy create Create a capsule Usage Options --download-policy VALUE - Download Policy of the capsule, must be one of on_demand, immediate, inherit, streamed --http-proxy VALUE - Name to search by --http-proxy-id NUMBER - Id of the HTTP Proxy to use with alternate content sources --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-ids LIST - REPLACE locations with given ids --location-title VALUE - Set the current location context for the request --location-titles LIST --locations LIST --name VALUE --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST - REPLACE organizations with given ids. --organization-title VALUE - Set the current organization context for the request --organization-titles LIST --organizations LIST --url VALUE -h , --help - Print help 3.59.3. proxy delete Delete a capsule Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.59.4. 
proxy import-subnets Import subnets from Capsule Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.59.5. proxy info Show a capsule Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --include-status BOOLEAN - Flag to indicate whether to include status or not --include-version BOOLEAN - Flag to indicate whether to include version or not --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.156. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Status x x Url x x Features x x Version x x Host count x x Features/name x x Features/version x x Locations/ x x Organizations/ x x Created at x x Updated at x x 3.59.6. proxy list List all capsules Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --include-status BOOLEAN - Flag to indicate whether to include status or not --location VALUE - Set the current location context for the request --location-id NUMBER - Scope by locations --location-title VALUE - Set the current location context for the request --order VALUE - Sort and order by a searchable field, e.g. <field> DESC --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Scope by organizations --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --search VALUE - Filter results -h , --help - Print help Table 3.157. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Status x x Url x x Features x x Search / Order fields feature - string id - integer location - string location_id - integer name - string organization - string organization_id - integer url - string 3.59.7. proxy refresh-features Refresh capsule features Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help
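A short sketch of the read-only subcommands above (the capsule name is hypothetical):

hammer proxy list
hammer proxy refresh-features --name "capsule.example.com"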
3.59.8. proxy update Update a capsule Usage Options --download-policy VALUE - Download Policy of the capsule, must be one of on_demand, immediate, inherit, streamed --http-proxy VALUE - Name to search by --http-proxy-id NUMBER - Id of the HTTP Proxy to use with alternate content sources --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-ids LIST - REPLACE locations with given ids --location-title VALUE - Set the current location context for the request --location-titles LIST --locations LIST --name VALUE --new-name VALUE --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST - REPLACE organizations with given ids. --organization-title VALUE - Set the current organization context for the request --organization-titles LIST --organizations LIST --url VALUE -h , --help - Print help 3.60. realm Manipulate realms Usage Options -h , --help - Print help 3.60.1. realm create Create a realm Usage Options --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-ids LIST - REPLACE locations with given ids --location-title VALUE - Set the current location context for the request --location-titles LIST --locations LIST --name VALUE - The realm name, e.g. EXAMPLE.COM --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST - REPLACE organizations with given ids. --organization-title VALUE - Set the current organization context for the request --organization-titles LIST --organizations LIST --realm-proxy-id NUMBER - Capsule ID to use within this realm --realm-type VALUE - Realm type, e.g. Red Hat Identity Management or Active Directory -h , --help - Print help 3.60.2. realm delete Delete a realm Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.60.3. realm info Show a realm Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE - Numerical ID or realm name --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.158. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Realm proxy id x x Realm type x x Locations/ x x Organizations/ x x Created at x x Updated at x x
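As a hedged example of realm creation (the realm name and Capsule ID below are hypothetical, and the realm type must be one of the supported values named above):

hammer realm create --name "EXAMPLE.COM" --realm-type "Red Hat Identity Management" --realm-proxy-id 1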
3.60.4. realm list List of realms Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Scope by locations --location-title VALUE - Set the current location context for the request --order VALUE - Sort and order by a searchable field, e.g. <field> DESC --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Scope by organizations --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --search VALUE - Filter results -h , --help - Print help Table 3.159. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Search / Order fields id - integer location - string location_id - integer name - string organization - string organization_id - integer type - string 3.60.5. realm update Update a realm Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-ids LIST - REPLACE locations with given ids --location-title VALUE - Set the current location context for the request --location-titles LIST --locations LIST --name VALUE - The realm name, e.g. EXAMPLE.COM --new-name VALUE - The realm name, e.g. EXAMPLE.COM --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST - REPLACE organizations with given ids. --organization-title VALUE - Set the current organization context for the request --organization-titles LIST --organizations LIST --realm-proxy-id NUMBER - Capsule ID to use within this realm --realm-type VALUE - Realm type, e.g. Red Hat Identity Management or Active Directory -h , --help - Print help 3.61. recurring-logic Recurring logic related actions Usage Options -h , --help - Print help 3.61.1. recurring-logic cancel Cancel recurring logic Usage Options --id VALUE - ID of the recurring logic --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.61.2. recurring-logic delete Delete all recurring logics filtered by the arguments Usage Options --cancelled - Only delete cancelled recurring logics --finished - Only delete finished recurring logics -h , --help - Print help 3.61.3. recurring-logic info Show recurring logic details Usage Options --fields LIST - Show specified fields or predefined field sets only.
(See below) --id VALUE - ID of the recurring logic --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.160. Predefined field sets FIELDS ALL DEFAULT Id x x Cron line x x Action x x Last occurrence x x Next occurrence x x Task count x x Iteration x x Iteration limit x x Repeat until x x State x x Purpose x x 3.61.4. recurring-logic list List recurring logics Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --order VALUE - Sort and order by a searchable field, e.g. <field> DESC --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --search VALUE - Filter results -h , --help - Print help Table 3.161. Predefined field sets FIELDS ALL DEFAULT Id x x Cron line x x Task count x x Action x x Last occurrence x x Next occurrence x x Iteration x x Iteration limit x x End time x x State x x Purpose x x 3.62. remote-execution-feature Manage remote execution features Usage Options -h , --help - Print help 3.62.1. remote-execution-feature info Show remote execution feature Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.162. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Label x x Name x x x Description x x Job template name x x Job template id x x 3.62.2. remote-execution-feature list List remote execution features Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.163.
Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Description x x Job template name x x 3.62.3. remote-execution-feature update Update a job template Usage Options --id VALUE --job-template VALUE - Name to search by --job-template-id VALUE - Job template ID to be used for the feature --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --new-name VALUE --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.63. report Browse and read reports Usage Options -h , --help - Print help 3.63.1. report delete Delete a report Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.63.2. report info Show a report Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.164. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Host x x Reported at x x Origin x x Report status/applied x x Report status/restarted x x Report status/failed x x Report status/restart failures x x Report status/skipped x x Report status/pending x x Report metrics/config retrieval x x Report metrics/exec x x Report metrics/file x x Report metrics/package x x Report metrics/service x x Report metrics/user x x Report metrics/yumrepo x x Report metrics/filebucket x x Report metrics/cron x x Report metrics/total x x Logs/resource x x Logs/message x x 3.63.3. report list List all reports Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --order VALUE - Sort and order by a searchable field, e.g. <field> DESC --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --search VALUE - Filter results -h , --help - Print help Table 3.165. 
Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Host x x Last report x x Origin x x Applied x x Restarted x x Failed x x Restart failures x x Skipped x x Pending x x Search / Order fields applied - integer eventful - Values: true, false failed - integer failed_restarts - integer host - string host_id - integer host_owner_id - integer hostgroup - string hostgroup_fullname - string hostgroup_title - string id - integer last_report - datetime location - string log - text organization - string origin - string pending - integer reported - datetime resource - text restarted - integer skipped - integer 3.64. report-template Manipulate report templates Usage Options -h , --help - Print help 3.64.1. report-template clone Clone a template Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Template name --new-name VALUE --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.64.2. report-template create Create a report template Usage Options --audit-comment VALUE --default BOOLEAN - Whether or not the template is added automatically to new organizations and locations --description VALUE --file FILE - Path to a file that contains the report template content --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-ids LIST - REPLACE locations with given ids --location-title VALUE - Set the current location context for the request --location-titles LIST --locations LIST --locked BOOLEAN - Whether or not the template is locked for editing --name VALUE --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST - REPLACE organizations with given ids. --organization-title VALUE - Set the current organization context for the request --organization-titles LIST --organizations LIST --snippet BOOLEAN -h , --help - Print help -i , --interactive - Open empty template in $EDITOR. Upload the result 3.64.3. report-template delete Delete a report template Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help
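A minimal sketch of creating a template from a local ERB file (the template name, file path, and organization are hypothetical):

hammer report-template create --name "Host Inventory" --file ~/host_inventory.erb --organizations "Default Organization"

Alternatively, the -i flag opens an empty template in $EDITOR and uploads the result on save.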
3.64.4. report-template dump View report content Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.166. Predefined field sets FIELDS 3.64.5. report-template export Export a report template to ERB Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --path VALUE - Path to directory where downloaded content will be saved -h , --help - Print help 3.64.6. report-template generate Generate report Usage Options --gzip BOOLEAN - Compress the report using gzip --id VALUE --inputs KEY_VALUE_LIST - Specify inputs --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --path VALUE - Path to directory where downloaded content will be saved --report-format ENUM - Report format, defaults to csv Possible value(s): csv , json , yaml , html -h , --help - Print help 3.64.7. report-template import Import a report template Usage Options --associate ENUM - Determines when the template should associate objects based on metadata, new means only when new template is being created, always means both for new and existing template which is only being updated, never ignores metadata Possible value(s): new , always , never --default BOOLEAN - Makes the template default meaning it will be automatically associated with newly created organizations and locations (false by default) --file FILE - Path to a file that contains the report template content including metadata --force BOOLEAN - Use if you want to update locked templates --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-ids LIST - REPLACE locations with given ids --location-title VALUE - Set the current location context for the request --location-titles LIST --locations LIST --lock BOOLEAN - Lock imported templates (false by default) --name VALUE - Template name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST - REPLACE organizations with given ids.
--organization-title VALUE - Set the current organization context for the request --organization-titles LIST --organizations LIST -h , --help - Print help 3.64.8. report-template info Show a report template Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.167. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Description x x Locked x x Default x x Created at x x Updated at x x Locations/ x x Organizations/ x x Template inputs/id x x Template inputs/name x x Template inputs/description x x Template inputs/required x x Template inputs/options x x 3.64.9. report-template list List all report templates Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Scope by locations --location-title VALUE - Set the current location context for the request --order VALUE - Sort and order by a searchable field, e.g. <field> DESC --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Scope by organizations --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --search VALUE - Filter results -h , --help - Print help Table 3.168. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Search / Order fields default - Values: true, false id - integer location - string location_id - integer locked - Values: true, false name - string organization - string organization_id - integer snippet - Values: true, false template - text 3.64.10. report-template report-data Downloads a generated report Usage Options --id VALUE --job-id VALUE - ID assigned to generating job by the schedule command --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --path VALUE - Path to directory where downloaded content will be saved -h , --help - Print help 3.64.11. report-template schedule Schedule generating of a report Usage Options --generate-at VALUE - UTC time to generate report at --gzip BOOLEAN - Compress the report using gzip --id VALUE --inputs KEY_VALUE_LIST - Specify inputs --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --mail-to VALUE - If set, scheduled report will be delivered via e-mail. 
Use , to separate multiple email addresses. --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --path VALUE - Path to directory where downloaded content will be saved. Only usable if wait is specified --report-format ENUM - Report format, defaults to csv Possible value(s): csv , json , yaml , html --wait - Run the command synchronously: wait for the result and download it right away -h , --help - Print help 3.64.12. report-template update Update a report template Usage Options --audit-comment VALUE --default BOOLEAN - Whether or not the template is added automatically to new organizations and locations --description VALUE --file FILE - Path to a file that contains the report template content --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-ids LIST - REPLACE locations with given ids --location-title VALUE - Set the current location context for the request --location-titles LIST --locations LIST --locked BOOLEAN - Whether or not the template is locked for editing --name VALUE --new-name VALUE --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST - REPLACE organizations with given ids. --organization-title VALUE - Set the current organization context for the request --organization-titles LIST --organizations LIST --snippet BOOLEAN -h , --help - Print help -i , --interactive - Dump existing template and open it in $EDITOR. Update with the result 3.65. repository Manipulate repositories Usage Options -h , --help - Print help 3.65.1. repository create Create a custom repository Usage Options --ansible-collection-auth-token VALUE - The token key to use for authentication. --ansible-collection-auth-url VALUE - The URL to receive a session token from, e.g. used with Automation Hub. --ansible-collection-requirements VALUE - Contents of requirement yaml file to sync from URL --ansible-collection-requirements-file VALUE - Location of the ansible collections requirements.yml file. --arch VALUE - Architecture of content in the repository --checksum-type VALUE - Checksum of the repository, currently sha1 & sha256 are supported --content-type VALUE - Type of repository to create. View available types with "hammer repository types" --deb-architectures VALUE - Whitespace-separated list of architectures to be synced from deb-archive --deb-components VALUE - Whitespace-separated list of repo components to be synced from deb-archive --deb-releases VALUE - Whitespace-separated list of releases to be synced from deb-archive --description VALUE - Description of the repository --docker-upstream-name VALUE - Name of the upstream docker repository --download-concurrency NUMBER - Used to determine download concurrency of the repository in pulp3. Use value less than 20. Defaults to 10 --download-policy ENUM - Download policy for yum, deb, and docker repos (either immediate or on_demand ) Possible value(s): immediate , on_demand --exclude-tags LIST - Comma-separated list of tags to exclude when syncing a container image repository. Default: any tag ending in "-source" --excludes LIST - Python packages to exclude from the upstream URL, names separated by newline.
You may also specify versions, for example: django~=2.0. --gpg-key-id NUMBER - Id of the gpg key that will be assigned to the new repository --http-proxy VALUE - Name to search by --http-proxy-id NUMBER - ID of a HTTP Proxy --http-proxy-policy ENUM - Policies for HTTP Proxy for content sync Possible value(s): global_default_http_proxy , none , use_selected_http_proxy --ignorable-content LIST - List of content units to ignore while syncing a yum repository. Must be subset of srpm,treeinfo --include-tags LIST - Comma-separated list of tags to sync for a container image repository --includes LIST - Python packages to include from the upstream URL, names separated by newline. You may also specify versions, for example: django~=2.0. Leave empty to include every package. --label VALUE --metadata-expire NUMBER - Time to expire yum metadata in seconds. Only relevant for custom yum repositories. --mirroring-policy ENUM - Policy to set for mirroring content. Must be one of additive. Possible value(s): additive , mirror_complete , mirror_content_only --name VALUE - Name of the repository --organization VALUE - Organization name to search by --organization-id NUMBER --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --os-versions LIST - Identifies whether the repository should be unavailable on a client with a non-matching OS version. Pass [] to make repo available for clients regardless of OS version. Maximum length 1; allowed tags are: rhel-6, rhel-7, rhel-8, rhel-9 --package-types LIST - Package types to sync for Python content, separated by comma. Leave empty to get every package type. Package types are: bdist_dmg, bdist_dumb, bdist_egg, bdist_msi, bdist_rpm, bdist_wheel, bdist_wininst, sdist. --product VALUE - Product name to search by --product-id NUMBER - Product the repository belongs to --publish-via-http BOOLEAN - Publish Via HTTP --retain-package-versions-count NUMBER - The maximum number of versions of each package to keep. --ssl-ca-cert-id NUMBER - Identifier of the content credential containing the SSL CA Cert --ssl-client-cert-id NUMBER - Identifier of the content credential containing the SSL Client Cert --ssl-client-key-id NUMBER - Identifier of the content credential containing the SSL Client Key --upstream-authentication-token VALUE - Password of the upstream authentication token. --upstream-password VALUE - Password of the upstream repository user used for authentication --upstream-username VALUE - Username of the upstream repository user used for authentication --url VALUE - Repository source url --verify-ssl-on-sync BOOLEAN - If true, Katello will verify the upstream URL's SSL certificates are signed by a trusted CA -h , --help - Print help 3.65.2. repository delete Destroy a custom repository Usage Options --delete-empty-repo-filters BOOLEAN - Delete content view filters that have this repository as the last associated repository. Defaults to true. If false, such filters will now apply to all repositories in the content view. --id NUMBER --name VALUE - Repository name to search by --organization VALUE - Organization name to search by --organization-id VALUE - Organization ID to search by --organization-label VALUE - Organization label to search by --product VALUE - Product name to search by --product-id NUMBER - Product numeric identifier --remove-from-content-view-versions BOOLEAN Force delete the repository by removing it from all content view versions -h , --help - Print help
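For example, the create and delete commands above can be combined to stand up and tear down a custom yum repository. The organization, product, repository name, and URL below are illustrative placeholders to adapt to your environment:

```
# Create a custom yum repository (all names and the URL are placeholders)
hammer repository create \
  --organization "Example Org" \
  --product "Example Product" \
  --name "Example Custom Repo" \
  --content-type yum \
  --download-policy on_demand \
  --url "https://repo.example.com/pub/custom/"

# Delete it again, removing it from any content view versions first
hammer repository delete \
  --organization "Example Org" \
  --product "Example Product" \
  --name "Example Custom Repo" \
  --remove-from-content-view-versions true
```

3.65.3.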
repository info Show a repository Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id NUMBER - Repository ID --name VALUE - Repository name to search by --organization VALUE - Organization name to search by --organization-id NUMBER - Organization ID --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --product VALUE - Product name to search by --product-id NUMBER - Product numeric identifier -h , --help - Print help Table 3.169. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Label x x Description x x Organization x x Red hat repository x x Content type x x Content label x x Checksum type x x Mirroring policy x x Url x x Publish via http x x Published at x x Relative path x x Download policy x x Metadata expiration x x Upstream repository name x x Container image tags filter x x Container repository name x x Ignorable content units x x Http proxy/id x x Http proxy/name x x Http proxy/http proxy policy x x Product/id x x Product/name x x Gpg key/id x x Gpg key/name x x Sync/status x x Sync/last sync date x x Created x x Updated x x Content counts/packages x x Content counts/source rpms x x Content counts/package groups x x Content counts/errata x x Content counts/container image manifest lists x x Content counts/container image manifests x x Content counts/container image tags x x Content counts/files x x Content counts/module streams x x 3.65.4. repository list List of enabled repositories Usage Options --ansible-collection VALUE - Name to search by --ansible-collection-id VALUE - Id of an ansible collection to find repositories that contain the ansible collection --archived BOOLEAN - Show archived repositories --available-for VALUE - Interpret specified object to return only Repositories that can be associated with specified object. Only content_view & content_view_version are supported. --content-type VALUE - Limit the repository type to return. View available types with "hammer repository types" --content-view VALUE - Content view name to search by --content-view-id NUMBER - ID of a content view to show repositories in --content-view-version VALUE - Content view version number --content-view-version-id NUMBER ID of a content view version to show repositories in --deb VALUE - Name to search by --deb-id VALUE - Id of a deb package to find repositories that contain the deb --description VALUE - Description of the repository --download-policy ENUM - Limit to only repositories with this download policy Possible value(s): immediate , on_demand --environment VALUE - Lifecycle environment name to search by (--environment is deprecated: Use --lifecycle-environment instead) --environment-id NUMBER - (--environment-id is deprecated: Use --lifecycle-environment-id instead) --erratum-id VALUE - Id of an erratum to find repositories that contain the erratum --fields LIST - Show specified fields or predefined field sets only. (See below) --file-id VALUE - Id of a file to find repositories that contain the file --full-result BOOLEAN - Whether or not to show all results --label VALUE - Label of the repository --library BOOLEAN - Show repositories in Library and the default content view --lifecycle-environment VALUE - Lifecycle environment name to search by --lifecycle-environment-id NUMBER ID of an environment to show repositories in --name VALUE - Name of the repository --order VALUE - Sort field and order, eg. 
id DESC --organization VALUE - Organization name to search by --organization-id NUMBER - ID of an organization to show repositories in --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --page NUMBER - Page number, starting at 1 --per-page NUMBER - Number of results per page to return --product VALUE - Product name to search by --product-id NUMBER - ID of a product to show repositories of --rpm-id VALUE - Id of a rpm package to find repositories that contain the rpm --search VALUE - Search string --username VALUE - Only show the repositories readable by this user with this username --with-content VALUE - Limit the repository type to return. View available types with "hammer repository types" -h , --help - Print help Table 3.170. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Product x x Content type x x Content label x x Url x x Search / Order fields container_repository_name - string content_label - string content_type - string content_view_id - integer description - text distribution_arch - string distribution_bootable - boolean distribution_family - string distribution_variant - string distribution_version - string download_policy - string label - string name - string product - string product_id - integer product_name - string redhat - Values: true, false 3.65.5. repository reclaim-space Reclaim space from an On Demand repository Usage Options --async - Do not wait for the task --id NUMBER - Repository ID --name VALUE - Repository name to search by --organization VALUE - Organization name to search by --organization-id VALUE - Organization ID to search by --organization-label VALUE - Organization label to search by --product VALUE - Product name to search by --product-id NUMBER - Product numeric identifier -h , --help - Print help 3.65.6. repository remove-content Remove content from a repository Usage Options --content-type VALUE - The type of content unit to remove (srpm, docker_manifest, etc.). View removable types with "hammer repository types" --id NUMBER - Repository ID --ids LIST - Array of content ids to remove --name VALUE - Repository name to search by --organization VALUE - Organization name to search by --organization-id VALUE - Organization ID to search by --organization-label VALUE - Organization label to search by --product VALUE - Product name to search by --product-id NUMBER - Product numeric identifier --sync-capsule BOOLEAN - Whether or not to sync an external capsule after upload. Default: true -h , --help - Print help 3.65.7. repository republish Forces a republish of the specified repository. Usage Options --async - Do not wait for the task --force BOOLEAN - Force metadata regeneration to proceed. Dangerous when repositories use the Complete Mirroring mirroring policy --id NUMBER - Repository identifier --name VALUE - Repository name to search by --organization VALUE - Organization name to search by --organization-id VALUE - Organization ID to search by --organization-label VALUE - Organization label to search by --product VALUE - Product name to search by --product-id NUMBER - Product numeric identifier -h , --help - Print help
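As a quick illustration of the two maintenance commands above, the following reclaims space from an on_demand repository and then forces a metadata republish. The repository ID is a placeholder:

```
# Reclaim already-downloaded content from an on_demand repository
hammer repository reclaim-space --id 42 --async

# Force the repository metadata to be regenerated and republished
hammer repository republish --id 42 --async
```

3.65.8.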
repository synchronize Sync a repository Usage Options --async - Do not wait for the task --id NUMBER - Repository ID --incremental BOOLEAN - Perform an incremental import --name VALUE - Repository name to search by --organization VALUE - Organization name to search by --organization-id NUMBER - Organization ID --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --product VALUE - Product name to search by --product-id NUMBER - Product numeric identifier --skip-metadata-check BOOLEAN Force sync even if no upstream changes are detected. Only used with yum or deb repositories. --source-url VALUE - Temporarily override feed URL for sync --validate-contents BOOLEAN - Force a sync and validate the checksums of all content. Only used with yum repositories. -h , --help - Print help 3.65.9. repository types Show the available repository types Usage Options --creatable BOOLEAN - When set to True repository types that are creatable will be returned --fields LIST - Show specified fields or predefined field sets only. (See below) -h , --help - Print help Table 3.171. Predefined field sets FIELDS ALL DEFAULT THIN Name x x x Content types/type x x Content types/generic? x x Content types/removable? x x Content types/uploadable? x x Content types/indexed? x x 3.65.10. repository update Update a repository Usage Options --ansible-collection-auth-token VALUE - The token key to use for authentication. --ansible-collection-auth-url VALUE - The URL to receive a session token from, e.g. used with Automation Hub. --ansible-collection-requirements VALUE - Contents of requirement yaml file to sync from URL --ansible-collection-requirements-file VALUE Location of the ansible collections requirements.yml file. --arch VALUE - Architecture of content in the repository --checksum-type VALUE - Checksum of the repository, currently sha1 & sha256 are supported --deb-architectures VALUE - Whitespace-separated list of architectures to be synced from deb-archive --deb-components VALUE - Whitespace-separated list of repo components to be synced from deb-archive --deb-releases VALUE - Whitespace-separated list of releases to be synced from deb-archive --description VALUE - Description of the repository --docker-digest VALUE - Container Image manifest digest --docker-tag VALUE - Container Image tag --docker-upstream-name VALUE - Name of the upstream docker repository --download-concurrency NUMBER - Used to determine download concurrency of the repository in pulp3. Use value less than 20. Defaults to 10 --download-policy ENUM - Download policy for yum, deb, and docker repos (either immediate or on_demand ) Possible value(s): immediate , on_demand --exclude-tags LIST - Comma-separated list of tags to exclude when syncing a container image repository. Default: any tag ending in "-source" --excludes LIST - Python packages to exclude from the upstream URL, names separated by newline. You may also specify versions, for example: django~=2.0. --gpg-key-id NUMBER - Id of the gpg key that will be assigned to the new repository --http-proxy VALUE - Name to search by --http-proxy-id NUMBER - ID of a HTTP Proxy --http-proxy-policy ENUM - Policies for HTTP Proxy for content sync Possible value(s): global_default_http_proxy , none , use_selected_http_proxy --id NUMBER - Repository ID --ignorable-content LIST - List of content units to ignore while syncing a yum repository. 
Must be subset of srpm,treeinfo --include-tags LIST - Comma-separated list of tags to sync for a container image repository --includes LIST - Python packages to include from the upstream URL, names separated by newline. You may also specify versions, for example: django~=2.0. Leave empty to include every package. --metadata-expire NUMBER - Time to expire yum metadata in seconds. Only relevant for custom yum repositories. --mirroring-policy ENUM - Policy to set for mirroring content. Must be one of additive. Possible value(s): additive , mirror_complete , mirror_content_only --name VALUE --new-name VALUE --organization VALUE - Organization name to search by --organization-id VALUE - Organization ID to search by --organization-label VALUE - Organization label to search by --os-versions LIST - Identifies whether the repository should be unavailable on a client with a non-matching OS version. Pass [] to make repo available for clients regardless of OS version. Maximum length 1; allowed tags are: rhel-6, rhel-7, rhel-8, rhel-9 --package-types LIST - Package types to sync for Python content, separated by comma. Leave empty to get every package type. Package types are: bdist_dmg, bdist_dumb, bdist_egg, bdist_msi, bdist_rpm, bdist_wheel, bdist_wininst, sdist. --product VALUE - Product name to search by --product-id NUMBER - Product numeric identifier --publish-via-http BOOLEAN - Publish Via HTTP --retain-package-versions-count NUMBER - The maximum number of versions of each package to keep. --ssl-ca-cert-id NUMBER - Identifier of the content credential containing the SSL CA Cert --ssl-client-cert-id NUMBER - Identifier of the content credential containing the SSL Client Cert --ssl-client-key-id NUMBER - Identifier of the content credential containing the SSL Client Key --upstream-authentication-token VALUE - Password of the upstream authentication token. --upstream-password VALUE - Password of the upstream repository user used for authentication --upstream-username VALUE - Username of the upstream repository user used for authentication --url VALUE - Repository source url --verify-ssl-on-sync BOOLEAN - If true, Katello will verify the upstream URL's SSL certificates are signed by a trusted CA -h , --help - Print help 3.65.11. repository upload-content Upload content into the repository Usage Options --async - Do not wait for the task. --content-type VALUE - The type of content unit to upload (srpm, file, etc.). View uploadable types with "hammer repository types" --fields LIST - Show specified fields or predefined field sets only. (See below) --id NUMBER - Repository ID --name VALUE - Repository name to search by --organization VALUE - Organization name to search by --organization-id NUMBER - Organization ID --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --ostree-repository-name VALUE Name of OSTree repository in archive. --path FILE - Upload file, directory of files, or glob of files as content for a repository. Globs must be escaped by single or double quotes --product VALUE - Product name to search by --product-id NUMBER - Product numeric identifier -h , --help - Print help Table 3.172. Predefined field sets FIELDS 3.66. repository-set Manipulate repository sets on the server Usage Options -h , --help - Print help 3.66.1. repository-set available-repositories Get list of available repositories for the repository set Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id NUMBER - ID of the repository set --name VALUE - Repository set name to search by --organization VALUE - Organization name to search by --organization-id NUMBER - Organization identifier --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --product VALUE - Product name to search by --product-id NUMBER - ID of a product to list repository sets from -h , --help - Print help Table 3.173. Predefined field sets FIELDS ALL DEFAULT THIN Name x x x Arch x x Release x x Registry name x x Enabled x x
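For example, available-repositories above is typically paired with the enable command documented just below to turn on a Red Hat repository. The organization, product, and repository-set names here are illustrative:

```
# Inspect which architecture/release combinations a repository set offers
hammer repository-set available-repositories \
  --organization "Example Org" \
  --product "Red Hat Enterprise Linux for x86_64" \
  --name "Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs)"

# Enable one combination from the set (see repository-set enable below)
hammer repository-set enable \
  --organization "Example Org" \
  --product "Red Hat Enterprise Linux for x86_64" \
  --name "Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs)" \
  --basearch x86_64
```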
3.66.2. repository-set disable Disable a repository from the set Usage Options --basearch VALUE - Basearch to disable --id NUMBER - ID of the repository set to disable --name VALUE - Repository set name to search by --organization VALUE - Organization name to search by --organization-id NUMBER - Organization identifier --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --product VALUE - Product name to search by --product-id NUMBER - ID of the product containing the repository set --releasever VALUE - Releasever to disable --repository VALUE - Repository name to search by --repository-id NUMBER - ID of the repository within the set to disable -h , --help - Print help 3.66.3. repository-set enable Enable a repository from the set Usage Options --basearch VALUE - Basearch to enable --id NUMBER - ID of the repository set to enable --name VALUE - Repository set name to search by --organization VALUE - Organization name to search by --organization-id NUMBER - Organization identifier --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --product VALUE - Product name to search by --product-id NUMBER - ID of the product containing the repository set --releasever VALUE - Releasever to enable -h , --help - Print help 3.66.4. repository-set info Get info about a repository set Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id NUMBER - ID of the repository set --name VALUE - Repository set name to search by --organization VALUE - Organization name to search by --organization-id NUMBER - Organization identifier --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --product VALUE - Product name to search by --product-id NUMBER - ID of a product to list repository sets from -h , --help - Print help Table 3.174. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Type x x Url x x Gpg key x x Label x x Enabled repositories/id x x Enabled repositories/name x x 3.66.5. repository-set list List repository sets. Usage Options --activation-key VALUE - Activation key name to search by --activation-key-id NUMBER - Activation key identifier --content-access-mode-all BOOLEAN Get all content available, not just that provided by subscriptions. --content-access-mode-env BOOLEAN Limit content to just that available in the host's or activation key's content view version and lifecycle environment. --enabled BOOLEAN - If true, only return repository sets that have been enabled. Defaults to false --fields LIST - Show specified fields or predefined field sets only. (See below) --full-result BOOLEAN - Whether or not to show all results --host VALUE - Host name --host-id NUMBER - Id of the host --name VALUE - Repository set name to search on --order VALUE - Sort field and order, eg.
id DESC --organization VALUE - Organization name to search by --organization-id NUMBER - Organization identifier --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --page NUMBER - Page number, starting at 1 --per-page NUMBER - Number of results per page to return --product VALUE - Product name to search by --product-id NUMBER - ID of a product to list repository sets from --repository-type ENUM - Limit content to Red Hat / custom Possible value(s): redhat , custom --search VALUE - Search string --status ENUM - Limit content to enabled / disabled / overridden Possible value(s): enabled , disabled , overridden --with-active-subscription BOOLEAN If true, only return repository sets that are associated with an active subscription --with-custom BOOLEAN - If true, return custom repository sets along with redhat repos. Will be ignored if repository_type is supplied. -h , --help - Print help Table 3.175. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Type x x Name x x x Search / Order fields content_label - string content_type - string enabled_by_default - Values: true, false label - string name - string path - string product - string product_id - integer product_name - string redhat - Values: true, false 3.67. role Manage user roles Usage Options -h , --help - Print help 3.67.1. role clone Clone a role Usage Options --description VALUE - Role description --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-ids LIST - REPLACE locations with given ids --location-title VALUE - Set the current location context for the request --location-titles LIST --locations LIST --name VALUE --new-name VALUE --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST - REPLACE organizations with given ids. --organization-title VALUE - Set the current organization context for the request --organization-titles LIST --organizations LIST -h , --help - Print help 3.67.2. role create Create a role Usage Options --description VALUE - Role description --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-ids LIST - REPLACE locations with given ids --location-title VALUE - Set the current location context for the request --location-titles LIST --locations LIST --name VALUE --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST - REPLACE organizations with given ids. --organization-title VALUE - Set the current organization context for the request --organization-titles LIST --organizations LIST -h , --help - Print help 3.67.3. role delete Delete a role Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - User role name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help
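For example, a common pattern is to clone an existing role and adjust the copy rather than build a role from scratch. The role names here are placeholders:

```
# Clone an existing role under a new, editable name
hammer role clone --name "Viewer" --new-name "Example Audit Viewer"

# Remove the clone again if it is no longer needed
hammer role delete --name "Example Audit Viewer"
```

3.67.4.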
role filters List all filters Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE - User role id --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - User role name --order VALUE - Sort and order by a searchable field, e.g. <field> DESC --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results -h , --help - Print help Table 3.176. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Resource type x x Search x x Unlimited? x x Override? x x Role x x Permissions x x 3.67.5. role info Show a role Usage Options --description VALUE --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - User role name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.177. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Builtin x x Description x x Locations/ x x Organizations/ x x 3.67.6. role list List all roles Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --order VALUE - Sort and order by a searchable field, e.g. <field> DESC --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --search VALUE - Filter results -h , --help - Print help Table 3.178. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Builtin x x Search / Order fields builtin - Values: true, false description - text id - integer locked - Values: true, false name - string permission - string 3.67.7. role update Update a role Usage Options --description VALUE - Role description --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-ids LIST - REPLACE locations with given ids --location-title VALUE - Set the current location context for the request --location-titles LIST --locations LIST --name VALUE --new-name VALUE --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST - REPLACE organizations with given ids. 
--organization-title VALUE - Set the current organization context for the request --organization-titles LIST --organizations LIST -h , --help - Print help 3.68. scap-content Manipulate SCAP contents Usage Options -h , --help - Print help 3.68.1. scap-content bulk-upload Upload scap contents in bulk Usage Options --directory VALUE - Directory to upload when using "directory" upload type --files LIST - File paths to upload when using "files" upload type --location VALUE - Name to search by --location-id NUMBER - Set the current location context for the request --organization VALUE - Name to search by --organization-id NUMBER - Set the current organization context for the request --type ENUM - Type of the upload Possible value(s): files , directory , default -h , --help - Print help 3.68.2. scap-content create Create SCAP content Usage Options --location VALUE - Name to search by --location-id NUMBER - Set the current location context for the request --location-ids LIST - REPLACE locations with given ids --locations LIST --organization VALUE - Name to search by --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST - REPLACE organizations with given ids. --organizations LIST --original-filename VALUE - Original file name of the XML file --scap-file FILE - SCAP content file --title VALUE - SCAP content name -h , --help - Print help 3.68.3. scap-content delete Deletes an SCAP content Usage Options --id VALUE --location VALUE - Name to search by --location-id NUMBER - Set the current location context for the request --organization VALUE - Name to search by --organization-id NUMBER - Set the current organization context for the request --title VALUE - SCAP content title -h , --help - Print help 3.68.4. scap-content download Download an SCAP content as XML Usage Options --id VALUE --location VALUE - Name to search by --location-id NUMBER - Set the current location context for the request --organization VALUE - Name to search by --organization-id NUMBER - Set the current organization context for the request --path VALUE - Path to directory where downloaded file will be saved --title VALUE - SCAP content title -h , --help - Print help 3.68.5. scap-content info Show an SCAP content Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --location VALUE - Name to search by --location-id NUMBER - Set the current location context for the request --organization VALUE - Name to search by --organization-id NUMBER - Set the current organization context for the request --title VALUE - SCAP content title -h , --help - Print help Table 3.179. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Title x x x Digest x x Created at x x Original filename x x Scap content profiles/id x x Scap content profiles/profile id x x Scap content profiles/title x x Locations/ x x Organizations/ x x 3.68.6. scap-content list List SCAP contents Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --order VALUE - Sort and order by a searchable field, e.g. 
<field> DESC --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --search VALUE - Filter results -h , --help - Print help Table 3.180. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Title x x x Digest x x Search / Order fields created_at - datetime filename - string location - string location_id - integer organization - string organization_id - integer title - string 3.68.7. scap-content update Update an SCAP content Usage Options --id VALUE --location VALUE - Name to search by --location-id NUMBER - Set the current location context for the request --location-ids LIST - REPLACE locations with given ids --locations LIST --new-title VALUE - SCAP content name --organization VALUE - Name to search by --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST - REPLACE organizations with given ids. --organizations LIST --original-filename VALUE - Original file name of the XML file --scap-file FILE - SCAP content file --title VALUE - SCAP content name -h , --help - Print help 3.69. scap-content-profile Manipulate Scap Content Profiles Usage Options -h , --help - Print help 3.69.1. scap-content-profile list List SCAP content profiles Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --order VALUE - Sort and order by a searchable field, e.g. <field> DESC --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --search VALUE - Filter results -h , --help - Print help Table 3.181. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Title x x Profile id x x Scap content id x x Scap content title x x Tailoring file id x x Tailoring file name x x x Search / Order fields profile_id - string title - string 3.70. settings Change server settings Usage Options -h , --help - Print help 3.70.1. settings info Show a setting Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Setting name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.182. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Description x x Category x x Settings type x x Value x x 3.70.2. settings list List all settings Usage Options --fields LIST - Show specified fields or predefined field sets only. 
(See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --order VALUE - Sort and order by a searchable field, e.g. <field> DESC --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --search VALUE - Filter results -h , --help - Print help Table 3.183. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Full name x x Value x x Description x x Search / Order fields id - integer name - string 3.70.3. settings set Update a setting Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Setting name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --value VALUE -h , --help - Print help 3.71. shell Interactive shell Usage Options -h , --help - Print help 3.72. simple-content-access Simple content access commands Usage Options -h , --help - Print help Unfortunately, the server does not support this operation. 3.72.1. simple-content-access disable Disable simple content access for a manifest. Warning Simple Content Access will be required for all organizations in Satellite 6.16. Usage Options --async - Do not wait for the task --organization VALUE - Organization name to search by --organization-id NUMBER - Organization ID --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title -h , --help - Print help 3.72.2. simple-content-access enable Enable simple content access for a manifest Usage Options --async - Do not wait for the task --organization VALUE - Organization name to search by --organization-id NUMBER - Organization ID --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title -h , --help - Print help 3.72.3. simple-content-access status Check if the specified organization has Simple Content Access enabled. Warning Simple Content Access will be required for all organizations in Satellite 6.16. Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --organization VALUE - Organization name to search by --organization-id NUMBER - Organization ID --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title -h , --help - Print help Table 3.184. Predefined field sets FIELDS ALL DEFAULT Simple content access x x 3.73. srpm Manipulate source RPMs Usage Options -h , --help - Print help 3.73.1. srpm info Show SRPM details Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE - SRPM details identifier --name VALUE - Name to search by --organization VALUE - Organization name to search by --organization-id NUMBER - Organization identifier --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --repository VALUE - Repository name to search by --repository-id NUMBER - Repository identifier -h , --help - Print help Table 3.185. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Version x x Architecture x x Epoch x x Release x x Filename x x Description x x
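For example, srpm info is usually combined with srpm list (documented below) to find an identifier first. The organization, repository name, and ID are placeholders:

```
# Find source RPMs in a given repository, then inspect one by its ID
hammer srpm list --organization "Example Org" --repository "Example Custom Repo"
hammer srpm info --organization "Example Org" --id 42
```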
3.73.2. srpm list List srpms Usage Options --content-view VALUE - Content view name to search by --content-view-id NUMBER - Content view numeric identifier --content-view-version VALUE - Content view version number --content-view-version-id NUMBER Content View Version identifier --environment VALUE - Lifecycle environment name to search by (--environment is deprecated: Use --lifecycle-environment instead) --environment-id NUMBER - (--environment-id is deprecated: Use --lifecycle-environment-id instead) --fields LIST - Show specified fields or predefined field sets only. (See below) --full-result BOOLEAN - Whether or not to show all results --lifecycle-environment VALUE - Lifecycle environment name to search by --lifecycle-environment-id NUMBER Environment identifier --order VALUE - Sort field and order, eg. id DESC --organization VALUE - Organization name to search by --organization-id NUMBER - Organization identifier --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --page NUMBER - Page number, starting at 1 --per-page NUMBER - Number of results per page to return --product VALUE - Product name to search by --product-id NUMBER - Product numeric identifier --repository VALUE - Repository name to search by --repository-id NUMBER - Repository identifier --search VALUE - Search string -h , --help - Print help Table 3.186. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Filename x x 3.74. status Get the complete status of the server and/or its subcomponents Usage Options -h , --help - Print help 3.74.1. status foreman Shows status and version information of the Satellite system and its subcomponents Usage Options -h , --help - Print help 3.74.2. status katello Shows version information Usage Options -h , --help - Print help 3.75. subnet Manipulate subnets Usage Options -h , --help - Print help 3.75.1. subnet create Create a subnet Usage Options --bmc VALUE - BMC Proxy to use within this subnet --bmc-id NUMBER - BMC Capsule ID to use within this subnet --boot-mode ENUM - Default boot mode for interfaces assigned to this subnet.
Possible value(s): Static , DHCP --description VALUE - Subnet description --dhcp VALUE - DHCP Proxy to use within this subnet --dhcp-id NUMBER - DHCP Capsule ID to use within this subnet --discovery-id NUMBER - ID of Discovery Capsule to use within this subnet for managing connection to discovered hosts --dns VALUE - DNS Proxy to use within this subnet --dns-id NUMBER - DNS Capsule ID to use within this subnet --dns-primary VALUE - Primary DNS for this subnet --dns-secondary VALUE - Secondary DNS for this subnet --domain-ids LIST - Domains in which this subnet is part --domains LIST --externalipam-group VALUE - External IPAM group - only relevant when IPAM is set to external --externalipam-id NUMBER - External IPAM Capsule ID to use within this subnet --from VALUE - Starting IP Address for IP auto suggestion --gateway VALUE - Subnet gateway --httpboot-id NUMBER - HTTPBoot Capsule ID to use within this subnet --ipam ENUM - IP Address auto suggestion mode for this subnet. Possible value(s): DHCP , Internal DB , Random DB , EUI-64 , External IPAM , None --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-ids LIST - REPLACE locations with given ids --location-title VALUE - Set the current location context for the request --location-titles LIST --locations LIST --mask VALUE - Netmask for this subnet --mtu NUMBER - MTU for this subnet --name VALUE - Subnet name --network VALUE - Subnet network --network-type ENUM - Type or protocol, IPv4 or IPv6, defaults to IPv4 Possible value(s): IPv4 , IPv6 --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST - REPLACE organizations with given ids. --organization-title VALUE - Set the current organization context for the request --organization-titles LIST --organizations LIST --prefix VALUE - Network prefix in CIDR notation (e.g. 64) for this subnet --remote-execution-proxy-ids LIST List of Capsule IDs to be used for remote execution --template-id NUMBER - Template HTTP(S) Capsule ID to use within this subnet --tftp VALUE - TFTP Proxy to use within this subnet --tftp-id NUMBER - TFTP Capsule ID to use within this subnet --to VALUE - Ending IP Address for IP auto suggestion --vlanid VALUE - VLAN ID for this subnet -h , --help - Print help 3.75.2. subnet delete Delete a subnet Usage Options --id NUMBER - Subnet numeric identifier --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Subnet name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.75.3. subnet delete-parameter Delete parameter for a subnet Usage Options --name VALUE - Parameter name --subnet VALUE - Subnet name --subnet-id NUMBER -h , --help - Print help 3.75.4. subnet info Show a subnet Usage Options --fields LIST - Show specified fields or predefined field sets only. 
(See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Subnet name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --show-hidden-parameters BOOLEAN Display hidden parameter values -h , --help - Print help Table 3.187. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Description x x Protocol x x Priority x x Network addr x x Network prefix x x Network mask x x Gateway addr x x Primary dns x x Secondary dns x x Smart proxies/dns x x Smart proxies/tftp x x Smart proxies/dhcp x x Remote execution proxies/id x x Remote execution proxies/name x x Ipam x x Start of ip range x x End of ip range x x Vlan id x x Mtu x x Boot mode x x Domains/ x x Locations/ x x Organizations/ x x Parameters/ x x 3.75.5. subnet list List of subnets Usage Options --domain VALUE - Domain name --domain-id VALUE - ID of domain --fields LIST - Show specified fields or predefined field sets only. (See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Scope by locations --location-title VALUE - Set the current location context for the request --order VALUE - Sort and order by a searchable field, e.g. <field> DESC --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Scope by organizations --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --search VALUE - Filter results -h , --help - Print help Table 3.188. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Network addr x x Network prefix x x Network mask x x Vlan id x x Boot mode x x Gateway address x x Search / Order fields boot_mode - string dns_primary - string dns_secondary - string domain - string gateway - string id - integer ipam - string location - string location_id - integer mask - string mtu - integer name - text network - string nic_delay - integer organization - string organization_id - integer params - string type - string vlanid - integer 3.75.6. subnet set-parameter Create or update parameter for a subnet Usage Options --hidden-value BOOLEAN - Should the value be hidden --name VALUE - Parameter name --parameter-type ENUM - Type of the parameter Possible value(s): string , boolean , integer , real , array , hash , yaml , json Default: "string" --subnet VALUE - Subnet name --subnet-id NUMBER --value VALUE - Parameter value -h , --help - Print help 3.75.7. subnet update Update a subnet Usage Options --bmc VALUE - BMC Proxy to use within this subnet --bmc-id NUMBER - BMC Capsule ID to use within this subnet --boot-mode ENUM - Default boot mode for interfaces assigned to this subnet. 
Possible value(s): Static , DHCP --description VALUE - Subnet description --dhcp VALUE - DHCP Proxy to use within this subnet --dhcp-id NUMBER - DHCP Capsule ID to use within this subnet --discovery-id NUMBER - ID of Discovery Capsule to use within this subnet for managing connection to discovered hosts --dns VALUE - DNS Proxy to use within this subnet --dns-id NUMBER - DNS Capsule ID to use within this subnet --dns-primary VALUE - Primary DNS for this subnet --dns-secondary VALUE - Secondary DNS for this subnet --domain-ids LIST - Domains in which this subnet is part --domains LIST --externalipam-group VALUE - External IPAM group - only relevant when IPAM is set to external --externalipam-id NUMBER - External IPAM Capsule ID to use within this subnet --from VALUE - Starting IP Address for IP auto suggestion --gateway VALUE - Subnet gateway --httpboot-id NUMBER - HTTPBoot Capsule ID to use within this subnet --id NUMBER - Subnet numeric identifier --ipam ENUM - IP Address auto suggestion mode for this subnet. Possible value(s): DHCP , Internal DB , Random DB , EUI-64 , External IPAM , None --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-ids LIST - REPLACE locations with given ids --location-title VALUE - Set the current location context for the request --location-titles LIST --locations LIST --mask VALUE - Netmask for this subnet --mtu NUMBER - MTU for this subnet --name VALUE - Subnet name --network VALUE - Subnet network --network-type ENUM - Type or protocol, IPv4 or IPv6, defaults to IPv4 Possible value(s): IPv4 , IPv6 --new-name VALUE - Subnet name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST - REPLACE organizations with given ids. --organization-title VALUE - Set the current organization context for the request --organization-titles LIST --organizations LIST --prefix VALUE - Network prefix in CIDR notation (e.g. 64) for this subnet --remote-execution-proxy-ids LIST List of Capsule IDs to be used for remote execution --template-id NUMBER - Template HTTP(S) Capsule ID to use within this subnet --tftp VALUE - TFTP Proxy to use within this subnet --tftp-id NUMBER - TFTP Capsule ID to use within this subnet --to VALUE - Ending IP Address for IP auto suggestion --vlanid VALUE - VLAN ID for this subnet -h , --help - Print help 3.76. subscription Manipulate subscriptions Usage Options -h , --help - Print help 3.76.1. subscription delete-manifest Delete manifest from Red Hat provider Usage Options --async - Do not wait for the task --organization VALUE - Organization name to search by --organization-id NUMBER - Organization id --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title -h , --help - Print help 3.76.2. subscription list List organization subscriptions Usage Options --activation-key VALUE - Activation key name to search by --activation-key-id VALUE - Activation key ID --available-for VALUE - Object to show subscriptions available for, either host or activation_key --fields LIST - Show specified fields or predefined field sets only. 
(See below) --full-result BOOLEAN - Whether or not to show all results --host VALUE - Host name --host-id VALUE - Id of a host --match-host BOOLEAN - Ignore subscriptions that are unavailable to the specified host --match-installed BOOLEAN - Return subscriptions that match installed products of the specified host --name VALUE - Name of the subscription --no-overlap BOOLEAN - Return subscriptions which do not overlap with a currently-attached subscription --order VALUE - Sort field and order, eg. id DESC --organization VALUE - Organization name to search by --organization-id NUMBER - Organization ID --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --page NUMBER - Page number, starting at 1 --per-page NUMBER - Number of results per page to return --search VALUE - Search string -h , --help - Print help Table 3.189. Predefined field sets FIELDS ALL DEFAULT Id x x Uuid x x Name x x Type x x Contract x x Account x x Support x x Start date x x End date x x Quantity x x Consumed x x 3.76.3. subscription manifest-history Obtain manifest history for subscriptions Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --organization VALUE - Organization name to search by --organization-id NUMBER - Organization ID --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title -h , --help - Print help Table 3.190. Predefined field sets FIELDS ALL DEFAULT Status x x Status message x x Time x x 3.76.4. subscription refresh-manifest Refresh previously imported manifest for Red Hat provider Usage Options --async - Do not wait for the task --organization VALUE - Organization name to search by --organization-id NUMBER - Organization id --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title -h , --help - Print help 3.76.5. subscription upload Upload a subscription manifest Usage Options --async - Do not wait for the task --file FILE - Subscription manifest file --organization VALUE - Organization name to search by --organization-id NUMBER - Organization id --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title -h , --help - Print help 3.77. sync-plan Manipulate sync plans Usage Options -h , --help - Print help 3.77.1. sync-plan create Create a sync plan Usage Options --cron-expression VALUE - Set this when interval is custom cron --description VALUE - Sync plan description --enabled BOOLEAN - Enables or disables synchronization --interval VALUE - How often synchronization should run --name VALUE - Sync plan name --organization VALUE - Organization name to search by --organization-id NUMBER - Organization ID --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --sync-date DATETIME - Start date and time for the sync plan. Time is optional: if kept blank, the current system time will be used -h , --help - Print help 3.77.2. sync-plan delete Destroy a sync plan Usage Options --id NUMBER - Sync plan numeric identifier --name VALUE - Sync plan name to search by --organization VALUE - Organization name to search by --organization-id NUMBER - Organization ID --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title -h , --help - Print help 3.77.3. sync-plan info Show a sync plan Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id NUMBER - Sync plan numeric identifier --name VALUE - Sync plan name to search by --organization VALUE - Organization name to search by --organization-id NUMBER - Organization ID --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title -h , --help - Print help Table 3.191. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Start date x x Interval x x Enabled x x Cron expression x x Recurring logic id x x Description x x Created at x x Updated at x x sync x x Products/id x x Products/name x x
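For example, the create command above can define a nightly plan using a custom cron interval. The plan name, organization, start date, and schedule below are placeholders:

```
# Create an enabled sync plan that runs nightly at 03:00 via a custom cron
hammer sync-plan create \
  --organization "Example Org" \
  --name "Example Nightly Sync" \
  --enabled true \
  --interval "custom cron" \
  --cron-expression "0 3 * * *" \
  --sync-date "2024-01-01 03:00:00"
```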
3.77.4. sync-plan list List sync plans Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --full-result BOOLEAN - Whether or not to show all results --interval ENUM - Filter by interval Possible value(s): hourly , daily , weekly , custom cron --name VALUE - Filter by name --order VALUE - Sort field and order, eg. id DESC --organization VALUE - Organization name to search by --organization-id NUMBER - Organization ID --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --page NUMBER - Page number, starting at 1 --per-page NUMBER - Number of results per page to return --search VALUE - Search string --sync-date VALUE - Filter by sync date -h , --help - Print help Table 3.192. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Start date x x Interval x x Enabled x x Cron expression x x Recurring logic id x x Search / Order fields enabled - Values: true, false interval - string name - string organization_id - integer 3.77.5. sync-plan update Update a sync plan Usage Options --cron-expression VALUE - Add custom cron logic for sync plan --description VALUE - Sync plan description --enabled BOOLEAN - Enables or disables synchronization --id NUMBER - Sync plan numeric identifier --interval VALUE - How often synchronization should run --name VALUE - Sync plan name --new-name VALUE - Sync plan name --organization VALUE - Organization name to search by --organization-id NUMBER - Organization ID --organization-label VALUE - Organization label to search by --organization-title VALUE - Organization title --sync-date DATETIME - Start date and time of the synchronization -h , --help - Print help 3.78. tailoring-file Manipulate Tailoring files Usage Options -h , --help - Print help 3.78.1. tailoring-file create Create a Tailoring file Usage Options --location VALUE - Name to search by --location-id NUMBER - Set the current location context for the request --location-ids LIST - REPLACE locations with given ids --locations LIST --name VALUE - Tailoring file name --organization VALUE - Name to search by --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST - REPLACE organizations with given ids. --organizations LIST --original-filename VALUE - Original file name of the XML file --scap-file FILE - Tailoring file content -h , --help - Print help
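For instance, a tailoring file can be uploaded from a local XML file as follows; the organization, file name, and path are placeholders:

```
# Upload an OpenSCAP tailoring file from a local XML document
hammer tailoring-file create \
  --organization "Example Org" \
  --name "Example Tailoring File" \
  --scap-file /tmp/example_tailoring.xml
```

3.78.2.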
tailoring-file delete Delete a Tailoring file Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.78.3. tailoring-file download Download a Tailoring file as XML Usage Options --id VALUE --location VALUE - Name to search by --location-id NUMBER - Set the current location context for the request --name VALUE - Tailoring file name --organization VALUE - Name to search by --organization-id NUMBER - Set the current organization context for the request --path VALUE - Path to directory where downloaded file will be saved -h , --help - Print help 3.78.4. tailoring-file info Show a Tailoring file Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --location VALUE - Name to search by --location-id NUMBER - Set the current location context for the request --name VALUE - Tailoring file name --organization VALUE - Name to search by --organization-id NUMBER - Set the current organization context for the request -h , --help - Print help Table 3.193. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Created at x x Original filename x x Tailoring file profiles/id x x Tailoring file profiles/profile id x x Tailoring file profiles/title x x Locations/ x x Organizations/ x x 3.78.5. tailoring-file list List Tailoring files Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --order VALUE - Sort and order by a searchable field, e.g. <field> DESC --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --search VALUE - Filter results -h , --help - Print help Table 3.194. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Search / Order fields created_at - datetime filename - string location - string location_id - integer name - string organization - string organization_id - integer 3.78.6. tailoring-file update Update a Tailoring file Usage Options --id VALUE --location VALUE - Name to search by --location-id NUMBER - Set the current location context for the request --location-ids LIST - REPLACE locations with given ids --locations LIST --name VALUE - Tailoring file name --new-name VALUE - Tailoring file name --organization VALUE - Name to search by --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST - REPLACE organizations with given ids. --organizations LIST --original-filename VALUE - Original file name of the XML file --scap-file FILE - Tailoring file content -h , --help - Print help
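For example, renaming an existing Tailoring file might look like the following; the ID and new name are illustrative assumptions:
hammer tailoring-file update --id 1 --new-name "Updated RHEL Tailoring"
3.79. task Tasks related actions.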
Usage Options -h , --help - Print help 3.79.1. task info Show task details Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE - UUID of the task --location-id NUMBER - Set the current location context for the request --organization-id NUMBER - Set the current organization context for the request -h , --help - Print help Table 3.195. Predefined field sets FIELDS ALL DEFAULT Id x x Action x x State x x Result x x Started at x x Ended at x x Duration x x Owner x x Task errors x x 3.79.2. task list List tasks Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --location-id NUMBER - Set the current location context for the request --order VALUE - Sort and order by a searchable field, e.g. <field> DESC --organization-id NUMBER - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --parent-task-id VALUE - UUID of the task --per-page VALUE - Number of results per page to return, all to return all results --search VALUE - Filter results -h , --help - Print help Table 3.196. Predefined field sets FIELDS ALL DEFAULT Id x x Action x x State x x Result x x Started at x x Ended at x x Duration x x Owner x x Task errors x x 3.79.3. task progress Show the progress of the task Usage Options --id VALUE - UUID of the task --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.79.4. task resume Resume all tasks paused in error state Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --search VALUE - Resume tasks matching search string --task-ids LIST - Resume specific tasks by ID --tasks LIST -h , --help - Print help Table 3.197. Predefined field sets FIELDS ALL DEFAULT Total tasks found paused in error state x x Total tasks resumed x x Resumed tasks/task identifier x x Resumed tasks/task action x x Resumed tasks/task errors x x Total tasks failed to resume x x Failed tasks/task identifier x x Failed tasks/task action x x Failed tasks/task errors x x Total tasks skipped x x Skipped tasks/task identifier x x Skipped tasks/task action x x Skipped tasks/task errors x x 3.80. template Manipulate provisioning templates Usage Options -h , --help - Print help 3.80.1. template add-operatingsystem Associate an operating system Usage Options --id VALUE --name VALUE - Name to search by --operatingsystem VALUE - Operating system title --operatingsystem-id NUMBER -h , --help - Print help 3.80.2. 
template build-pxe-default Update the default PXE menu on all configured TFTP servers Usage Options --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.80.3. template clone Clone a provisioning template Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Template name --new-name VALUE - Template name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.80.4. template combination Manage template combinations Usage Options -h , --help - Print help 3.80.4.1. template combination create Add a template combination Usage Options --hostgroup VALUE - Hostgroup name --hostgroup-id VALUE - ID of host group --hostgroup-title VALUE - Hostgroup title --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --provisioning-template VALUE - Name to search by --provisioning-template-id VALUE - ID of config template -h , --help - Print help 3.80.4.2. template combination delete Delete a template combination Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.80.4.3. template combination info Show template combination Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --hostgroup VALUE - Hostgroup name --hostgroup-id VALUE - ID of host group --hostgroup-title VALUE - Hostgroup title --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --provisioning-template VALUE - Name to search by --provisioning-template-id VALUE - ID of config template -h , --help - Print help Table 3.198.
Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Provisioning template id x x Provisioning template name x x Hostgroup id x x Hostgroup name x x Locations/ x x Organizations/ x x Created at x x Updated at x x 3.80.4.4. template combination list List template combination Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --provisioning-template VALUE - Name to search by --provisioning-template-id VALUE - ID of config template -h , --help - Print help Table 3.199. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Provisioning template x x Hostgroup x x 3.80.4.5. template combination update Update template combination Usage Options --hostgroup VALUE - Hostgroup name --hostgroup-id VALUE - ID of host group --hostgroup-title VALUE - Hostgroup title --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --provisioning-template VALUE - Name to search by --provisioning-template-id VALUE - ID of config template -h , --help - Print help 3.80.5. template create Create a provisioning template Usage Options --audit-comment VALUE --description VALUE --file FILE - Path to a file that contains the template --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-ids LIST - REPLACE locations with given ids --location-title VALUE - Set the current location context for the request --location-titles LIST --locations LIST --locked BOOLEAN - Whether or not the template is locked for editing --name VALUE - Template name --operatingsystem-ids LIST - Array of operating system IDs to associate with the template --operatingsystems LIST --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST - REPLACE organizations with given ids. --organization-title VALUE - Set the current organization context for the request --organization-titles LIST --organizations LIST --type VALUE - Template type, e.g. snippet, script, provision -h , --help - Print help
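For example, creating a provisioning template from a local ERB file might look like the following; the name, file path, and IDs are illustrative assumptions:
hammer template create --name "Custom Kickstart" --file /tmp/custom_kickstart.erb --type provision --organization-ids 1 --location-ids 1
3.80.6.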
template delete Delete a provisioning template Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.80.7. template dump View provisioning template content Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.200. Predefined field sets FIELDS 3.80.8. template export Export a provisioning template to ERB Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --path VALUE - Path to directory where downloaded content will be saved -h , --help - Print help 3.80.9. template import Import a provisioning template Usage Options --associate ENUM - Determines when the template should associate objects based on metadata: new means only when a new template is being created, always means both for new and existing templates which are only being updated, never ignores metadata Possible value(s): new , always , never --default BOOLEAN - Makes the template default, meaning it will be automatically associated with newly created organizations and locations (false by default) --file FILE - Path to a file that contains the template content including metadata --force BOOLEAN - Use if you want to update locked templates --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-ids LIST - REPLACE locations with given ids --location-title VALUE - Set the current location context for the request --location-titles LIST --locations LIST --lock BOOLEAN - Lock imported templates (false by default) --name VALUE - Template name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST - REPLACE organizations with given ids. --organization-title VALUE - Set the current organization context for the request --organization-titles LIST --organizations LIST -h , --help - Print help 3.80.10. template info Show provisioning template details Usage Options --fields LIST - Show specified fields or predefined field sets only.
(See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.201. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Type x x Description x x Locked x x Operating systems/ x x Locations/ x x Organizations/ x x Template combinations/hostgroup name x x Template combinations/environment name x x 3.80.11. template kinds List available provisioning template kinds Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) -h , --help - Print help Table 3.202. Predefined field sets FIELDS ALL DEFAULT THIN Name x x x 3.80.12. template list List provisioning templates Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Scope by locations --location-title VALUE - Set the current location context for the request --operatingsystem VALUE - Operating system title --operatingsystem-id NUMBER - ID of operating system --order VALUE - Sort and order by a searchable field, e.g. <field> DESC --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Scope by organizations --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --search VALUE - Filter results -h , --help - Print help Table 3.203. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Type x x Search / Order fields default_template - Values: true, false hostgroup - string id - integer kind - string location - string location_id - integer locked - Values: true, false name - string operatingsystem - string organization - string organization_id - integer snippet - Values: true, false supported - Values: true, false template - text vendor - string 3.80.13. template remove-operatingsystem Disassociate an operating system Usage Options --id VALUE --name VALUE - Name to search by --operatingsystem VALUE - Operating system title --operatingsystem-id NUMBER -h , --help - Print help 3.80.14. 
template update Update a provisioning template Usage Options --audit-comment VALUE --description VALUE --file FILE - Path to a file that contains the template --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-ids LIST - REPLACE locations with given ids --location-title VALUE - Set the current location context for the request --location-titles LIST --locations LIST --locked BOOLEAN - Whether or not the template is locked for editing --name VALUE - Template name --new-name VALUE - Template name --operatingsystem-ids LIST - Array of operating system IDs to associate with the template --operatingsystems LIST --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST - REPLACE organizations with given ids. --organization-title VALUE - Set the current organization context for the request --organization-titles LIST --organizations LIST --type VALUE - Template type, e.g. snippet, script, provision -h , --help - Print help 3.81. template-input Manage template inputs Usage Options -h , --help - Print help 3.81.1. template-input create Create a template input Usage Options --advanced BOOLEAN - Input is advanced --default VALUE - Default value for user input --description VALUE - Input description --fact-name VALUE - Fact name, used when input type is Fact value --hidden-value BOOLEAN - The value contains sensitive information and should not normally be visible, useful e.g. for passwords --input-type ENUM - Input type Possible value(s): user , fact , variable --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Input name --options LIST - Selectable values for user inputs --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --required BOOLEAN - Input is required --resource-type ENUM - For values of type search, this is the resource the value searches in Possible value(s): AnsibleRole , AnsibleVariable , Architecture , Audit , AuthSource , Bookmark , ComputeProfile , ComputeResource , ConfigReport , DiscoveryRule , Domain , ExternalUsergroup , FactValue , Filter , ForemanOpenscap::ArfReport , ForemanOpenscap::OvalContent , ForemanOpenscap::OvalPolicy , ForemanOpenscap::Policy , ForemanOpenscap::ScapContent , ForemanOpenscap::TailoringFile , ForemanTasks::RecurringLogic , ForemanTasks::Task , ForemanVirtWhoConfigure::Config , Host , Hostgroup , HttpProxy , Image , InsightsHit , JobInvocation , JobTemplate , Katello::ActivationKey , Katello::AlternateContentSource , Katello::ContentCredential , Katello::ContentView , Katello::HostCollection , Katello::KTEnvironment , Katello::Product , Katello::Subscription , Katello::SyncPlan , KeyPair , Location , LookupValue , MailNotification , Medium , Model , Operatingsystem , Organization , Parameter , PersonalAccessToken , ProvisioningTemplate , Ptable , Realm , RemoteExecutionFeature , ReportTemplate , Role , Setting , SmartProxy , SshKey , Subnet , Template , TemplateInvocation , User , Usergroup , Webhook , WebhookTemplate --template-id VALUE --value-type ENUM - Value type,
defaults to plain Possible value(s): plain , search , date , resource --variable-name VALUE - Variable name, used when input type is Variable -h , --help - Print help 3.81.2. template-input delete Delete a template input Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --template-id VALUE -h , --help - Print help 3.81.3. template-input info Show template input details Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --template-id VALUE -h , --help - Print help Table 3.204. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Input type x x Fact name x x Variable name x x Puppet parameter name x x Options x x Default value x x 3.81.4. template-input list List template inputs Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --order VALUE - Sort and order by a searchable field, e.g. <field> DESC --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --search VALUE - Filter results --template-id VALUE -h , --help - Print help Table 3.205. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Input type x x Search / Order fields id - integer input_type - string name - string 3.81.5. template-input update Update a template input Usage Options --advanced BOOLEAN - Input is advanced --default VALUE - Default value for user input --description VALUE - Input description --fact-name VALUE - Fact name, used when input type is Fact value --hidden-value BOOLEAN - The value contains sensitive information and should not normally be visible, useful e.g.
for passwords --id VALUE --input-type ENUM - Input type Possible value(s): user , fact , variable --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Input name --new-name VALUE - Input name --options LIST - Selectable values for user inputs --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --required BOOLEAN - Input is required --resource-type ENUM - For values of type search, this is the resource the value searches in Possible value(s): AnsibleRole , AnsibleVariable , Architecture , Audit , AuthSource , Bookmark , ComputeProfile , ComputeResource , ConfigReport , DiscoveryRule , Domain , ExternalUsergroup , FactValue , Filter , ForemanOpenscap::ArfReport , ForemanOpenscap::OvalContent , ForemanOpenscap::OvalPolicy , ForemanOpenscap::Policy , ForemanOpenscap::ScapContent , ForemanOpenscap::TailoringFile , ForemanTasks::RecurringLogic , ForemanTasks::Task , ForemanVirtWhoConfigure::Config , Host , Hostgroup , HttpProxy , Image , InsightsHit , JobInvocation , JobTemplate , Katello::ActivationKey , Katello::AlternateContentSource , Katello::ContentCredential , Katello::ContentView , Katello::HostCollection , Katello::KTEnvironment , Katello::Product , Katello::Subscription , Katello::SyncPlan , KeyPair , Location , LookupValue , MailNotification , Medium , Model , Operatingsystem , Organization , Parameter , PersonalAccessToken , ProvisioningTemplate , Ptable , Realm , RemoteExecutionFeature , ReportTemplate , Role , Setting , SmartProxy , SshKey , Subnet , Template , TemplateInvocation , User , Usergroup , Webhook , WebhookTemplate --template-id VALUE --value-type ENUM - Value type, defaults to plain Possible value(s): plain , search , date , resource --variable-name VALUE - Variable name, used when input type is Variable -h , --help - Print help 3.82. user Manipulate users Usage Options -h , --help - Print help 3.82.1. user access-token Managing personal access tokens Usage Options -h , --help - Print help 3.82.1.1. user access-token create Create a Personal Access Token for a user Usage Options --expires-at VALUE - Expiry Date --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --user VALUE - User`s login to search by --user-id VALUE - ID of the user -h , --help - Print help 3.82.1.2. user access-token info Show a Personal Access Token for a user Usage Options --fields LIST - Show specified fields or predefined field sets only. 
(See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --user VALUE - User's login to search by --user-id VALUE - ID of the user -h , --help - Print help Table 3.206. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Active x x Expires at x x Created at x x Last used at x x 3.82.1.3. user access-token list List all Personal Access Tokens for a user Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --order VALUE - Sort and order by a searchable field, e.g. <field> DESC --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --search VALUE - Filter results --user VALUE - User's login to search by --user-id VALUE - ID of the user -h , --help - Print help Table 3.207. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Active x x Expires at x x Search / Order fields id - integer name - string user_id - integer 3.82.1.4. user access-token revoke Revoke a Personal Access Token for a user Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --user VALUE - User's login to search by --user-id VALUE - ID of the user -h , --help - Print help 3.82.2. user add-role Assign a user role Usage Options --id VALUE --login VALUE - User's login to search by --role VALUE - User role name --role-id NUMBER -h , --help - Print help
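For example, assigning a role to a user might look like the following; the login and role name are illustrative assumptions:
hammer user add-role --login jsmith --role "Viewer"
3.82.3.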
user create Create a user Usage Options --admin BOOLEAN - Is an admin account --ask-password BOOLEAN --auth-source VALUE - Name to search by --auth-source-id NUMBER --default-location VALUE - Default location name --default-location-id NUMBER --default-organization VALUE - Default organization name --default-organization-id NUMBER --description VALUE --disabled BOOLEAN --firstname VALUE --lastname VALUE --locale ENUM - User's preferred locale Possible value(s): ca , cs_CZ , de , en , en_GB , es , fr , it , ja , ka , ko , pl , pt_BR , ru , zh_CN , zh_TW --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-ids LIST - REPLACE locations with given ids --location-title VALUE - Set the current location context for the request --location-titles LIST --locations LIST --login VALUE --mail VALUE --mail-enabled BOOLEAN - Enable user's email --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST - REPLACE organizations with given ids. --organization-title VALUE - Set the current organization context for the request --organization-titles LIST --organizations LIST --password VALUE - Required unless user is in an external authentication source --role-ids LIST --roles LIST --timezone ENUM - User's timezone Possible value(s): International Date Line West , American Samoa , Midway Island , Hawaii , Alaska , Pacific Time (US & Canada) , Tijuana , Arizona , Mazatlan , Mountain Time (US & Canada) , Central America , Central Time (US & Canada) , Chihuahua , Guadalajara , Mexico City , Monterrey , Saskatchewan , Bogota , Eastern Time (US & Canada) , Indiana (East) , Lima , Quito , Atlantic Time (Canada) , Caracas , Georgetown , La Paz , Puerto Rico , Santiago , Newfoundland , Brasilia , Buenos Aires , Montevideo , Greenland , Mid-Atlantic , Azores , Cape Verde Is. , Casablanca , Dublin , Edinburgh , Lisbon , London , Monrovia , UTC , Amsterdam , Belgrade , Berlin , Bern , Bratislava , Brussels , Budapest , Copenhagen , Ljubljana , Madrid , Paris , Prague , Rome , Sarajevo , Skopje , Stockholm , Vienna , Warsaw , West Central Africa , Zagreb , Zurich , Athens , Bucharest , Cairo , Harare , Helsinki , Jerusalem , Kaliningrad , Kyiv , Pretoria , Riga , Sofia , Tallinn , Vilnius , Baghdad , Istanbul , Kuwait , Minsk , Moscow , Nairobi , Riyadh , St. Petersburg , Volgograd , Tehran , Abu Dhabi , Baku , Muscat , Samara , Tbilisi , Yerevan , Kabul , Almaty , Ekaterinburg , Islamabad , Karachi , Tashkent , Chennai , Kolkata , Mumbai , New Delhi , Sri Jayawardenepura , Kathmandu , Astana , Dhaka , Urumqi , Rangoon , Bangkok , Hanoi , Jakarta , Krasnoyarsk , Novosibirsk , Beijing , Chongqing , Hong Kong , Irkutsk , Kuala Lumpur , Perth , Singapore , Taipei , Ulaanbaatar , Osaka , Sapporo , Seoul , Tokyo , Yakutsk , Adelaide , Darwin , Brisbane , Canberra , Guam , Hobart , Melbourne , Port Moresby , Sydney , Vladivostok , Magadan , New Caledonia , Solomon Is. , Srednekolymsk , Auckland , Fiji , Kamchatka , Marshall Is. , Wellington , Chatham Is. , Nuku'alofa , Samoa , Tokelau Is. -h , --help - Print help
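For example, creating a user against the internal authentication source might look like the following; every value shown is an illustrative assumption:
hammer user create --login jsmith --firstname Jane --lastname Smith --mail jsmith@example.com --auth-source-id 1 --password changeme --organization-ids 1
3.82.4.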
user delete Delete a user Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --login VALUE - User`s login to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.82.5. user info Show a user Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --login VALUE - User`s login to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help Table 3.208. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Login x x x Name x x Email x x Admin x x Disabled x x Last login x x Authorized by x x Email enabled x x Effective admin x x Locale x x Timezone x x Description x x Default organization x x Default location x x Roles/ x x User groups/usergroup x x User groups/id x x User groups/roles/ x x Inherited user groups/usergroup x x Inherited user groups/id x x Inherited user groups/roles/ x x Locations/ x x Organizations/ x x Created at x x Updated at x x 3.82.6. user list List all users Usage Options --auth-source-ldap VALUE - Name to search by --auth-source-ldap-id VALUE - ID of LDAP authentication source --fields LIST - Show specified fields or predefined field sets only. (See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Scope by locations --location-title VALUE - Set the current location context for the request --order VALUE - Sort and order by a searchable field, e.g. <field> DESC --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Scope by organizations --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --role VALUE - User role name --role-id VALUE - ID of role --search VALUE - Filter results --user-group VALUE - Name to search by --user-group-id VALUE - ID of user group -h , --help - Print help Table 3.209. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Login x x x Name x x Email x x Admin x x Disabled x x Last login x x Authorized by x x Search / Order fields admin - Values: true, false auth_source - string auth_source_type - string description - text disabled - Values: true, false firstname - string id - integer last_login_on - datetime lastname - string location - string location_id - integer login - string mail - string organization - string organization_id - integer role - string role_id - integer usergroup - string 3.82.7. user mail-notification Managing personal mail notifications Usage Options -h , --help - Print help 3.82.7.1. 
user mail-notification add Add an email notification for a user Usage Options --interval VALUE - Mail notification interval option, e.g. Daily, Weekly or Monthly. Required for summary notification --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --mail-notification VALUE - Name to search by --mail-notification-id NUMBER --mail-query VALUE - Relevant only for audit summary notification --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --subscription VALUE - Mail notification subscription option, e.g. Subscribe, Subscribe to my hosts or Subscribe to all hosts. Required for host built and config error state --user VALUE - User`s login to search by --user-id VALUE -h , --help - Print help 3.82.7.2. user mail-notification list List all email notifications for a user Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --user VALUE - User`s login to search by --user-id VALUE -h , --help - Print help Table 3.210. Predefined field sets FIELDS ALL DEFAULT THIN Id x x Name x x x Description x x Interval x x Mail query x x 3.82.7.3. user mail-notification remove Remove an email notification for a user Usage Options --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --mail-notification VALUE - Name to search by --mail-notification-id NUMBER --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --user VALUE - User`s login to search by --user-id VALUE -h , --help - Print help 3.82.7.4. user mail-notification update Update an email notification for a user Usage Options --interval VALUE - Mail notification interval option, e.g. Daily, Weekly or Monthly. Required for summary notification --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --mail-notification VALUE - Name to search by --mail-notification-id NUMBER --mail-query VALUE - Relevant only for audit summary notification --new-name VALUE --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --subscription VALUE - Mail notification subscription option, e.g. Subscribe, Subscribe to my hosts or Subscribe to all hosts. 
Required for host built and config error state --user VALUE - User`s login to search by --user-id VALUE -h , --help - Print help 3.82.8. user remove-role Remove a user role Usage Options --id VALUE --login VALUE - User`s login to search by --role VALUE - User role name --role-id NUMBER -h , --help - Print help 3.82.9. user ssh-keys Managing User SSH Keys. Usage Options -h , --help - Print help 3.82.9.1. user ssh-keys add Add an SSH key for a user Usage Options --key VALUE - Public SSH key --key-file FILE - Path to a SSH public key --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --user VALUE - User`s login to search by --user-id VALUE - ID of the user -h , --help - Print help 3.82.9.2. user ssh-keys delete Delete an SSH key for a user Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --user VALUE - User`s login to search by --user-id VALUE - ID of the user -h , --help - Print help 3.82.9.3. user ssh-keys info Show an SSH key from a user Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --user VALUE - User`s login to search by --user-id VALUE - ID of the user -h , --help - Print help Table 3.211. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Fingerprint x x Length x x Created at x x Public key x x 3.82.9.4. user ssh-keys list List all SSH keys for a user Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --order VALUE - Sort and order by a searchable field, e.g. 
<field> DESC --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --search VALUE - Filter results --user VALUE - User`s login to search by --user-id VALUE - ID of the user -h , --help - Print help Table 3.212. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Fingerprint x x Length x x Created at x x Search / Order fields id - integer name - string user_id - integer 3.82.10. user table-preference Managing table preferences Usage Options -h , --help - Print help 3.82.10.1. user table-preference create Creates a table preference for a given table Usage Options --columns LIST - List of user selected columns --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name of the table --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --user VALUE - User`s login to search by --user-id VALUE - ID of the user -h , --help - Print help 3.82.10.2. user table-preference delete Delete a table preference for a given table Usage Options --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name of the table --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --user VALUE - User`s login to search by --user-id VALUE - ID of the user -h , --help - Print help 3.82.10.3. user table-preference info Table preference details of a given table Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name of the table --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --user VALUE - User`s login to search by --user-id VALUE - ID of the user -h , --help - Print help Table 3.213. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x Columns x x Created at x x Updated at x x 3.82.10.4. user table-preference list List of table preferences for a user Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --order VALUE - Sort and order by a searchable field, e.g. 
<field> DESC --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --page NUMBER - Page number, starting at 1 --per-page VALUE - Number of results per page to return, all to return all results --search VALUE - Filter results --user VALUE - User's login to search by --user-id VALUE - ID of the user -h , --help - Print help Table 3.214. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x Columns x x 3.82.10.5. user table-preference update Updates a table preference for a given table Usage Options --columns LIST - List of user selected columns --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name of the table --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --user VALUE - User's login to search by --user-id VALUE - ID of the user -h , --help - Print help 3.82.11. user update Update a user Usage Options --admin BOOLEAN - Is an admin account --ask-password BOOLEAN --auth-source VALUE - Name to search by --auth-source-id NUMBER --current-password VALUE - Required when a user wants to change their own password --default-location VALUE - Default location name --default-location-id NUMBER --default-organization VALUE - Default organization name --default-organization-id NUMBER --description VALUE --disabled BOOLEAN --firstname VALUE --id VALUE --lastname VALUE --locale ENUM - User's preferred locale Possible value(s): ca , cs_CZ , de , en , en_GB , es , fr , it , ja , ka , ko , pl , pt_BR , ru , zh_CN , zh_TW --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-ids LIST - REPLACE locations with given ids --location-title VALUE - Set the current location context for the request --location-titles LIST --locations LIST --login VALUE --mail VALUE --mail-enabled BOOLEAN - Enable user's email --new-login VALUE --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-ids LIST - REPLACE organizations with given ids. --organization-title VALUE - Set the current organization context for the request --organization-titles LIST --organizations LIST --password VALUE - Required unless user is in an external authentication source --role-ids LIST --roles LIST --timezone ENUM - User's timezone Possible value(s): International Date Line West , American Samoa , Midway Island , Hawaii , Alaska , Pacific Time (US & Canada) , Tijuana , Arizona , Mazatlan , Mountain Time (US & Canada) , Central America , Central Time (US & Canada) , Chihuahua , Guadalajara , Mexico City , Monterrey , Saskatchewan , Bogota , Eastern Time (US & Canada) , Indiana (East) , Lima , Quito , Atlantic Time (Canada) , Caracas , Georgetown , La Paz , Puerto Rico , Santiago , Newfoundland , Brasilia , Buenos Aires , Montevideo , Greenland , Mid-Atlantic , Azores , Cape Verde Is.
, Casablanca , Dublin , Edinburgh , Lisbon , London , Monrovia , UTC , Amsterdam , Belgrade , Berlin , Bern , Bratislava , Brussels , Budapest , Copenhagen , Ljubljana , Madrid , Paris , Prague , Rome , Sarajevo , Skopje , Stockholm , Vienna , Warsaw , West Central Africa , Zagreb , Zurich , Athens , Bucharest , Cairo , Harare , Helsinki , Jerusalem , Kaliningrad , Kyiv , Pretoria , Riga , Sofia , Tallinn , Vilnius , Baghdad , Istanbul , Kuwait , Minsk , Moscow , Nairobi , Riyadh , St. Petersburg , Volgograd , Tehran , Abu Dhabi , Baku , Muscat , Samara , Tbilisi , Yerevan , Kabul , Almaty , Ekaterinburg , Islamabad , Karachi , Tashkent , Chennai , Kolkata , Mumbai , New Delhi , Sri Jayawardenepura , Kathmandu , Astana , Dhaka , Urumqi , Rangoon , Bangkok , Hanoi , Jakarta , Krasnoyarsk , Novosibirsk , Beijing , Chongqing , Hong Kong , Irkutsk , Kuala Lumpur , Perth , Singapore , Taipei , Ulaanbaatar , Osaka , Sapporo , Seoul , Tokyo , Yakutsk , Adelaide , Darwin , Brisbane , Canberra , Guam , Hobart , Melbourne , Port Moresby , Sydney , Vladivostok , Magadan , New Caledonia , Solomon Is. , Srednekolymsk , Auckland , Fiji , Kamchatka , Marshall Is. , Wellington , Chatham Is. , Nuku'alofa , Samoa , Tokelau Is. -h , --help - Print help 3.83. user-group Manage user groups Usage Options -h , --help - Print help 3.83.1. user-group add-role Assign a user role Usage Options --id VALUE --name VALUE - Name to search by --role VALUE - User role name --role-id NUMBER -h , --help - Print help 3.83.2. user-group add-user Associate a user Usage Options --id VALUE --name VALUE - Name to search by --user VALUE - User's login to search by --user-id NUMBER -h , --help - Print help 3.83.3. user-group add-user-group Associate a user group Usage Options --id VALUE --name VALUE - Name to search by --user-group VALUE - Name to search by --user-group-id NUMBER -h , --help - Print help 3.83.4. user-group create Create a user group Usage Options --admin BOOLEAN - Is an admin user group, can be modified by admins only --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --role-ids LIST --roles LIST --user-group-ids LIST --user-groups LIST --user-ids LIST --users LIST -h , --help - Print help 3.83.5. user-group delete Delete a user group Usage Options --id VALUE --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request -h , --help - Print help 3.83.6. user-group external View and manage user group's external user groups Usage Options -h , --help - Print help
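The subsections that follow cover managing external (for example, LDAP-backed) user groups. As a preview, linking a user group to an external group might look like the following; the group names and auth source ID are illustrative assumptions:
hammer user-group external create --user-group "admins" --auth-source-id 1 --name "satellite-admins"
3.83.6.1.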
user-group external create Create an external user group linked to a user group Usage Options --auth-source VALUE - Name to search by --auth-source-id NUMBER - ID of linked authentication source --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - External user group name --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --user-group VALUE - Name to search by --user-group-id VALUE - ID or name of user group -h , --help - Print help 3.83.6.2. user-group external delete Delete an external user group Usage Options --id VALUE - ID or name of external user group --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --user-group VALUE - Name to search by --user-group-id VALUE - ID or name of user group -h , --help - Print help 3.83.6.3. user-group external info Show an external user group for user group Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --id VALUE - ID or name of external user group --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --name VALUE - Name to search by --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --user-group VALUE - Name to search by --user-group-id VALUE - ID or name of user group -h , --help - Print help Table 3.215. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Auth source x x 3.83.6.4. user-group external list List all external user groups for user group Usage Options --fields LIST - Show specified fields or predefined field sets only. (See below) --location VALUE - Set the current location context for the request --location-id NUMBER - Set the current location context for the request --location-title VALUE - Set the current location context for the request --organization VALUE - Set the current organization context for the request --organization-id NUMBER - Set the current organization context for the request --organization-title VALUE - Set the current organization context for the request --user-group VALUE - Name to search by --user-group-id VALUE - ID or name of user group -h , --help - Print help Table 3.216. Predefined field sets FIELDS ALL DEFAULT THIN Id x x x Name x x x Auth source x x 3.83.6.5. user-group external refresh Refresh external user group Usage Options --fields LIST - Show specified fields or predefined field sets only.
3.83.6.5. user-group external refresh
Refresh external user group
Usage Options
--fields LIST - Show specified fields or predefined field sets only. (See below)
--id VALUE - ID or name of external user group
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--name VALUE - Name to search by
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
--user-group VALUE - Name to search by
--user-group-id VALUE - ID or name of user group
-h, --help - Print help

Table 3.217. Predefined field sets
FIELDS | ALL | DEFAULT | THIN
Name | x | x | x
Auth source | x | x |

3.83.6.6. user-group external update
Update external user group
Usage Options
--auth-source VALUE - Name to search by
--auth-source-id NUMBER - ID of linked authentication source
--id VALUE - ID or name of external user group
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--name VALUE - External user group name
--new-name VALUE - External user group name
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
--user-group VALUE - Name to search by
--user-group-id VALUE - ID or name of user group
-h, --help - Print help

3.83.7. user-group info
Show a user group
Usage Options
--fields LIST - Show specified fields or predefined field sets only. (See below)
--id VALUE
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--name VALUE - Name to search by
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
-h, --help - Print help

Table 3.218. Predefined field sets
FIELDS | ALL | DEFAULT | THIN
Id | x | x | x
Name | x | x | x
Admin | x | x |
Users/ | x | x |
User groups/usergroup | x | x |
User groups/id | x | x |
User groups/roles/ | x | x |
Inherited user groups/usergroup | x | x |
Inherited user groups/id | x | x |
Inherited user groups/roles/ | x | x |
External user groups/ | x | x |
Roles/ | x | x |
Created at | x | x |
Updated at | x | x |

3.83.8. user-group list
List all user groups
Usage Options
--fields LIST - Show specified fields or predefined field sets only. (See below)
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--order VALUE - Sort and order by a searchable field, e.g. <field> DESC
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
--page NUMBER - Page number, starting at 1
--per-page VALUE - Number of results per page to return, all to return all results
--search VALUE - Filter results
-h, --help - Print help

Table 3.219. Predefined field sets
FIELDS | ALL | DEFAULT | THIN
Id | x | x | x
Name | x | x | x
Admin | x | x |

Search / Order fields:
id - integer
name - string
role - string
role_id - integer
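A search filter can be combined with ordering when listing user groups; the search expression below is illustrative:

hammer user-group list --search "name ~ ops" --order "name ASC"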
3.83.9. user-group remove-role
Remove a user role
Usage Options
--id VALUE
--name VALUE - Name to search by
--role VALUE - User role name
--role-id NUMBER
-h, --help - Print help

3.83.10. user-group remove-user
Disassociate a user
Usage Options
--id VALUE
--name VALUE - Name to search by
--user VALUE - User's login to search by
--user-id NUMBER
-h, --help - Print help

3.83.11. user-group remove-user-group
Disassociate a user group
Usage Options
--id VALUE
--name VALUE - Name to search by
--user-group VALUE - Name to search by
--user-group-id NUMBER
-h, --help - Print help

3.83.12. user-group update
Update a user group
Usage Options
--admin BOOLEAN - Is an admin user group, can be modified by admins only
--id VALUE
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--name VALUE
--new-name VALUE
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
--role-ids LIST
--roles LIST
--user-group-ids LIST
--user-groups LIST
--user-ids LIST
--users LIST
-h, --help - Print help

3.84. virt-who-config
Manage Virt Who configurations
Usage Options
-h, --help - Print help

3.84.1. virt-who-config create
Create a virt-who configuration
Usage Options
--ahv-internal-debug BOOLEAN - The Enable debugging output option is also required to enable AHV internal debug; extra AHV debug information is provided when both options are enabled
--blacklist VALUE - Hypervisor blacklist, applicable only when filtering mode is set to 2 (blacklist). Wildcards and regular expressions are supported; multiple records must be separated by commas.
--debug BOOLEAN - Enable debugging output
--exclude-host-parents VALUE - Applicable only for the esx provider type. Hosts whose parent (usually ComputeResource) name is specified in the comma-separated list in this option will NOT be reported. Wildcards and regular expressions are supported; multiple records must be separated by commas. Put the value into double quotes if it contains special characters such as a comma. All new line characters will be removed in the resulting configuration file; white space is removed from the beginning and end.
--filter-host-parents VALUE - Applicable only for the esx provider type. Only hosts whose parent (usually ComputeResource) name is specified in the comma-separated list in this option will be reported. Wildcards and regular expressions are supported; multiple records must be separated by commas. Put the value into double quotes if it contains special characters such as a comma. All new line characters will be removed in the resulting configuration file; white space is removed from the beginning and end.
--filtering-mode ENUM - Hypervisor filtering mode Possible value(s): none, whitelist, blacklist
--http-proxy VALUE - Name to search by
--http-proxy-id NUMBER - HTTP Proxy that should be used for communication between the server on which virt-who is running and the hypervisors and virtualization managers.
--hypervisor-id ENUM - Specifies how the hypervisor will be identified. Possible value(s): hostname, uuid, hwuuid
--hypervisor-password VALUE - Hypervisor password, required for all hypervisor types except for libvirt/kubevirt.
--hypervisor-server VALUE - Fully qualified host name or IP address of the hypervisor
--hypervisor-type ENUM - Hypervisor type Possible value(s): esx, hyperv, libvirt, kubevirt, ahv
--hypervisor-username VALUE - Account name by which virt-who is to connect to the hypervisor.
--interval ENUM - Configuration interval in minutes Possible value(s): 60, 120, 240, 480, 720, 1440, 2880, 4320
--kubeconfig-path VALUE - Configuration file containing details about how to connect to the cluster and authentication details.
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--name VALUE - Configuration name
--no-proxy VALUE - Ignore Proxy. A comma-separated list of hostnames, domains, or IP addresses to ignore Capsule settings for. Optionally, this may be set to * to bypass proxy settings for all hostnames, domains, and IP addresses.
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
--prism-flavor ENUM - Select the Prism flavor you are connecting to Possible value(s): central, element
--satellite-url VALUE - Satellite server FQDN
--whitelist VALUE - Hypervisor whitelist, applicable only when filtering mode is set to 1 (whitelist). Wildcards and regular expressions are supported; multiple records must be separated by commas.
-h, --help - Print help

3.84.2. virt-who-config delete
Delete a virt-who configuration
Usage Options
--id NUMBER - Configuration numeric identifier
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--name VALUE - Name to search by
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
-h, --help - Print help

3.84.3. virt-who-config deploy
Download and execute script for the specified virt-who configuration
Usage Options
--id VALUE
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--name VALUE - Name to search by
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
-h, --help - Print help
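A typical workflow creates a configuration and then deploys it on the host that will run virt-who. The following is a minimal sketch for the esx hypervisor type; all names, credentials, and URLs are hypothetical:

hammer virt-who-config create --name "vcenter-1" --organization "Default Organization" --interval 120 --filtering-mode none --hypervisor-type esx --hypervisor-server "vcenter.example.com" --hypervisor-username "virt-who" --hypervisor-password "changeme" --hypervisor-id hostname --satellite-url "satellite.example.com"
hammer virt-who-config deploy --name "vcenter-1"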
3.84.4. virt-who-config fetch
Renders a deploy script for the specified virt-who configuration
Usage Options
--id VALUE
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--name VALUE - Name to search by
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
-h, --help - Print help
-o, --output VALUE - File where the script will be written

3.84.5. virt-who-config info
Show a virt-who configuration
Usage Options
--fields LIST - Show specified fields or predefined field sets only. (See below)
--id VALUE
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--name VALUE - Name to search by
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
-h, --help - Print help

Table 3.220. Predefined field sets
FIELDS | ALL | DEFAULT
General information/id | x | x
General information/name | x | x
General information/hypervisor type | x | x
General information/hypervisor server | x | x
General information/hypervisor username | x | x
General information/configuration file | x | x
General information/ahv prism flavor | x | x
General information/ahv update frequency | x | x
General information/enable ahv debug | x | x
General information/status | x | x
Schedule/interval | x | x
Schedule/last report at | x | x
Connection/satellite server | x | x
Connection/hypervisor id | x | x
Connection/filtering | x | x
Connection/excluded hosts | x | x
Connection/filtered hosts | x | x
Connection/filter host parents | x | x
Connection/exclude host parents | x | x
Connection/debug mode | x | x
Connection/ignore proxy | x | x
Http proxy/http proxy id | x | x
Http proxy/http proxy name | x | x
Http proxy/http proxy url | x | x
Locations/ | x | x
Organizations/ | x | x

3.84.6. virt-who-config list
List of virt-who configurations
Usage Options
--fields LIST - Show specified fields or predefined field sets only. (See below)
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--order VALUE - Sort and order by a searchable field, e.g. <field> DESC
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
--page NUMBER - Page number, starting at 1
--per-page VALUE - Number of results per page to return, all to return all results
--search VALUE - Filter results
-h, --help - Print help

Table 3.221. Predefined field sets
FIELDS | ALL | DEFAULT | THIN
Id | x | x | x
Name | x | x | x
Interval | x | x |
Status | x | x |
Last report at | x | x |
3.84.7. virt-who-config update
Update a virt-who configuration
Usage Options
--ahv-internal-debug BOOLEAN - The Enable debugging output option is also required to enable AHV internal debug; extra AHV debug information is provided when both options are enabled
--blacklist VALUE - Hypervisor blacklist, applicable only when filtering mode is set to 2 (blacklist). Wildcards and regular expressions are supported; multiple records must be separated by commas.
--debug BOOLEAN - Enable debugging output
--exclude-host-parents VALUE - Applicable only for the esx provider type. Hosts whose parent (usually ComputeResource) name is specified in the comma-separated list in this option will NOT be reported. Wildcards and regular expressions are supported; multiple records must be separated by commas. Put the value into double quotes if it contains special characters such as a comma. All new line characters will be removed in the resulting configuration file; white space is removed from the beginning and end.
--filter-host-parents VALUE - Applicable only for the esx provider type. Only hosts whose parent (usually ComputeResource) name is specified in the comma-separated list in this option will be reported. Wildcards and regular expressions are supported; multiple records must be separated by commas. Put the value into double quotes if it contains special characters such as a comma. All new line characters will be removed in the resulting configuration file; white space is removed from the beginning and end.
--filtering-mode ENUM - Hypervisor filtering mode Possible value(s): none, whitelist, blacklist
--http-proxy VALUE - Name to search by
--http-proxy-id NUMBER - HTTP Proxy that should be used for communication between the server on which virt-who is running and the hypervisors and virtualization managers.
--hypervisor-id ENUM - Specifies how the hypervisor will be identified. Possible value(s): hostname, uuid, hwuuid
--hypervisor-password VALUE - Hypervisor password, required for all hypervisor types except for libvirt/kubevirt.
--hypervisor-server VALUE - Fully qualified host name or IP address of the hypervisor
--hypervisor-type ENUM - Hypervisor type Possible value(s): esx, hyperv, libvirt, kubevirt, ahv
--hypervisor-username VALUE - Account name by which virt-who is to connect to the hypervisor.
--id NUMBER - Configuration numeric identifier
--interval ENUM - Configuration interval in minutes Possible value(s): 60, 120, 240, 480, 720, 1440, 2880, 4320
--kubeconfig-path VALUE - Configuration file containing details about how to connect to the cluster and authentication details.
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--name VALUE - Configuration name
--new-name VALUE - Configuration name
--no-proxy VALUE - Ignore Proxy. A comma-separated list of hostnames, domains, or IP addresses to ignore Capsule settings for. Optionally, this may be set to * to bypass proxy settings for all hostnames, domains, and IP addresses.
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
--prism-flavor ENUM - Select the Prism flavor you are connecting to Possible value(s): central, element
--satellite-url VALUE - Satellite server FQDN
--whitelist VALUE - Hypervisor whitelist, applicable only when filtering mode is set to 1 (whitelist). Wildcards and regular expressions are supported; multiple records must be separated by commas.
-h, --help - Print help
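For example, to change how often an existing configuration reports, only an identifier and the new value are needed; the configuration name is a placeholder:

hammer virt-who-config update --name "vcenter-1" --interval 240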
3.85. webhook
Manage webhooks
Usage Options
-h, --help - Print help

3.85.1. webhook create
Create a Webhook
Usage Options
--enabled BOOLEAN
--event ENUM - Possible value(s): actions.katello.capsule_content.sync_failed, actions.katello.capsule_content.sync_succeeded, actions.katello.content_view.promote_failed, actions.katello.content_view.promote_succeeded, actions.katello.content_view.publish_failed, actions.katello.content_view.publish_succeeded, actions.katello.repository.sync_failed, actions.katello.repository.sync_succeeded, actions.remote_execution.run_host_job_ansible_configure_cloud_connector_succeeded, actions.remote_execution.run_host_job_ansible_enable_web_console_succeeded, actions.remote_execution.run_host_job_ansible_run_capsule_upgrade_succeeded, actions.remote_execution.run_host_job_ansible_run_host_succeeded, actions.remote_execution.run_host_job_ansible_run_insights_plan_succeeded, actions.remote_execution.run_host_job_ansible_run_playbook_succeeded, actions.remote_execution.run_host_job_failed, actions.remote_execution.run_host_job_foreman_openscap_run_oval_scans_succeeded, actions.remote_execution.run_host_job_foreman_openscap_run_scans_succeeded, actions.remote_execution.run_host_job_katello_errata_install_by_search_succeeded, actions.remote_execution.run_host_job_katello_errata_install_succeeded, actions.remote_execution.run_host_job_katello_group_install_succeeded, actions.remote_execution.run_host_job_katello_group_remove_succeeded, actions.remote_execution.run_host_job_katello_group_update_succeeded, actions.remote_execution.run_host_job_katello_host_tracer_resolve_succeeded, actions.remote_execution.run_host_job_katello_module_stream_action_succeeded, actions.remote_execution.run_host_job_katello_package_install_by_search_succeeded, actions.remote_execution.run_host_job_katello_package_install_succeeded, actions.remote_execution.run_host_job_katello_package_remove_succeeded, actions.remote_execution.run_host_job_katello_package_update_succeeded, actions.remote_execution.run_host_job_katello_packages_remove_by_search_succeeded, actions.remote_execution.run_host_job_katello_packages_update_by_search_succeeded, actions.remote_execution.run_host_job_katello_service_restart_succeeded, actions.remote_execution.run_host_job_leapp_preupgrade_succeeded, actions.remote_execution.run_host_job_leapp_remediation_plan_succeeded, actions.remote_execution.run_host_job_leapp_upgrade_succeeded, actions.remote_execution.run_host_job_puppet_run_host_succeeded, actions.remote_execution.run_host_job_rh_cloud_connector_run_playbook_succeeded, actions.remote_execution.run_host_job_rh_cloud_remediate_hosts_succeeded, actions.remote_execution.run_host_job_run_script_succeeded, actions.remote_execution.run_host_job_succeeded, actions.remote_execution.run_hosts_job_failed, actions.remote_execution.run_hosts_job_running, actions.remote_execution.run_hosts_job_succeeded, build_entered, build_exited, content_view_created, content_view_destroyed, content_view_updated, domain_created, domain_destroyed, domain_updated, host_created, host_destroyed, host_facts_updated, host_updated, hostgroup_created, hostgroup_destroyed, hostgroup_updated, model_created, model_destroyed, model_updated, status_changed, subnet_created, subnet_destroyed, subnet_updated, user_created, user_destroyed, user_updated
--http-content-type VALUE
--http-headers KEY_VALUE_LIST
--http-method ENUM - Possible value(s): POST, GET, PUT, DELETE, PATCH
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--name VALUE
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
--password VALUE
--proxy-authorization BOOLEAN - Authorize with Satellite client certificate and validate capsule CA from Settings
--ssl-ca-certs FILE - File containing X509 Certification Authorities concatenated in PEM format
--target-url VALUE
--user VALUE
--verify-ssl BOOLEAN
--webhook-template VALUE - Name to search by
--webhook-template-id VALUE
-h, --help - Print help

3.85.2. webhook delete
Delete a Webhook
Usage Options
--id VALUE
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--name VALUE - Name to search by
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
-h, --help - Print help

3.85.3. webhook info
Show Webhook details
Usage Options
--fields LIST - Show specified fields or predefined field sets only. (See below)
--id VALUE
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--name VALUE - Name to search by
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
-h, --help - Print help

Table 3.222. Predefined field sets
FIELDS ADDITIONAL ALL DEFAULT THIN
Id x x x
Name x x x
Target url x x
Enabled x x
Event x x
Http method x x
Http content type x x
Webhook template x x
User x x
Verify ssl x x
Proxy authorization x x
X509 certification authorities x x
Http headers/ x x
Created at x x
Updated at x x

3.85.4. webhook list
List Webhooks
Usage Options
--fields LIST - Show specified fields or predefined field sets only. (See below)
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--order VALUE - Sort and order by a searchable field, e.g. <field> DESC
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
--page NUMBER - Page number, starting at 1
--per-page VALUE - Number of results per page to return, all to return all results
--search VALUE - Filter results
-h, --help - Print help

Table 3.223. Predefined field sets
FIELDS | ALL | DEFAULT | THIN
Id | x | x | x
Name | x | x | x
Target url | x | x |
Enabled | x | x |

Search / Order fields:
enabled - Values: true, false
name - string
target_url - string
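As an illustration, a webhook that posts host creation events to a hypothetical endpoint could be created as follows; the name, target URL, and template name are placeholders:

hammer webhook create --name "host-created-hook" --event host_created --http-method POST --target-url "https://automation.example.com/hooks" --webhook-template "Default Payload" --enabled true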
3.85.5. webhook update
Update a Webhook
Usage Options
--enabled BOOLEAN
--event ENUM - Possible value(s): actions.katello.capsule_content.sync_failed, actions.katello.capsule_content.sync_succeeded, actions.katello.content_view.promote_failed, actions.katello.content_view.promote_succeeded, actions.katello.content_view.publish_failed, actions.katello.content_view.publish_succeeded, actions.katello.repository.sync_failed, actions.katello.repository.sync_succeeded, actions.remote_execution.run_host_job_ansible_configure_cloud_connector_succeeded, actions.remote_execution.run_host_job_ansible_enable_web_console_succeeded, actions.remote_execution.run_host_job_ansible_run_capsule_upgrade_succeeded, actions.remote_execution.run_host_job_ansible_run_host_succeeded, actions.remote_execution.run_host_job_ansible_run_insights_plan_succeeded, actions.remote_execution.run_host_job_ansible_run_playbook_succeeded, actions.remote_execution.run_host_job_failed, actions.remote_execution.run_host_job_foreman_openscap_run_oval_scans_succeeded, actions.remote_execution.run_host_job_foreman_openscap_run_scans_succeeded, actions.remote_execution.run_host_job_katello_errata_install_by_search_succeeded, actions.remote_execution.run_host_job_katello_errata_install_succeeded, actions.remote_execution.run_host_job_katello_group_install_succeeded, actions.remote_execution.run_host_job_katello_group_remove_succeeded, actions.remote_execution.run_host_job_katello_group_update_succeeded, actions.remote_execution.run_host_job_katello_host_tracer_resolve_succeeded, actions.remote_execution.run_host_job_katello_module_stream_action_succeeded, actions.remote_execution.run_host_job_katello_package_install_by_search_succeeded, actions.remote_execution.run_host_job_katello_package_install_succeeded, actions.remote_execution.run_host_job_katello_package_remove_succeeded, actions.remote_execution.run_host_job_katello_package_update_succeeded, actions.remote_execution.run_host_job_katello_packages_remove_by_search_succeeded, actions.remote_execution.run_host_job_katello_packages_update_by_search_succeeded, actions.remote_execution.run_host_job_katello_service_restart_succeeded, actions.remote_execution.run_host_job_leapp_preupgrade_succeeded, actions.remote_execution.run_host_job_leapp_remediation_plan_succeeded, actions.remote_execution.run_host_job_leapp_upgrade_succeeded, actions.remote_execution.run_host_job_puppet_run_host_succeeded, actions.remote_execution.run_host_job_rh_cloud_connector_run_playbook_succeeded, actions.remote_execution.run_host_job_rh_cloud_remediate_hosts_succeeded, actions.remote_execution.run_host_job_run_script_succeeded, actions.remote_execution.run_host_job_succeeded, actions.remote_execution.run_hosts_job_failed, actions.remote_execution.run_hosts_job_running, actions.remote_execution.run_hosts_job_succeeded, build_entered, build_exited, content_view_created, content_view_destroyed, content_view_updated, domain_created, domain_destroyed, domain_updated, host_created, host_destroyed, host_facts_updated, host_updated, hostgroup_created, hostgroup_destroyed, hostgroup_updated, model_created, model_destroyed, model_updated, status_changed, subnet_created, subnet_destroyed, subnet_updated, user_created, user_destroyed, user_updated
--http-content-type VALUE
--http-headers KEY_VALUE_LIST
--http-method ENUM - Possible value(s): POST, GET, PUT, DELETE, PATCH
--id VALUE
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--name VALUE
--new-name VALUE
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
--password VALUE
--proxy-authorization BOOLEAN - Authorize with Satellite client certificate and validate capsule CA from Settings
--ssl-ca-certs FILE - File containing X509 Certification Authorities concatenated in PEM format
--target-url VALUE
--user VALUE
--verify-ssl BOOLEAN
--webhook-template VALUE - Name to search by
--webhook-template-id VALUE
-h, --help - Print help

3.86. webhook-template
Manipulate webhook templates
Usage Options
-h, --help - Print help

3.86.1. webhook-template clone
Clone a template
Usage Options
--id VALUE
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--name VALUE - Template name
--new-name VALUE
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
-h, --help - Print help

3.86.2. webhook-template create
Create a webhook template
Usage Options
--audit-comment VALUE
--default BOOLEAN - Whether or not the template is added automatically to new organizations and locations
--description VALUE
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-ids LIST - REPLACE locations with given ids
--location-title VALUE - Set the current location context for the request
--location-titles LIST
--locations LIST
--locked BOOLEAN - Whether or not the template is locked for editing
--name VALUE
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-ids LIST - REPLACE organizations with given ids.
--organization-title VALUE - Set the current organization context for the request
--organization-titles LIST
--organizations LIST
--snippet BOOLEAN
--template VALUE
-h, --help - Print help

3.86.3. webhook-template delete
Delete a webhook template
Usage Options
--id VALUE
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--name VALUE - Name to search by
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
-h, --help - Print help
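For example, an existing template can be copied before editing so that the original stays intact; both template names are hypothetical:

hammer webhook-template clone --name "Default Payload" --new-name "Default Payload Copy"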
3.86.4. webhook-template dump
View webhook template content
Usage Options
--fields LIST - Show specified fields or predefined field sets only. (See below)
--id VALUE
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--name VALUE - Name to search by
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
-h, --help - Print help

Table 3.224. Predefined field sets
FIELDS

3.86.5. webhook-template export
Export a webhook template to ERB
Usage Options
--id VALUE
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--name VALUE - Name to search by
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
--path VALUE - Path to directory where downloaded content will be saved
-h, --help - Print help

3.86.6. webhook-template import
Import a webhook template
Usage Options
--associate ENUM - Determines when the template should associate objects based on metadata: new means only when a new template is being created, always means both for new and existing templates (an existing template is only updated), never ignores metadata Possible value(s): new, always, never
--default BOOLEAN - Makes the template default, meaning it will be automatically associated with newly created organizations and locations (false by default)
--file FILE - Path to a file that contains the webhook template content including metadata
--force BOOLEAN - Use if you want to update locked templates
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-ids LIST - REPLACE locations with given ids
--location-title VALUE - Set the current location context for the request
--location-titles LIST
--locations LIST
--lock BOOLEAN - Lock imported templates (false by default)
--name VALUE - Template name
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-ids LIST - REPLACE organizations with given ids.
--organization-title VALUE - Set the current organization context for the request
--organization-titles LIST
--organizations LIST
-h, --help - Print help
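A template can be exported to a directory as an ERB file, edited, and imported back. The template name, directory, and exported file name below are hypothetical:

hammer webhook-template export --name "Default Payload Copy" --path /tmp/templates
hammer webhook-template import --file /tmp/templates/default_payload_copy.erb --associate new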
3.86.7. webhook-template info
Show webhook template details
Usage Options
--fields LIST - Show specified fields or predefined field sets only. (See below)
--id VALUE
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--name VALUE - Name to search by
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
-h, --help - Print help

Table 3.225. Predefined field sets
FIELDS | ALL | DEFAULT | THIN
Id | x | x | x
Name | x | x | x
Description | x | x |
Locked | x | x |
Default | x | x |
Created at | x | x |
Updated at | x | x |
Locations/ | x | x |
Organizations/ | x | x |
Template inputs/id | x | x |
Template inputs/name | x | x |
Template inputs/description | x | x |
Template inputs/required | x | x |
Template inputs/options | x | x |

3.86.8. webhook-template list
List webhook templates
Usage Options
--fields LIST - Show specified fields or predefined field sets only. (See below)
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-title VALUE - Set the current location context for the request
--order VALUE - Sort and order by a searchable field, e.g. <field> DESC
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-title VALUE - Set the current organization context for the request
--page NUMBER - Page number, starting at 1
--per-page VALUE - Number of results per page to return, all to return all results
--search VALUE - Filter results
-h, --help - Print help

Table 3.226. Predefined field sets
FIELDS | ALL | DEFAULT | THIN
Id | x | x | x
Name | x | x | x

Search / Order fields:
default - Values: true, false
location - string
location_id - integer
locked - Values: true, false
name - string
organization - string
organization_id - integer
snippet - Values: true, false
template - text

3.86.9. webhook-template update
Update a webhook template
Usage Options
--audit-comment VALUE
--default BOOLEAN - Whether or not the template is added automatically to new organizations and locations
--description VALUE
--id VALUE
--location VALUE - Set the current location context for the request
--location-id NUMBER - Set the current location context for the request
--location-ids LIST - REPLACE locations with given ids
--location-title VALUE - Set the current location context for the request
--location-titles LIST
--locations LIST
--locked BOOLEAN - Whether or not the template is locked for editing
--name VALUE
--new-name VALUE
--organization VALUE - Set the current organization context for the request
--organization-id NUMBER - Set the current organization context for the request
--organization-ids LIST - REPLACE organizations with given ids.
--organization-title VALUE - Set the current organization context for the request
--organization-titles LIST
--organizations LIST
--snippet BOOLEAN
--template VALUE
-h, --help - Print help

3.87. Option details
Hammer options accept the following option types and values:
BOOLEAN - One of true/false, yes/no, 1/0
DATETIME - Date and time in YYYY-MM-DD HH:MM:SS or ISO 8601 format
ENUM - Possible values are described in the option's description
FILE - Path to a file
KEY_VALUE_LIST - Comma-separated list of key=value pairs. JSON is an acceptable and preferred format for such parameters
LIST - Comma-separated list of values. Values containing a comma should be quoted or escaped with a backslash. JSON is an acceptable and preferred format for such parameters
MULTIENUM - Any combination of possible values described in the option's description
NUMBER - Numeric value. Integer
SCHEMA - Comma-separated list of values defined by a schema. JSON is an acceptable and preferred format for such parameters
VALUE - Value described in the option's description. Mostly a simple string
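For instance, a KEY_VALUE_LIST option such as --http-headers can be passed either as comma-separated key=value pairs or as JSON; the webhook name and header values below are illustrative:

hammer webhook update --name "host-created-hook" --http-headers "X-Token=secret,X-Env=prod"
hammer webhook update --name "host-created-hook" --http-headers '{"X-Token": "secret", "X-Env": "prod"}'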
"hammer [OPTIONS] SUBCOMMAND [ARG]",
"hammer activation-key [OPTIONS] SUBCOMMAND [ARG]",
"hammer activation-key add-host-collection [OPTIONS]",
"hammer activation-key add-subscription [OPTIONS]",
"hammer activation-key content-override [OPTIONS]",
"hammer activation-key copy [OPTIONS]",
"hammer activation-key create [OPTIONS]",
"hammer activation-key <delete|destroy> [OPTIONS]",
"hammer activation-key host-collections [OPTIONS]",
"hammer activation-key <info|show> [OPTIONS]",
"hammer activation-key <list|index> [OPTIONS]",
"hammer activation-key product-content [OPTIONS]",
"hammer activation-key remove-host-collection [OPTIONS]",
"hammer activation-key remove-subscription [OPTIONS]",
"hammer activation-key subscriptions [OPTIONS]",
"hammer activation-key update [OPTIONS]",
"hammer admin [OPTIONS] SUBCOMMAND [ARG]",
"hammer admin logging [OPTIONS]",
"hammer alternate-content-source [OPTIONS] SUBCOMMAND [ARG]",
"hammer alternate-content-source bulk [OPTIONS] SUBCOMMAND [ARG]",
"hammer alternate-content-source bulk destroy [OPTIONS]",
"hammer alternate-content-source bulk refresh [OPTIONS]",
"hammer alternate-content-source bulk refresh-all [OPTIONS]",
"hammer alternate-content-source create [OPTIONS]",
"hammer alternate-content-source <delete|destroy> [OPTIONS]",
"hammer alternate-content-source <info|show> [OPTIONS]",
"hammer alternate-content-source <list|index> [OPTIONS]",
"hammer alternate-content-source refresh [OPTIONS]",
"hammer alternate-content-source update [OPTIONS]",
"hammer ansible [OPTIONS] SUBCOMMAND [ARG]",
"hammer ansible inventory [OPTIONS] SUBCOMMAND [ARG]",
"hammer ansible inventory hostgroups [OPTIONS]",
"hammer ansible inventory hosts [OPTIONS]",
"hammer ansible inventory schedule [OPTIONS]",
"hammer ansible roles [OPTIONS] SUBCOMMAND [ARG]",
"hammer ansible roles <delete|destroy> [OPTIONS]",
"hammer ansible roles fetch [OPTIONS]",
"hammer ansible roles import [OPTIONS]",
"hammer ansible roles <info|show> [OPTIONS]",
"hammer ansible roles <list|index> [OPTIONS]",
"hammer ansible roles obsolete [OPTIONS]",
"hammer ansible roles play-hostgroups [OPTIONS]",
"hammer ansible roles play-hosts [OPTIONS]",
"hammer ansible roles sync [OPTIONS]",
"hammer ansible variables [OPTIONS] SUBCOMMAND [ARG]",
"hammer ansible variables add-matcher [OPTIONS]",
"hammer ansible variables create [OPTIONS]",
"hammer ansible variables <delete|destroy> [OPTIONS]",
"hammer ansible variables import [OPTIONS]",
"hammer ansible variables <info|show> [OPTIONS]",
"hammer ansible variables <list|index> [OPTIONS]",
"hammer ansible variables obsolete [OPTIONS]",
"hammer ansible variables remove-matcher [OPTIONS]",
"hammer ansible variables update [OPTIONS]",
"hammer architecture [OPTIONS] SUBCOMMAND [ARG]",
"hammer architecture add-operatingsystem [OPTIONS]",
"hammer architecture create [OPTIONS]",
"hammer architecture <delete|destroy> [OPTIONS]",
"hammer architecture <info|show> [OPTIONS]",
"hammer architecture <list|index> [OPTIONS]",
"hammer architecture remove-operatingsystem [OPTIONS]",
"hammer architecture update [OPTIONS]",
"hammer arf-report [OPTIONS] SUBCOMMAND [ARG]",
"hammer arf-report <delete|destroy> [OPTIONS]",
"hammer arf-report download [OPTIONS]",
"hammer arf-report download-html [OPTIONS]",
"hammer arf-report <info|show> [OPTIONS]",
"hammer arf-report <list|index> [OPTIONS]",
"hammer audit [OPTIONS] SUBCOMMAND [ARG]",
"hammer audit <info|show> [OPTIONS]",
"hammer audit <list|index> [OPTIONS]",
"hammer auth [OPTIONS] SUBCOMMAND [ARG]",
"hammer auth login [OPTIONS] SUBCOMMAND [ARG]",
"hammer auth login basic [OPTIONS]",
"hammer auth login basic-external [OPTIONS]",
"hammer auth login negotiate [OPTIONS]",
"hammer auth login oauth [OPTIONS]",
"hammer auth logout [OPTIONS]",
"hammer auth status [OPTIONS]",
"hammer auth-source [OPTIONS] SUBCOMMAND [ARG]",
"hammer auth-source external [OPTIONS] SUBCOMMAND [ARG]",
"hammer auth-source external <info|show> [OPTIONS]",
"hammer auth-source external <list|index> [OPTIONS]",
"hammer auth-source external update [OPTIONS]",
"hammer auth-source ldap [OPTIONS] SUBCOMMAND [ARG]",
"hammer auth-source ldap create [OPTIONS]",
"hammer auth-source ldap <delete|destroy> [OPTIONS]",
"hammer auth-source ldap <info|show> [OPTIONS]",
"hammer auth-source ldap <list|index> [OPTIONS]",
"hammer auth-source ldap update [OPTIONS]",
"hammer auth-source <list|index> [OPTIONS]",
"hammer bookmark [OPTIONS] SUBCOMMAND [ARG]",
"hammer bookmark create [OPTIONS]",
"hammer bookmark <delete|destroy> [OPTIONS]",
"hammer bookmark <info|show> [OPTIONS]",
"hammer bookmark <list|index> [OPTIONS]",
"hammer bookmark update [OPTIONS]",
"hammer bootdisk [OPTIONS] SUBCOMMAND [ARG]",
"hammer bootdisk generic [OPTIONS]",
"hammer bootdisk host [OPTIONS]",
"hammer bootdisk subnet [OPTIONS]",
"hammer capsule [OPTIONS] SUBCOMMAND [ARG]",
"hammer capsule content [OPTIONS] SUBCOMMAND [ARG]",
"hammer capsule content add-lifecycle-environment [OPTIONS]",
"hammer capsule content available-lifecycle-environments [OPTIONS]",
"hammer capsule content cancel-synchronization [OPTIONS]",
"hammer capsule content info [OPTIONS]",
"hammer capsule content lifecycle-environments [OPTIONS]",
"hammer capsule content reclaim-space [OPTIONS]",
"hammer capsule content remove-lifecycle-environment [OPTIONS]",
"hammer capsule content synchronization-status [OPTIONS]",
"hammer capsule content synchronize [OPTIONS]",
"hammer capsule content update-counts [OPTIONS]",
"hammer capsule create [OPTIONS]",
"hammer capsule <delete|destroy> [OPTIONS]",
"hammer capsule import-subnets [OPTIONS]",
"hammer capsule <info|show> [OPTIONS]",
"hammer capsule <list|index> [OPTIONS]",
"hammer capsule refresh-features [OPTIONS]",
"hammer capsule update [OPTIONS]",
"hammer compute-profile [OPTIONS] SUBCOMMAND [ARG]",
"hammer compute-profile create [OPTIONS]",
"hammer compute-profile <delete|destroy> [OPTIONS]",
"hammer compute-profile <info|show> [OPTIONS]",
"hammer compute-profile <list|index> [OPTIONS]",
"hammer compute-profile update [OPTIONS]",
"hammer compute-profile values [OPTIONS] SUBCOMMAND [ARG]",
"hammer compute-profile values add-interface [OPTIONS]",
"hammer compute-profile values add-volume [OPTIONS]",
"hammer compute-profile values create [OPTIONS]",
"hammer compute-profile values remove-interface [OPTIONS]",
"hammer compute-profile values remove-volume [OPTIONS]",
"hammer compute-profile values update [OPTIONS]",
"hammer compute-profile values update-interface [OPTIONS]",
"hammer compute-profile values update-volume [OPTIONS]",
"hammer compute-resource [OPTIONS] SUBCOMMAND [ARG]",
"hammer compute-resource associate-vms [OPTIONS]",
"hammer compute-resource clusters [OPTIONS]",
"hammer compute-resource create [OPTIONS]",
"hammer compute-resource <delete|destroy> [OPTIONS]",
"hammer compute-resource flavors [OPTIONS]",
"hammer compute-resource folders [OPTIONS]",
"hammer compute-resource image [OPTIONS] SUBCOMMAND [ARG]",
"hammer compute-resource image available [OPTIONS]",
"hammer compute-resource image create [OPTIONS]",
"hammer compute-resource image <delete|destroy> [OPTIONS]",
"hammer compute-resource image <info|show> [OPTIONS]",
"hammer compute-resource image <list|index> [OPTIONS]",
"hammer compute-resource image update [OPTIONS]",
"hammer compute-resource images [OPTIONS]",
"hammer compute-resource <info|show> [OPTIONS]",
"hammer compute-resource <list|index> [OPTIONS]",
"hammer compute-resource networks [OPTIONS]",
"hammer compute-resource resource-pools [OPTIONS]",
"hammer compute-resource security-groups [OPTIONS]",
"hammer compute-resource storage-domains [OPTIONS]",
"hammer compute-resource storage-pods [OPTIONS]",
"hammer compute-resource update [OPTIONS]",
"hammer compute-resource virtual-machine [OPTIONS] SUBCOMMAND [ARG]",
"hammer compute-resource virtual-machine <delete|destroy> [OPTIONS]",
"hammer compute-resource virtual-machine <info|show> [OPTIONS]",
"hammer compute-resource virtual-machine power [OPTIONS]",
"hammer compute-resource virtual-machines [OPTIONS]",
"hammer compute-resource vnic-profiles [OPTIONS]",
"hammer compute-resource zones [OPTIONS]",
"hammer config-report [OPTIONS] SUBCOMMAND [ARG]",
"hammer config-report <delete|destroy> [OPTIONS]",
"hammer config-report <info|show> [OPTIONS]",
"hammer config-report <list|index> [OPTIONS]",
"hammer content-credentials [OPTIONS] SUBCOMMAND [ARG]",
"hammer content-credentials create [OPTIONS]",
"hammer content-credentials <delete|destroy> [OPTIONS]",
"hammer content-credentials <info|show> [OPTIONS]",
"hammer content-credentials <list|index> [OPTIONS]",
"hammer content-credentials update [OPTIONS]",
"hammer content-export [OPTIONS] SUBCOMMAND [ARG]",
"hammer content-export complete [OPTIONS] SUBCOMMAND [ARG]",
"hammer content-export complete library [OPTIONS]",
"hammer content-export complete repository [OPTIONS]",
"hammer content-export complete version [OPTIONS]",
"hammer content-export generate-listing [OPTIONS]",
"hammer content-export generate-metadata [OPTIONS]",
"hammer content-export incremental [OPTIONS] SUBCOMMAND [ARG]",
"hammer content-export incremental library [OPTIONS]",
"hammer content-export incremental repository [OPTIONS]",
"hammer content-export incremental version [OPTIONS]",
"hammer content-export <list|index> [OPTIONS]",
"hammer content-import [OPTIONS] SUBCOMMAND [ARG]",
"hammer content-import library [OPTIONS]",
"hammer content-import <list|index> [OPTIONS]",
"hammer content-import repository [OPTIONS]",
"hammer content-import version [OPTIONS]",
"hammer content-units [OPTIONS] SUBCOMMAND [ARG]",
"hammer content-units <info|show> [OPTIONS]",
"hammer content-units <list|index> [OPTIONS]",
"hammer content-view [OPTIONS] SUBCOMMAND [ARG]",
"hammer content-view add-repository [OPTIONS]",
"hammer content-view add-version [OPTIONS]",
"hammer content-view component [OPTIONS] SUBCOMMAND [ARG]",
"hammer content-view component add [OPTIONS]",
"hammer content-view component <list|index> [OPTIONS]",
"hammer content-view component remove [OPTIONS]",
"hammer content-view component update [OPTIONS]",
"hammer content-view copy [OPTIONS]",
"hammer content-view create [OPTIONS]",
"hammer content-view delete [OPTIONS]",
"hammer content-view filter [OPTIONS] SUBCOMMAND [ARG]",
"hammer content-view filter add-repository [OPTIONS]",
"hammer content-view filter create [OPTIONS]",
"hammer content-view filter <delete|destroy> [OPTIONS]",
"hammer content-view filter <info|show> [OPTIONS]",
"hammer content-view filter <list|index> [OPTIONS]",
"hammer content-view filter remove-repository [OPTIONS]",
"hammer content-view filter rule [OPTIONS] SUBCOMMAND [ARG]",
"hammer content-view filter rule create [OPTIONS]",
"hammer content-view filter rule <delete|destroy> [OPTIONS]",
"hammer content-view filter rule <info|show> [OPTIONS]",
"hammer content-view filter rule <list|index> [OPTIONS]",
"hammer content-view filter rule update [OPTIONS]",
"hammer content-view filter update [OPTIONS]",
"hammer content-view <info|show> [OPTIONS]",
"hammer content-view <list|index> [OPTIONS]",
"hammer content-view publish [OPTIONS]",
"hammer content-view purge [OPTIONS]",
"hammer content-view remove [OPTIONS]",
"hammer content-view remove-from-environment [OPTIONS]",
"hammer content-view remove-repository [OPTIONS]",
"hammer content-view remove-version [OPTIONS]",
"hammer content-view update [OPTIONS]",
"hammer content-view version [OPTIONS] SUBCOMMAND [ARG]",
"hammer content-view version delete [OPTIONS]",
"hammer content-view version incremental-update [OPTIONS]",
"hammer content-view version <info|show> [OPTIONS]",
"hammer content-view version <list|index> [OPTIONS]",
"hammer content-view version promote [OPTIONS]",
"hammer content-view version republish-repositories [OPTIONS]",
"hammer content-view version update [OPTIONS]",
"hammer deb-package [OPTIONS] SUBCOMMAND [ARG]",
"hammer deb-package <info|show> [OPTIONS]",
"hammer deb-package <list|index> [OPTIONS]",
"hammer defaults [OPTIONS] SUBCOMMAND [ARG]",
"hammer defaults add [OPTIONS]",
"hammer defaults delete [OPTIONS]",
"hammer defaults list [OPTIONS]",
"hammer defaults providers [OPTIONS]",
"hammer discovery [OPTIONS] SUBCOMMAND [ARG]",
"hammer discovery auto-provision [OPTIONS]",
"hammer discovery <delete|destroy> [OPTIONS]",
"hammer discovery facts [OPTIONS]",
"hammer discovery <info|show> [OPTIONS]",
"hammer discovery <list|index> [OPTIONS]",
"hammer discovery provision [OPTIONS]",
"hammer discovery reboot [OPTIONS]",
"hammer discovery refresh-facts [OPTIONS]",
"hammer discovery-rule [OPTIONS] SUBCOMMAND [ARG]",
"hammer discovery-rule create [OPTIONS]",
"hammer discovery-rule <delete|destroy> [OPTIONS]",
"hammer discovery-rule <info|show> [OPTIONS]",
"hammer discovery-rule <list|index> [OPTIONS]",
"hammer discovery-rule update [OPTIONS]",
"hammer docker [OPTIONS] SUBCOMMAND [ARG]",
"hammer docker manifest [OPTIONS] SUBCOMMAND [ARG]",
"hammer docker manifest <info|show> [OPTIONS]",
"hammer docker manifest <list|index> [OPTIONS]",
"hammer docker tag [OPTIONS] SUBCOMMAND [ARG]",
"hammer docker tag <info|show> [OPTIONS]",
"hammer docker tag <list|index> [OPTIONS]",
"hammer domain [OPTIONS] SUBCOMMAND [ARG]",
"hammer domain create [OPTIONS]",
"hammer domain <delete|destroy> [OPTIONS]",
"hammer domain delete-parameter [OPTIONS]",
"hammer domain <info|show> [OPTIONS]",
"hammer domain <list|index> [OPTIONS]",
"hammer domain set-parameter [OPTIONS]",
"hammer domain update [OPTIONS]",
"hammer erratum [OPTIONS] SUBCOMMAND [ARG]",
"hammer erratum info [OPTIONS]",
"hammer erratum <list|index> [OPTIONS]",
"hammer export-templates [OPTIONS]",
"hammer fact [OPTIONS] SUBCOMMAND [ARG]",
"hammer fact <list|index> [OPTIONS]",
"hammer file [OPTIONS] SUBCOMMAND [ARG]",
"hammer file <info|show> [OPTIONS]",
"hammer file <list|index> [OPTIONS]",
"hammer filter [OPTIONS] SUBCOMMAND [ARG]",
"hammer filter available-permissions [OPTIONS]",
"hammer filter available-resources [OPTIONS]",
"hammer filter create [OPTIONS]",
"hammer filter <delete|destroy> [OPTIONS]",
"hammer filter <info|show> [OPTIONS]",
"hammer filter <list|index> [OPTIONS]",
"hammer filter update [OPTIONS]",
"hammer foreign-input-set [OPTIONS] SUBCOMMAND [ARG]",
"hammer foreign-input-set create [OPTIONS]",
"hammer foreign-input-set <delete|destroy> [OPTIONS]",
"hammer foreign-input-set <info|show> [OPTIONS]",
"hammer foreign-input-set <list|index> [OPTIONS]",
"hammer foreign-input-set update [OPTIONS]",
"hammer full-help [OPTIONS]",
"hammer global-parameter [OPTIONS] SUBCOMMAND [ARG]",
"hammer global-parameter <delete|destroy> [OPTIONS]",
"hammer global-parameter <list|index> [OPTIONS]",
"hammer global-parameter set [OPTIONS]",
"hammer host [OPTIONS] SUBCOMMAND [ARG]",
"hammer host ansible-roles [OPTIONS] SUBCOMMAND [ARG]",
"hammer host ansible-roles add [OPTIONS]",
"hammer host ansible-roles assign [OPTIONS]",
"hammer host ansible-roles <list|index> [OPTIONS]",
"hammer host ansible-roles play [OPTIONS]",
"hammer host ansible-roles remove [OPTIONS]",
"hammer host boot [OPTIONS]",
"hammer host config-reports [OPTIONS]",
"hammer host create [OPTIONS]",
"hammer host deb-package [OPTIONS] SUBCOMMAND [ARG]",
"hammer host deb-package <list|index> [OPTIONS]",
"hammer host <delete|destroy> [OPTIONS]",
"hammer host delete-parameter [OPTIONS]",
"hammer host disassociate [OPTIONS]",
"hammer host enc-dump [OPTIONS]",
"hammer host errata [OPTIONS] SUBCOMMAND [ARG]",
"hammer host errata apply [OPTIONS]",
"hammer host errata info [OPTIONS]",
"hammer host errata list [OPTIONS]",
"hammer host errata recalculate [OPTIONS]",
"hammer host facts [OPTIONS]",
"hammer host <info|show> [OPTIONS]",
"hammer host interface [OPTIONS] SUBCOMMAND [ARG]",
"hammer host interface create [OPTIONS]",
"hammer host interface <delete|destroy> [OPTIONS]",
"hammer host interface <info|show> [OPTIONS]",
"hammer host interface <list|index> [OPTIONS]",
"hammer host interface update [OPTIONS]",
"hammer host <list|index> [OPTIONS]",
"hammer host package [OPTIONS] SUBCOMMAND [ARG]",
"hammer host package install [OPTIONS]",
"hammer host package <list|index> [OPTIONS]",
"hammer host package remove [OPTIONS]",
"hammer host package upgrade [OPTIONS]",
"hammer host package upgrade-all [OPTIONS]",
"hammer host package-group [OPTIONS] SUBCOMMAND [ARG]",
"hammer host package-group install [OPTIONS]",
"hammer host package-group remove [OPTIONS]",
"hammer host policies-enc [OPTIONS]",
"hammer host reboot [OPTIONS]",
"hammer host rebuild-config [OPTIONS]",
"hammer host reports [OPTIONS]",
"hammer host reset [OPTIONS]",
"hammer host set-parameter [OPTIONS]",
"hammer host start [OPTIONS]",
"hammer host status [OPTIONS]",
"hammer host stop [OPTIONS]",
"hammer host subscription [OPTIONS] SUBCOMMAND [ARG]",
"hammer host subscription attach [OPTIONS]",
"hammer host subscription auto-attach [OPTIONS]",
"hammer host subscription content-override [OPTIONS]",
"hammer host subscription enabled-repositories [OPTIONS]",
"hammer host subscription product-content [OPTIONS]",
"hammer host subscription register [OPTIONS]",
"hammer host subscription remove [OPTIONS]",
"hammer host subscription unregister [OPTIONS]",
"hammer host traces [OPTIONS] SUBCOMMAND [ARG]",
"hammer host traces list [OPTIONS]",
"hammer host traces resolve [OPTIONS]",
"hammer host update [OPTIONS]",
"hammer host-collection [OPTIONS] SUBCOMMAND [ARG]",
"hammer host-collection add-host [OPTIONS]",
"hammer host-collection copy [OPTIONS]",
"hammer host-collection create [OPTIONS]",
"hammer host-collection <delete|destroy> [OPTIONS]",
"hammer host-collection erratum [OPTIONS] SUBCOMMAND [ARG]",
"hammer host-collection erratum install [OPTIONS]",
"hammer host-collection hosts [OPTIONS]",
"hammer host-collection <info|show> [OPTIONS]",
"hammer host-collection <list|index> [OPTIONS]",
"hammer host-collection package [OPTIONS] SUBCOMMAND [ARG]",
"hammer host-collection package install [OPTIONS]",
"hammer host-collection package remove [OPTIONS]",
"hammer host-collection package update [OPTIONS]",
"hammer host-collection package-group [OPTIONS] SUBCOMMAND [ARG]",
"hammer host-collection package-group install [OPTIONS]",
"hammer host-collection package-group remove [OPTIONS]",
"hammer host-collection package-group update [OPTIONS]",
"hammer host-collection remove-host [OPTIONS]",
"hammer host-collection update [OPTIONS]",
"hammer host-registration [OPTIONS] SUBCOMMAND [ARG]",
"hammer host-registration generate-command [OPTIONS]",
"hammer hostgroup [OPTIONS] SUBCOMMAND [ARG]",
"hammer hostgroup ansible-roles [OPTIONS] SUBCOMMAND [ARG]",
"hammer hostgroup ansible-roles add [OPTIONS]",
"hammer hostgroup ansible-roles assign [OPTIONS]",
"hammer hostgroup ansible-roles <list|index> [OPTIONS]",
"hammer hostgroup ansible-roles play [OPTIONS]",
"hammer hostgroup ansible-roles remove [OPTIONS]",
"hammer hostgroup create [OPTIONS]",
"hammer hostgroup <delete|destroy> [OPTIONS]",
"hammer hostgroup delete-parameter [OPTIONS]",
"hammer hostgroup <info|show> [OPTIONS]",
"hammer hostgroup <list|index> [OPTIONS]",
"hammer hostgroup rebuild-config [OPTIONS]",
"hammer hostgroup set-parameter [OPTIONS]",
"hammer hostgroup update [OPTIONS]",
"hammer http-proxy [OPTIONS] SUBCOMMAND [ARG]",
"hammer http-proxy create [OPTIONS]",
"hammer http-proxy <delete|destroy> [OPTIONS]",
"hammer http-proxy <info|show> [OPTIONS]",
"hammer http-proxy <list|index> [OPTIONS]",
"hammer http-proxy update [OPTIONS]",
"hammer import-templates [OPTIONS]",
"hammer job-invocation [OPTIONS] SUBCOMMAND [ARG]",
"hammer job-invocation cancel [OPTIONS]",
"hammer job-invocation create [OPTIONS]",
"hammer job-invocation <info|show> [OPTIONS]",
"hammer job-invocation <list|index> [OPTIONS]",
"hammer job-invocation output [OPTIONS]",
"hammer job-invocation rerun [OPTIONS]",
"hammer job-template [OPTIONS] SUBCOMMAND [ARG]",
"hammer job-template create [OPTIONS]",
"hammer job-template <delete|destroy> [OPTIONS]",
"hammer job-template dump [OPTIONS]",
"hammer job-template export [OPTIONS]",
"hammer job-template import [OPTIONS]",
"hammer job-template <info|show> [OPTIONS]",
"hammer job-template <list|index> [OPTIONS]",
"hammer job-template update [OPTIONS]",
"hammer lifecycle-environment [OPTIONS] SUBCOMMAND [ARG]",
"hammer lifecycle-environment create [OPTIONS]",
"hammer lifecycle-environment <delete|destroy> [OPTIONS]",
"hammer lifecycle-environment <info|show> [OPTIONS]",
"hammer lifecycle-environment <list|index> [OPTIONS]",
"hammer lifecycle-environment paths [OPTIONS]",
"hammer lifecycle-environment update [OPTIONS]",
"hammer location [OPTIONS] SUBCOMMAND [ARG]",
"hammer location add-compute-resource [OPTIONS]",
"hammer location add-domain [OPTIONS]",
"hammer location add-hostgroup [OPTIONS]",
"hammer location add-medium [OPTIONS]",
"hammer location add-organization [OPTIONS]",
"hammer location add-provisioning-template [OPTIONS]",
"hammer location add-smart-proxy [OPTIONS]",
"hammer location add-subnet [OPTIONS]",
"hammer location add-user [OPTIONS]",
"hammer location create [OPTIONS]",
"hammer location <delete|destroy> [OPTIONS]",
"hammer location delete-parameter [OPTIONS]",
"hammer location <info|show> [OPTIONS]",
"hammer location <list|index> [OPTIONS]",
"hammer location remove-compute-resource [OPTIONS]",
"hammer location remove-domain [OPTIONS]",
"hammer location remove-hostgroup [OPTIONS]",
"hammer location remove-medium [OPTIONS]",
"hammer location remove-organization [OPTIONS]",
"hammer location remove-provisioning-template [OPTIONS]",
"hammer location remove-smart-proxy [OPTIONS]",
"hammer location remove-subnet [OPTIONS]",
"hammer location remove-user [OPTIONS]",
"hammer location set-parameter [OPTIONS]",
"hammer location update [OPTIONS]",
"hammer mail-notification [OPTIONS] SUBCOMMAND [ARG]",
"hammer mail-notification <info|show> [OPTIONS]",
"hammer mail-notification <list|index> [OPTIONS]",
"hammer medium [OPTIONS] SUBCOMMAND [ARG]",
"hammer medium add-operatingsystem [OPTIONS]",
"hammer medium create [OPTIONS]",
"hammer medium <delete|destroy> [OPTIONS]",
"hammer medium <info|show> [OPTIONS]",
"hammer medium <list|index> [OPTIONS]",
"hammer medium remove-operatingsystem [OPTIONS]",
"hammer medium update [OPTIONS]",
"hammer model [OPTIONS] SUBCOMMAND [ARG]",
"hammer model create [OPTIONS]",
"hammer model <delete|destroy> [OPTIONS]",
"hammer model <info|show> [OPTIONS]",
"hammer model <list|index> [OPTIONS]",
"hammer model update [OPTIONS]",
"hammer module-stream [OPTIONS] SUBCOMMAND [ARG]",
"hammer module-stream <info|show> [OPTIONS]",
"hammer module-stream <list|index> [OPTIONS]",
"hammer organization [OPTIONS] SUBCOMMAND [ARG]",
"hammer organization add-compute-resource [OPTIONS]",
"hammer organization add-domain [OPTIONS]",
"hammer organization add-hostgroup [OPTIONS]",
"hammer organization add-location [OPTIONS]",
"hammer organization add-medium [OPTIONS]",
"hammer organization add-provisioning-template [OPTIONS]",
"hammer organization add-smart-proxy [OPTIONS]",
"hammer organization add-subnet [OPTIONS]",
"hammer organization add-user [OPTIONS]",
"hammer organization configure-cdn [OPTIONS]",
"hammer organization create [OPTIONS]",
"hammer organization <delete|destroy> [OPTIONS]",
"hammer organization delete-parameter [OPTIONS]",
"hammer organization <info|show> [OPTIONS]",
"hammer organization <list|index> [OPTIONS]",
"hammer organization remove-compute-resource [OPTIONS]",
"hammer organization remove-domain [OPTIONS]",
"hammer organization remove-hostgroup [OPTIONS]",
"hammer organization remove-location [OPTIONS]",
"hammer organization remove-medium [OPTIONS]",
"hammer organization remove-provisioning-template [OPTIONS]",
"hammer organization remove-smart-proxy [OPTIONS]",
"hammer organization remove-subnet [OPTIONS]",
"hammer organization remove-user [OPTIONS]",
"hammer organization set-parameter [OPTIONS]",
"hammer organization update [OPTIONS]",
"hammer os [OPTIONS] SUBCOMMAND [ARG]",
"hammer os add-architecture [OPTIONS]",
"hammer os add-provisioning-template [OPTIONS]",
"hammer os add-ptable [OPTIONS]",
"hammer os create [OPTIONS]",
"hammer os <delete|destroy> [OPTIONS]",
"hammer os delete-default-template [OPTIONS]",
"hammer os delete-parameter [OPTIONS]",
"hammer os <info|show> [OPTIONS]",
"hammer os <list|index> [OPTIONS]",
"hammer os remove-architecture [OPTIONS]",
"hammer os remove-provisioning-template [OPTIONS]",
"hammer os remove-ptable [OPTIONS]",
"hammer os set-default-template [OPTIONS]",
"hammer os set-parameter [OPTIONS]",
"hammer os update [OPTIONS]",
"hammer package [OPTIONS] SUBCOMMAND [ARG]",
"hammer package <info|show> [OPTIONS]",
"hammer package <list|index> [OPTIONS]",
"hammer package-group [OPTIONS] SUBCOMMAND [ARG]",
"hammer package-group <info|show> [OPTIONS]",
"hammer package-group <list|index> [OPTIONS]",
"hammer partition-table [OPTIONS] SUBCOMMAND [ARG]",
"hammer partition-table add-operatingsystem [OPTIONS]",
"hammer partition-table create [OPTIONS]",
"hammer partition-table <delete|destroy> [OPTIONS]",
"hammer partition-table dump [OPTIONS]",
"hammer partition-table export [OPTIONS]",
"hammer partition-table import [OPTIONS]",
"hammer partition-table <info|show> [OPTIONS]",
"hammer partition-table <list|index> [OPTIONS]",
"hammer partition-table remove-operatingsystem [OPTIONS]",
"hammer partition-table update [OPTIONS]",
"hammer ping [OPTIONS] [SUBCOMMAND] [ARG]",
"hammer ping foreman [OPTIONS]",
"hammer ping katello [OPTIONS]",
"hammer policy [OPTIONS] SUBCOMMAND [ARG]",
"hammer policy create [OPTIONS]",
"hammer policy <delete|destroy> [OPTIONS]",
"hammer policy hosts [OPTIONS]",
"hammer policy <info|show> [OPTIONS]",
"hammer policy <list|index> [OPTIONS]",
"hammer policy update [OPTIONS]",
"hammer prebuild-bash-completion [OPTIONS]",
"hammer product [OPTIONS] SUBCOMMAND [ARG]",
"hammer product create [OPTIONS]",
"hammer product <delete|destroy> [OPTIONS]",
"hammer product <info|show> [OPTIONS]",
"hammer product <list|index> [OPTIONS]",
"hammer product remove-sync-plan [OPTIONS]",
"hammer product set-sync-plan [OPTIONS]",
"hammer product synchronize [OPTIONS]",
"hammer product update [OPTIONS]",
"hammer product update-proxy [OPTIONS]",
"hammer proxy [OPTIONS] SUBCOMMAND [ARG]",
"hammer proxy content [OPTIONS] SUBCOMMAND [ARG]",
"hammer proxy content add-lifecycle-environment [OPTIONS]",
"hammer proxy content available-lifecycle-environments [OPTIONS]",
"hammer proxy content cancel-synchronization [OPTIONS]",
"hammer proxy content info [OPTIONS]",
"hammer proxy content lifecycle-environments [OPTIONS]",
"hammer proxy content reclaim-space [OPTIONS]",
"hammer proxy content remove-lifecycle-environment [OPTIONS]",
"hammer proxy content synchronization-status [OPTIONS]",
"hammer proxy content synchronize [OPTIONS]",
"hammer proxy content update-counts [OPTIONS]",
"hammer proxy create [OPTIONS]",
"hammer proxy <delete|destroy> [OPTIONS]",
"hammer proxy import-subnets [OPTIONS]",
"hammer proxy <info|show> [OPTIONS]",
"hammer proxy <list|index> [OPTIONS]",
"hammer proxy refresh-features [OPTIONS]",
"hammer proxy update [OPTIONS]",
"hammer realm [OPTIONS] SUBCOMMAND [ARG]",
"hammer realm create [OPTIONS]",
"hammer realm <delete|destroy> [OPTIONS]",
"hammer realm <info|show> [OPTIONS]",
"hammer realm <list|index> [OPTIONS]",
"hammer realm update [OPTIONS]",
"hammer recurring-logic [OPTIONS] SUBCOMMAND [ARG]",
"hammer recurring-logic cancel [OPTIONS]",
"hammer recurring-logic delete [OPTIONS]",
"hammer recurring-logic <info|show> [OPTIONS]",
"hammer recurring-logic <list|index> [OPTIONS]",
"hammer remote-execution-feature [OPTIONS] SUBCOMMAND [ARG]",
"hammer remote-execution-feature <info|show> [OPTIONS]",
"hammer remote-execution-feature <list|index> [OPTIONS]",
"hammer remote-execution-feature update [OPTIONS]",
"hammer report [OPTIONS] SUBCOMMAND [ARG]",
"hammer report <delete|destroy> [OPTIONS]",
"hammer report <info|show> [OPTIONS]",
"hammer report <list|index> [OPTIONS]",
"hammer report-template [OPTIONS] SUBCOMMAND [ARG]",
"hammer report-template clone [OPTIONS]",
"hammer report-template create [OPTIONS]",
"hammer report-template <delete|destroy> [OPTIONS]",
"hammer report-template dump [OPTIONS]",
"hammer report-template export [OPTIONS]",
"hammer report-template generate [OPTIONS]",
"hammer report-template import [OPTIONS]",
"hammer report-template <info|show> [OPTIONS]",
"hammer report-template <list|index> [OPTIONS]",
"hammer report-template report-data [OPTIONS]",
"hammer report-template schedule [OPTIONS]",
"hammer report-template update [OPTIONS]",
"hammer repository [OPTIONS] SUBCOMMAND [ARG]",
"hammer repository create [OPTIONS]",
"hammer repository <delete|destroy> [OPTIONS]",
"hammer repository <info|show> [OPTIONS]",
"hammer repository <list|index> [OPTIONS]",
"hammer repository reclaim-space [OPTIONS]",
"hammer repository remove-content [OPTIONS]",
"hammer repository republish [OPTIONS]",
"hammer repository synchronize [OPTIONS]",
"hammer repository types [OPTIONS]",
"hammer repository update [OPTIONS]",
"hammer repository upload-content [OPTIONS]",
"hammer repository-set [OPTIONS] SUBCOMMAND [ARG]",
"hammer repository-set available-repositories [OPTIONS]",
"hammer repository-set disable [OPTIONS]",
"hammer repository-set enable [OPTIONS]",
"hammer repository-set <info|show> [OPTIONS]",
"hammer repository-set <list|index> [OPTIONS]",
"hammer role [OPTIONS] SUBCOMMAND [ARG]",
"hammer role clone [OPTIONS]",
"hammer role create [OPTIONS]",
"hammer role <delete|destroy> [OPTIONS]",
"hammer role filters [OPTIONS]",
"hammer role <info|show> [OPTIONS]",
"hammer role <list|index> [OPTIONS]",
"hammer role update [OPTIONS]",
"hammer scap-content [OPTIONS] SUBCOMMAND [ARG]",
"hammer scap-content bulk-upload [OPTIONS]",
"hammer scap-content create [OPTIONS]",
"hammer scap-content <delete|destroy> [OPTIONS]",
"hammer scap-content download [OPTIONS]",
"hammer scap-content <info|show> [OPTIONS]",
"hammer scap-content <list|index> [OPTIONS]",
"hammer scap-content update [OPTIONS]",
"hammer scap-content-profile [OPTIONS] SUBCOMMAND [ARG]",
"hammer scap-content-profile <list|index> [OPTIONS]",
"hammer settings [OPTIONS] SUBCOMMAND [ARG]",
"hammer settings <info|show> [OPTIONS]",
"hammer settings <list|index> [OPTIONS]",
"hammer settings set [OPTIONS]",
"hammer shell [OPTIONS]",
"hammer simple-content-access [OPTIONS] SUBCOMMAND [ARG]",
"hammer simple-content-access disable [OPTIONS]",
"hammer simple-content-access enable [OPTIONS]",
"hammer simple-content-access status [OPTIONS]",
"hammer srpm [OPTIONS] SUBCOMMAND [ARG]",
"hammer srpm <info|show> [OPTIONS]",
"hammer srpm <list|index> [OPTIONS]",
"hammer status [OPTIONS] [SUBCOMMAND] [ARG]",
"hammer status foreman [OPTIONS]",
"hammer status katello [OPTIONS]",
"hammer subnet [OPTIONS] SUBCOMMAND [ARG]",
"hammer subnet create [OPTIONS]",
"hammer subnet <delete|destroy> [OPTIONS]",
"hammer subnet delete-parameter [OPTIONS]",
"hammer subnet <info|show> [OPTIONS]",
"hammer subnet <list|index> [OPTIONS]",
"hammer subnet set-parameter [OPTIONS]",
"hammer subnet update [OPTIONS]",
"hammer subscription [OPTIONS] SUBCOMMAND [ARG]",
"hammer subscription delete-manifest [OPTIONS]",
"hammer subscription <list|index> [OPTIONS]",
"hammer subscription manifest-history [OPTIONS]",
"hammer subscription refresh-manifest [OPTIONS]",
"hammer subscription upload [OPTIONS]",
"hammer sync-plan [OPTIONS] SUBCOMMAND [ARG]",
"hammer sync-plan create [OPTIONS]",
"hammer sync-plan <delete|destroy> [OPTIONS]",
"hammer sync-plan <info|show> [OPTIONS]",
"hammer sync-plan <list|index> [OPTIONS]",
"hammer sync-plan update [OPTIONS]",
"hammer tailoring-file [OPTIONS] SUBCOMMAND [ARG]",
"hammer tailoring-file create [OPTIONS]",
"hammer tailoring-file <delete|destroy> [OPTIONS]",
"hammer tailoring-file download [OPTIONS]",
"hammer tailoring-file <info|show> [OPTIONS]",
"hammer tailoring-file <list|index> [OPTIONS]",
"hammer tailoring-file update [OPTIONS]",
"hammer task [OPTIONS] SUBCOMMAND [ARG]",
"hammer task <info|show> [OPTIONS]",
"hammer task <list|index> [OPTIONS]",
"hammer task progress [OPTIONS]",
"hammer task resume [OPTIONS]",
"hammer template [OPTIONS] SUBCOMMAND [ARG]",
"hammer template add-operatingsystem [OPTIONS]",
"hammer template build-pxe-default [OPTIONS]",
"hammer template clone [OPTIONS]",
"hammer template combination [OPTIONS] SUBCOMMAND [ARG]",
"hammer template combination create [OPTIONS]",
"hammer template combination <delete|destroy> [OPTIONS]",
"hammer template combination <info|show> [OPTIONS]",
"hammer template combination <list|index> [OPTIONS]",
"hammer template combination update [OPTIONS]",
"hammer template create [OPTIONS]",
"hammer template <delete|destroy> [OPTIONS]",
"hammer template dump [OPTIONS]",
"hammer template export [OPTIONS]",
"hammer template import [OPTIONS]",
"hammer template <info|show> [OPTIONS]",
"hammer template kinds [OPTIONS]",
"hammer template <list|index> [OPTIONS]",
"hammer template remove-operatingsystem [OPTIONS]",
"hammer template update [OPTIONS]",
"hammer template-input [OPTIONS] SUBCOMMAND [ARG]",
"hammer template-input create [OPTIONS]",
"hammer template-input <delete|destroy> [OPTIONS]",
"hammer template-input <info|show> [OPTIONS]",
"hammer template-input <list|index> [OPTIONS]",
"hammer template-input update [OPTIONS]",
"hammer user [OPTIONS] SUBCOMMAND [ARG]",
"hammer user access-token [OPTIONS] SUBCOMMAND [ARG]",
"hammer user access-token create [OPTIONS]",
"hammer user access-token <info|show> [OPTIONS]",
"hammer user access-token <list|index> [OPTIONS]",
"hammer user access-token revoke [OPTIONS]",
"hammer user add-role [OPTIONS]",
"hammer user create [OPTIONS]",
"hammer user <delete|destroy> [OPTIONS]",
"hammer user <info|show> [OPTIONS]",
"hammer user <list|index> [OPTIONS]",
"hammer user mail-notification [OPTIONS] SUBCOMMAND [ARG]",
"hammer user mail-notification add [OPTIONS]",
"hammer user mail-notification <list|index> [OPTIONS]",
"hammer user mail-notification remove [OPTIONS]",
"hammer user mail-notification update [OPTIONS]",
"hammer user remove-role [OPTIONS]",
"hammer user ssh-keys [OPTIONS] SUBCOMMAND [ARG]",
"hammer user ssh-keys add [OPTIONS]",
"hammer user ssh-keys <delete|destroy> [OPTIONS]",
"hammer user ssh-keys <info|show> [OPTIONS]",
"hammer user ssh-keys <list|index> [OPTIONS]",
"hammer user table-preference [OPTIONS] SUBCOMMAND [ARG]",
"hammer user table-preference create [OPTIONS]",
"hammer user table-preference <delete|destroy> [OPTIONS]",
"hammer user table-preference <info|show> [OPTIONS]",
"hammer user table-preference <list|index> [OPTIONS]",
"hammer user table-preference update [OPTIONS]",
"hammer user update [OPTIONS]",
"hammer user-group [OPTIONS] SUBCOMMAND [ARG]",
"hammer user-group add-role [OPTIONS]",
"hammer user-group add-user [OPTIONS]",
"hammer user-group add-user-group [OPTIONS]",
"hammer user-group create [OPTIONS]",
"hammer user-group <delete|destroy> [OPTIONS]",
"hammer user-group external [OPTIONS] SUBCOMMAND [ARG]",
"hammer user-group external create [OPTIONS]",
"hammer user-group external <delete|destroy> [OPTIONS]",
"hammer user-group external <info|show> [OPTIONS]",
"hammer user-group external <list|index> [OPTIONS]",
"hammer user-group external refresh [OPTIONS]",
"hammer user-group external update [OPTIONS]",
"hammer user-group <info|show> [OPTIONS]",
"hammer user-group <list|index> [OPTIONS]",
"hammer user-group remove-role [OPTIONS]",
"hammer user-group remove-user [OPTIONS]",
"hammer user-group remove-user-group [OPTIONS]",
"hammer user-group update [OPTIONS]",
"hammer virt-who-config [OPTIONS] SUBCOMMAND [ARG]",
"hammer virt-who-config create [OPTIONS]",
"hammer virt-who-config <delete|destroy> [OPTIONS]",
"hammer virt-who-config deploy [OPTIONS]",
"hammer virt-who-config fetch [OPTIONS]",
"hammer virt-who-config <info|show> [OPTIONS]",
"hammer virt-who-config <list|index> [OPTIONS]",
"hammer virt-who-config update [OPTIONS]",
"hammer webhook [OPTIONS] SUBCOMMAND [ARG]",
"hammer webhook create [OPTIONS]",
"hammer webhook <delete|destroy> [OPTIONS]",
"hammer webhook <info|show> [OPTIONS]",
"hammer webhook <list|index> [OPTIONS]",
"hammer webhook update [OPTIONS]",
"hammer webhook-template [OPTIONS] SUBCOMMAND [ARG]",
"hammer webhook-template clone [OPTIONS]",
"hammer webhook-template create [OPTIONS]",
"hammer webhook-template <delete|destroy> [OPTIONS]",
"hammer webhook-template dump [OPTIONS]",
"hammer webhook-template export [OPTIONS]",
"hammer webhook-template import [OPTIONS]",
"hammer webhook-template <info|show> [OPTIONS]",
"hammer webhook-template <list|index> [OPTIONS]",
"hammer webhook-template update [OPTIONS]"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/hammer_cli_guide/hammer-reference |
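The reference above lists command syntax only. As a brief illustrative sketch (not part of the original reference — the organization, collection, and host names below are placeholders), a typical session might chain a few of the listed subcommands:

# List hosts visible to an organization
hammer host list --organization "My_Organization"
# Show details for one host
hammer host info --name client.example.com
# Group hosts into a collection for bulk package and errata operations
hammer host-collection create --name "Web Servers" --organization "My_Organization"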
Chapter 1. About the Job Explorer | Chapter 1. About the Job Explorer The Job Explorer provides a detailed view of jobs run on Ansible Tower clusters across your organizations. You can access the Job Explorer by selecting Automation Analytics Job Explorer from the navigation panel or using the drill-down view available across each of the application's charts. Using the Job Explorer you can: Filter the types of jobs running in a cluster or organization; Directly link out to templates on your Ansible Tower for further assessment; Identify and review job failures; View more details for top templates running on a cluster; Filter out nested workflows and jobs. You can review the features and details of the Job Explorer in the following sections. 1.1. Creating a filtered and sorted view of jobs You can view a list of jobs, filtered by attributes you choose, using the Job Explorer. Filter options include: Status Job Cluster Organization Template You can sort results by a set of parameters by using the Sort by options from the filter toolbar. Procedure From the navigation panel, select Automation Analytics Job Explorer. In the filter toolbar, click the Filter by drop-down menu and select Job. In that same toolbar, select a time range. Job Explorer will now display jobs within that time range. To further refine results, return to the filter toolbar and select a different attribute to filter results by, including job status, cluster, or organization. The Job Explorer view will update and present a list of jobs based on the attributes you selected. 1.1.1. Viewing more information about an individual job You can click the arrow icon next to the job Id/Name column to view more details related to that job. 1.1.2. Reviewing job details on Ansible Tower Click the job in the Id/Name column to view the job itself on the Ansible Tower job details page. For more information on viewing job details on Ansible Tower, see Jobs in the Ansible Tower User Guide. 1.2. Drilling down into cluster data You can drill down into cluster data to review more detailed information about successful or failed jobs. The detailed view, presented on the Job Explorer page, provides information on the cluster, organization, template, and job type. Filters you select on the Clusters view carry over to the Job Explorer page. Details on those job templates will appear in the Job Explorer view, modified by any filters you selected in the Clusters view. For example, you can drill down to review details for failed jobs in a cluster. See below to learn more. 1.2.1. Example: Reviewing failed jobs You can view more detail about failed jobs across your organization by drilling down on the graph on the Clusters view and using the Job Explorer to refine results. Clicking on a specific portion in a graph will open that information in the Job Explorer, preserving contextual information created when using filters on the Clusters view. Procedure From the navigation panel, select Automation Analytics Clusters. In the filter toolbar, apply filters for clusters and time range of your choosing. Click on a segment on the graph. You will be redirected to the Job Explorer view, which will present a list of successful and failed jobs corresponding to that day on the bar graph. To view only failed jobs: Select Status from the Filter by list. Select the Failed filter. The view will update to show only failed jobs run on that day. Add additional context to the view by applying additional filters and selecting attributes to sort results by.
Link out and review more information for failed jobs on the Ansible Tower job details page. 1.3. Viewing top templates job details for a specific cluster You can view job instances for top templates in a cluster to learn more about individual job runs associated with that template or to apply filters to further drill down into the data. Procedure From the navigation panel, select Automation Analytics Clusters. Select a cluster from the Clusters list. The view will update with that cluster's data. Click on a template name in Top Templates. Click View all jobs in the modal that appears. The Job Explorer will display all jobs on the chosen cluster associated with that template. The view presented will preserve the contextual information of the template based on the parameters selected in the Clusters view. 1.4. Ignoring nested workflows and jobs Use the toggle switch on the Job Explorer view to ignore nested workflows and jobs. Select this option to filter out duplicate workflow and job template entries and exclude those items from overall totals. Note About nested workflows Nested workflows allow you to create workflow job templates that call other workflow job templates. Nested workflows promote reuse, as modular components, of workflows that include existing business logic and organizational requirements in automating complex processes and operations. To learn more about nested workflows, see Workflows in the Ansible Tower User Guide. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/evaluating_your_automation_controller_job_runs_using_the_job_explorer/assembly-using-job-explorer
Providing feedback on Red Hat build of Quarkus documentation | Providing feedback on Red Hat build of Quarkus documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/developing_and_compiling_your_red_hat_build_of_quarkus_applications_with_apache_maven/proc_providing-feedback-on-red-hat-documentation_quarkus-maven |
Chapter 9. Advisories related to this release | Chapter 9. Advisories related to this release The following advisories have been issued to document enhancements, bug fixes, and CVE fixes included in this release: RHSA-2024:1916 RHSA-2024:1917 RHBA-2024:1918 | null | https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/red_hat_jboss_web_server_6.0_service_pack_2_release_notes/advisories_related_to_this_release |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.8/html/openid_connect_oidc_authentication/making-open-source-more-inclusive |
34.2.2. Supported Backup Methods | 34.2.2. Supported Backup Methods In addition to the NETFS internal backup method, ReaR supports several external backup methods. This means that the rescue system restores files from the backup automatically, but the backup creation cannot be triggered using ReaR. For a list and configuration options of the supported external backup methods, see the "Backup Software Integration" section of the rear(8) man page. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/ch34s02s02 |
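As a hedged illustration of how the backup method is selected (the server name is a placeholder and the method choice is an assumption, not part of the original section), /etc/rear/local.conf can either keep the internal NETFS method or delegate backup creation to an external tool:

# Internal method: ReaR both creates and restores the backup
OUTPUT=ISO
BACKUP=NETFS
BACKUP_URL=nfs://backup.example.com/export/rear

# External method: the rescue system restores automatically, but the
# backup itself is created by the external software (here, for example, TSM)
# BACKUP=TSM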
Chapter 4. An active/active Samba Server in a Red Hat High Availability Cluster (Red Hat Enterprise Linux 7.4 and Later) | Chapter 4. An active/active Samba Server in a Red Hat High Availability Cluster (Red Hat Enterprise Linux 7.4 and Later) As of the Red Hat Enterprise Linux 7.4 release, the Red Hat Resilient Storage Add-On provides support for running Samba in an active/active cluster configuration using Pacemaker. The Red Hat Resilient Storage Add-On includes the High Availability Add-On. Note For further information on support policies for Samba, see Support Policies for RHEL Resilient Storage - ctdb General Policies and Support Policies for RHEL Resilient Storage - Exporting gfs2 contents via other protocols on the Red Hat Customer Portal. This chapter describes how to configure a highly available active/active Samba server on a two-node Red Hat Enterprise Linux High Availability Add-On cluster using shared storage. The procedure uses pcs to configure Pacemaker cluster resources. This use case requires that your system include the following components: Two nodes, which will be used to create the cluster running Clustered Samba. In this example, the nodes used are z1.example.com and z2.example.com, which have IP addresses of 192.168.1.151 and 192.168.1.152. A power fencing device for each node of the cluster. This example uses two ports of the APC power switch with a host name of zapc.example.com. Shared storage for the nodes in the cluster, using iSCSI or Fibre Channel. Configuring a highly available active/active Samba server on a two-node Red Hat Enterprise Linux High Availability Add-On cluster requires that you perform the following steps. Create the cluster that will export the Samba shares and configure fencing for each node in the cluster, as described in Section 4.1, "Creating the Cluster". Configure a gfs2 file system mounted on the clustered LVM logical volume my_clv on the shared storage for the nodes in the cluster, as described in Section 4.2, "Configuring a Clustered LVM Volume with a GFS2 File System". Configure Samba on each node in the cluster, as described in Section 4.3, "Configuring Samba". Create the Samba cluster resources as described in Section 4.4, "Configuring the Samba Cluster Resources". Test the Samba share you have configured, as described in Section 4.5, "Testing the Resource Configuration". 4.1. Creating the Cluster Use the following procedure to install and create the cluster to use for the Samba service: Install the cluster software on nodes z1.example.com and z2.example.com, using the procedure provided in Section 1.1, "Cluster Software Installation". Create the two-node cluster that consists of z1.example.com and z2.example.com, using the procedure provided in Section 1.2, "Cluster Creation". As in that example procedure, this use case names the cluster my_cluster. Configure fencing devices for each node of the cluster, using the procedure provided in Section 1.3, "Fencing Configuration". This example configures fencing using two ports of the APC power switch with a host name of zapc.example.com.
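The pcs commands behind those three steps live in the referenced sections; as a condensed, hedged sketch using this chapter's example names (the fence agent choice and its credentials are assumptions for illustration, not prescriptions):

# On each node: install the cluster stack and start the pcs daemon
yum install pcs pacemaker fence-agents-all
systemctl start pcsd.service && systemctl enable pcsd.service

# From one node: authenticate the nodes, then create and start the cluster
pcs cluster auth z1.example.com z2.example.com
pcs cluster setup --start --name my_cluster z1.example.com z2.example.com

# Fence each node through its port on the APC switch
pcs stonith create myapc fence_apc_snmp ipaddr="zapc.example.com" \
  pcmk_host_map="z1.example.com:1;z2.example.com:2" login="apc" passwd="apc"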
3.4. Setting up an IdM Client Through Kickstart | 3.4. Setting up an IdM Client Through Kickstart A Kickstart enrollment automatically adds a new system to the IdM domain at the time Red Hat Enterprise Linux is installed. For details on Kickstart, see Kickstart Installations in the Installation Guide . Preparing for a Kickstart client installation includes these steps: Section 3.4.1, "Pre-creating a Client Host Entry on the IdM Server" Section 3.4.2, "Creating a Kickstart File for the Client" 3.4.1. Pre-creating a Client Host Entry on the IdM Server Log in as admin: Create the host entry on the IdM server, and set a temporary password for the entry: The password is used by Kickstart to authenticate during the client installation and expires after the first authentication attempt. After the client is successfully installed, it authenticates using its keytab. 3.4.2. Creating a Kickstart File for the Client A Kickstart file used to set up an IdM client must include the following: The ipa-client package in the list of packages to be installed: See Package Selection in the Installation Guide for details. Post-installation instructions that: ensure SSH keys are generated before enrollment runs the ipa-client-install utility, specifying: all required information to access and configure the IdM domain services the password which you set when pre-creating the client host on the IdM server, in Section 3.4.1, "Pre-creating a Client Host Entry on the IdM Server" For example: For a non-interactive installation, add also the --unattended option. To let the client installation script request a certificate for the machine: Add the --request-cert option to ipa-client-install . Set the system bus address to /dev/null for both the getcert and ipa-client-install utility in the kickstart chroot environment. To do this, add these lines to the post-installation instruction file before the ipa-client-install instruction: Note Red Hat recommends not to start the sshd service prior to the kickstart enrollment. While starting sshd before enrolling the client generates the SSH keys automatically, using the above script is the preferred solution. See Post-installation Script in the Installation Guide for details. For details on using Kickstart, see How Do You Perform a Kickstart Installation? in the Installation Guide . For examples of Kickstart files, see Sample Kickstart Configurations . | [
"kinit admin",
"ipa host-add client.example.com --password= secret",
"%packages @ X Window System @ Desktop @ Sound and Video ipa-client",
"%post --log=/root/ks-post.log Generate SSH keys to ensure that ipa-client-install uploads them to the IdM server /usr/sbin/sshd-keygen Run the client install script /usr/sbin/ipa-client-install --hostname= client.example.com --domain= EXAMPLE.COM --enable-dns-updates --mkhomedir -w secret --realm= EXAMPLE.COM --server= server.example.com",
"env DBUS_SYSTEM_BUS_ADDRESS=unix:path=/dev/null getcert list env DBUS_SYSTEM_BUS_ADDRESS=unix:path=/dev/null ipa-client-install"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/client-kickstart |
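After the kickstarted system boots, you can confirm from the server that the enrollment succeeded; a minimal hedged check, reusing the host name from the example above:

kinit admin
ipa host-show client.example.com
# 'Keytab: True' in the output indicates the one-time password was
# consumed and the client now authenticates with its keytab.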
13.2.30. Using NSCD with SSSD | 13.2.30. Using NSCD with SSSD SSSD is not designed to be used with the NSCD daemon. Even though SSSD does not directly conflict with NSCD, using both services can result in unexpected behavior, especially with how long entries are cached. The most common evidence of a problem is conflicts with NFS. When using Network Manager to manage network connections, it may take several minutes for the network interface to come up. During this time, various services attempt to start. If these services start before the network is up and the DNS servers are available, these services fail to identify the forward or reverse DNS entries they need. These services will read an incorrect or possibly empty resolv.conf file. This file is typically only read once, and so any changes made to this file are not automatically applied. This can cause NFS locking to fail on the machine where the NSCD service is running, unless that service is manually restarted. To avoid this problem, enable caching for hosts and services in the /etc/nscd.conf file and rely on the SSSD cache for the passwd , group , and netgroup entries. Change the /etc/nscd.conf file: With NSCD answering hosts requests, these entries will be cached by NSCD and returned by NSCD during the boot process. All other entries are handled by SSSD. | [
"enable-cache hosts yes enable-cache passwd no enable-cache group no enable-cache netgroup no"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/usingnscd-sssd |
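Because this guide targets Red Hat Enterprise Linux 6, the edited /etc/nscd.conf is applied with the SysV service tools; a minimal sketch:

service nscd restart
chkconfig nscd on   # keep NSCD (hosts caching only) enabled across reboots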
19.2. Viewing Password Policies | 19.2. Viewing Password Policies There can be multiple password policies configured in IdM. There is always a global policy, which is set when the server is created. Additional policies can be created for groups in IdM. The UI lists all of the group password policies and the global policy on the Password Policies page. Using the CLI, both global and group-level password policies can be viewed using the pwpolicy-show command. The CLI can also display the password policy in effect for a user. 19.2.1. Viewing the Global Password Policy The global password policy is created as part of the initial IdM server setup. This policy applies to every user until a group-level password policy supersedes it. The default settings for the global password policy are listed in Table 19.2, "Default Global Password Policy" . Table 19.2. Default Global Password Policy Attribute Value Max lifetime 90 (days) Min lifetime 1 (hour) History size 0 (unset) Character classes 0 (unset) Min length 8 Max failures 6 Failure reset interval 60 Lockout duration 600 19.2.1.1. With the Web UI Click the Policy tab, and then click the Password Policies subtab. All of the policies in the UI are listed by group. The global password policy is defined by the global_policy group. Click the group link. The global policy is displayed. 19.2.1.2. With the Command Line To view the global policy, simply run the pwpolicy-show command with no arguments: 19.2.2. Viewing Group-Level Password Policies 19.2.2.1. With the Web UI Click the Policy tab, and then click the Password Policies subtab. All of the policies in the UI are listed by group. Click the name of the group which is assigned the policy. The group policy is displayed. 19.2.2.2. With the Command Line For a group-level password policy, specify the group name with the command: 19.2.3. Viewing the Password Policy in Effect for a User A user may belong to multiple groups, each with their own separate password policies. These policies are not additive. Only one policy is in effect at a time and it applies to all password policy attributes. To see which policy is in effect for a specific user, the pwpolicy-show command can be run for a specific user. The results also show which group policy is in effect for that user. | [
"kinit admin ipa pwpolicy-show Group: global_policy Max lifetime (days): 90 Min lifetime (hours): 1 History size: 0 Character classes: 0 Min length: 8 Max failures: 6 Failure reset interval: 60 Lockout duration: 600",
"kinit admin ipa pwpolicy-show ipausers Group: ipausers Max lifetime (days): 120 Min lifetime (hours): 10 Min length: 10 Priority: 50",
"kinit admin ipa pwpolicy-show --user=jsmith Group: global_policy Max lifetime (days): 90 Min lifetime (hours): 1 History size: 0 Character classes: 0 Min length: 8 Max failures: 6 Failure reset interval: 60 Lockout duration: 600"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/viewing-the_password_policy |
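For context, a group policy such as the ipausers one shown above is created with the pwpolicy-add command; a hedged sketch reproducing those values (the priority decides which group policy wins for users who belong to several groups):

kinit admin
ipa pwpolicy-add ipausers --maxlife=120 --minlife=10 --minlength=10 --priority=50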
Chapter 2. Nagios Core installation and configuration | Chapter 2. Nagios Core installation and configuration As a storage administrator, you can install Nagios Core by downloading the Nagios Core source code; then, configuring, making, and installing it on the node that will run the Nagios Core instance. 2.1. Installing and configuring the Nagios Core server from source There is not a Red Hat Enterprise Linux package for the Nagios Core software, so the Nagios Core software must be compiled from source. Prerequisites Internet access. Root-level access to the Nagios Core host. Procedure Install the prerequisites: Example If you are using a firewall, open port 80 for httpd : Example Create a user and group for Nagios Core: Example Download the latest version of Nagios Core and Plug-ins: Example Run ./configure : Example Compile the Nagios Core source code: Example Install Nagios source code: Example Copy the event handlers and change their ownership: Example Run the pre-flight check: Example Make and install the Nagios Core plug-ins: Example Create a user for the Nagios Core user interface: Example Important If adding a user other than nagiosadmin , ensure the /usr/local/nagios/etc/cgi.cfg file gets updated with the user name too. Modify the /usr/local/nagios/etc/objects/contacts.cfg file with the user name, full name, and email address as needed. 2.2. Starting the Nagios Core service Start the Nagios Core service to monitor the Red Hat Ceph Storage cluster health. Prerequisites Root-level access to the Nagios Core host. Procedure Add Nagios Core and Apache as a service: Example Start the Nagios Core daemon and Apache: Example 2.3. Logging into the Nagios Core server Log in to the Nagios Core server to view the health status of the Red Hat Ceph Storage cluster. Prerequisites User name and password for the Nagios dashboard. Procedure With Nagios up and running, log in to the dashboard using the credentials of the default Nagios Core user: Syntax Replace IP_ADDRESS with the IP address of your Nagios Core server. | [
"dnf install -y httpd php php-cli gcc glibc glibc-common gd gd-devel net-snmp openssl openssl-devel wget unzip make",
"firewall-cmd --zone=public --add-port=80/tcp firewall-cmd --zone=public --add-port=80/tcp --permanent",
"useradd nagios passwd nagios groupadd nagcmd usermod -a -G nagcmd nagios usermod -a -G nagcmd apache",
"wget --inet4-only https://assets.nagios.com/downloads/nagioscore/releases/nagios-4.4.5.tar.gz wget --inet4-only http://www.nagios-plugins.org/download/nagios-plugins-2.3.3.tar.gz tar zxf nagios-4.4.5.tar.gz tar zxf nagios-plugins-2.3.3.tar.gz cd nagios-4.4.5",
"./configure --with-command-group=nagcmd",
"make all",
"make install make install-init make install-config make install-commandmode make install-webconf",
"cp -R contrib/eventhandlers/ /usr/local/nagios/libexec/ chown -R nagios:nagios /usr/local/nagios/libexec/eventhandlers",
"/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg",
"cd ../nagios-plugins-2.3.3 ./configure --with-nagios-user=nagios --with-nagios-group=nagios make make install",
"htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin",
"systemctl enable nagios systemctl enable httpd",
"systemctl start nagios systemctl start httpd",
"http:// IP_ADDRESS /nagios"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/monitoring_ceph_with_nagios_guide/nagios-core-installation-and-configuration |
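If you later add a web UI user besides nagiosadmin, as the note above warns, create the account without the -c flag (which would overwrite the password file) and grant it rights in cgi.cfg; a hedged sketch with an assumed user name:

htpasswd /usr/local/nagios/etc/htpasswd.users operator1
# Then extend /usr/local/nagios/etc/cgi.cfg, for example:
# authorized_for_all_services=nagiosadmin,operator1
# authorized_for_all_hosts=nagiosadmin,operator1
systemctl restart nagios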
Chapter 52. Boon DataFormat | Chapter 52. Boon DataFormat Available as of Camel version 2.16 Boon is a Data Format which uses the Boon JSON marshalling library to unmarshal a JSON payload into Java objects or to marshal Java objects into a JSON payload. Boon aims to be a simpler and faster parser than other common parsers currently used. 52.1. Options The Boon dataformat supports 3 options, which are listed below. Name Default Java Type Description unmarshalTypeName String Class name of the java type to use when unmarshalling useList false Boolean To unmarshal to a List of Map or a List of Pojo. contentTypeHeader false Boolean Whether the data format should set the Content-Type header with the type from the data format if the data format is capable of doing so. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSON etc. 52.2. Spring Boot Auto-Configuration The component supports 4 options, which are listed below. Name Description Default Type camel.dataformat.boon.content-type-header Whether the data format should set the Content-Type header with the type from the data format if the data format is capable of doing so. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSON etc. false Boolean camel.dataformat.boon.enabled Enable boon dataformat true Boolean camel.dataformat.boon.unmarshal-type-name Class name of the java type to use when unmarshalling String camel.dataformat.boon.use-list To unmarshal to a List of Map or a List of Pojo. false Boolean 52.3. Using the Java DSL DataFormat boonDataFormat = new BoonDataFormat("com.acme.model.Person"); from("activemq:My.Queue") .unmarshal(boonDataFormat) .to("mqseries:Another.Queue"); 52.4. Using Blueprint XML <bean id="boonDataFormat" class="org.apache.camel.component.boon.BoonDataFormat"> <argument value="com.acme.model.Person"/> </bean> <camelContext id="camel" xmlns="http://camel.apache.org/schema/blueprint"> <route> <from uri="activemq:My.Queue"/> <unmarshal ref="boonDataFormat"/> <to uri="mqseries:Another.Queue"/> </route> </camelContext> 52.5. Dependencies <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-boon</artifactId> <version>x.x.x</version> </dependency>
"DataFormat boonDataFormat = new BoonDataFormat(\"com.acme.model.Person\"); from(\"activemq:My.Queue\") .unmarshal(boonDataFormat) .to(\"mqseries:Another.Queue\");",
"<bean id=\"boonDataFormat\" class=\"org.apache.camel.component.boon.BoonDataFormat\"> <argument value=\"com.acme.model.Person\"/> </bean> <camelContext id=\"camel\" xmlns=\"http://camel.apache.org/schema/blueprint\"> <route> <from uri=\"activemq:My.Queue\"/> <unmarshal ref=\"boonDataFormat\"/> <to uri=\"mqseries:Another.Queue\"/> </route> </camelContext>",
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-boon</artifactId> <version>x.x.x</version> </dependency>"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/boon-dataformat |
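The examples above unmarshal into com.acme.model.Person, which the chapter never shows; any JavaBean whose properties match the JSON field names will do. A minimal hedged sketch (the field names are assumptions):

package com.acme.model;

public class Person {
    private String firstName;
    private String lastName;

    // Boon binds JSON fields to these accessors by name
    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }
    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }
}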
Chapter 7. Advisories related to this release | Chapter 7. Advisories related to this release The following advisory has been issued to document bug fixes and CVE fixes included in the Cryostat 2.3 release: RHSA-2023:3167 RHBA-2023:4786 Revised on 2023-08-29 14:37:02 UTC | null | https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/release_notes_for_the_red_hat_build_of_cryostat_2.3/cryostat-advisories-2-3_cryostat |
33.7. Managing Reverse DNS Zones | 33.7. Managing Reverse DNS Zones A reverse DNS zone can be identified in the following two ways: By the zone name, in the format reverse_ipv4_address.in-addr.arpa or reverse_ipv6_address.ip6.arpa. The reverse IP address is created by reversing the order of the components of the IP address. For example, if the IPv4 network is 192.0.2.0/24, the reverse zone name is 2.0.192.in-addr.arpa. (with the trailing period). By the network address, in the format network_ip_address/subnet_mask_bit_count. To create the reverse zone by its IP network, set the network information to the (forward-style) IP address, with the subnet mask bit count. The bit count must be a multiple of eight for IPv4 addresses or a multiple of four for IPv6 addresses. Adding a Reverse DNS Zone in the Web UI Open the Network Services tab, and select the DNS subtab, followed by the DNS Zones section. Figure 33.30. DNS Zone Management Click Add at the top of the list of all zones. Figure 33.31. Adding a Reverse DNS Zone Fill in the zone name or the reverse zone IP network. For example, to add a reverse DNS zone by the zone name: Figure 33.32. Creating a Reverse Zone by Name Alternatively, to add a reverse DNS zone by the reverse zone IP network: Figure 33.33. Creating a Reverse Zone by IP Network The validator for the Reverse zone IP network field warns you about an invalid network address during typing. The warning will disappear once you enter the full network address. Click Add to confirm the new reverse zone. Adding a Reverse DNS Zone from the Command Line To create a reverse DNS zone from the command line, use the ipa dnszone-add command. For example, to create the reverse zone by the zone name: Alternatively, to create the reverse zone by the IP network: Other Management Operations for Reverse DNS Zones Section 33.4, "Managing Master DNS Zones" describes other zone management operations, some of which are also applicable to reverse DNS zone management, such as editing or disabling and enabling DNS zones.
"[user@server]USD ipa dnszone-add 2.0.192.in-addr.arpa.",
"[user@server ~]USD ipa dnszone-add --name-from-ip= 192.0.2.0/24"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/managing-reverse-dns-zones |
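Once a reverse zone exists, each host still needs a PTR record inside it; a hedged example for the zone created above (the record name is the final octet of the host's address, and the target must be a fully qualified name ending in a period):

[user@server ~]$ ipa dnsrecord-add 2.0.192.in-addr.arpa. 1 --ptr-rec=server.example.com.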
Chapter 5. OAuthClient [oauth.openshift.io/v1] | Chapter 5. OAuthClient [oauth.openshift.io/v1] Description OAuthClient describes an OAuth client Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 5.1. Specification Property Type Description accessTokenInactivityTimeoutSeconds integer AccessTokenInactivityTimeoutSeconds overrides the default token inactivity timeout for tokens granted to this client. The value represents the maximum amount of time that can occur between consecutive uses of the token. Tokens become invalid if they are not used within this temporal window. The user will need to acquire a new token to regain access once a token times out. This value needs to be set only if the default set in configuration is not appropriate for this client. Valid values are: - 0: Tokens for this client never time out - X: Tokens time out if there is no activity for X seconds The current minimum allowed value for X is 300 (5 minutes) WARNING: existing tokens' timeout will not be affected (lowered) by changing this value accessTokenMaxAgeSeconds integer AccessTokenMaxAgeSeconds overrides the default access token max age for tokens granted to this client. 0 means no expiration. additionalSecrets array (string) AdditionalSecrets holds other secrets that may be used to identify the client. This is useful for rotation and for service account token validation apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources grantMethod string GrantMethod is a required field which determines how to handle grants for this client. Valid grant handling methods are: - auto: always approves grant requests, useful for trusted clients - prompt: prompts the end user for approval of grant requests, useful for third-party clients kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata redirectURIs array (string) RedirectURIs is the valid redirection URIs associated with a client respondWithChallenges boolean RespondWithChallenges indicates whether the client wants authentication needed responses made in the form of challenges instead of redirects scopeRestrictions array ScopeRestrictions describes which scopes this client can request. Each requested scope is checked against each restriction. If any restriction matches, then the scope is allowed. If no restriction matches, then the scope is denied. scopeRestrictions[] object ScopeRestriction describe one restriction on scopes. Exactly one option must be non-nil. secret string Secret is the unique secret associated with a client 5.1.1. .scopeRestrictions Description ScopeRestrictions describes which scopes this client can request. Each requested scope is checked against each restriction. If any restriction matches, then the scope is allowed. If no restriction matches, then the scope is denied. Type array 5.1.2. 
.scopeRestrictions[] Description ScopeRestriction describes one restriction on scopes. Exactly one option must be non-nil. Type object Property Type Description clusterRole object ClusterRoleScopeRestriction describes restrictions on cluster role scopes literals array (string) ExactValues means the scope has to match a particular set of strings exactly 5.1.3. .scopeRestrictions[].clusterRole Description ClusterRoleScopeRestriction describes restrictions on cluster role scopes Type object Required roleNames namespaces allowEscalation Property Type Description allowEscalation boolean AllowEscalation indicates whether you can request roles and their escalating resources namespaces array (string) Namespaces is the list of namespaces that can be referenced. * means any of them (including *) roleNames array (string) RoleNames is the list of cluster roles that can be referenced. * means anything 5.2. API endpoints The following API endpoints are available: /apis/oauth.openshift.io/v1/oauthclients DELETE : delete collection of OAuthClient GET : list or watch objects of kind OAuthClient POST : create an OAuthClient /apis/oauth.openshift.io/v1/watch/oauthclients GET : watch individual changes to a list of OAuthClient. deprecated: use the 'watch' parameter with a list operation instead. /apis/oauth.openshift.io/v1/oauthclients/{name} DELETE : delete an OAuthClient GET : read the specified OAuthClient PATCH : partially update the specified OAuthClient PUT : replace the specified OAuthClient /apis/oauth.openshift.io/v1/watch/oauthclients/{name} GET : watch changes to an object of kind OAuthClient. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 5.2.1. /apis/oauth.openshift.io/v1/oauthclients Table 5.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of OAuthClient Table 5.2. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request.
Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . 
In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 5.3. Body parameters Parameter Type Description body DeleteOptions schema Table 5.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind OAuthClient Table 5.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. 
Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset. resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset. sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion" and the bookmark event is sent when the state is synced to a resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. - resourceVersionMatch set to any other value or unset: an Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity.
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 5.6. HTTP responses HTTP code Response body 200 - OK OAuthClientList schema 401 - Unauthorized Empty HTTP method POST Description create an OAuthClient Table 5.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.8. Body parameters Parameter Type Description body OAuthClient schema Table 5.9. HTTP responses HTTP code Response body 200 - OK OAuthClient schema 201 - Created OAuthClient schema 202 - Accepted OAuthClient schema 401 - Unauthorized Empty 5.2.2. /apis/oauth.openshift.io/v1/watch/oauthclients Table 5.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field.
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset. resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset. sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set.
The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion" and the bookmark event is sent when the state is synced to a resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. - resourceVersionMatch set to any other value or unset: an Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of OAuthClient. deprecated: use the 'watch' parameter with a list operation instead. Table 5.11. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 5.2.3. /apis/oauth.openshift.io/v1/oauthclients/{name} Table 5.12. Global path parameters Parameter Type Description name string name of the OAuthClient Table 5.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an OAuthClient Table 5.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be a non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 5.15. Body parameters Parameter Type Description body DeleteOptions schema Table 5.16. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified OAuthClient Table 5.17. HTTP responses HTTP code Response body 200 - OK OAuthClient schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OAuthClient Table 5.18.
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 5.19. Body parameters Parameter Type Description body Patch schema Table 5.20. HTTP responses HTTP code Response body 200 - OK OAuthClient schema 201 - Created OAuthClient schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OAuthClient Table 5.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields.
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.22. Body parameters Parameter Type Description body OAuthClient schema Table 5.23. HTTP responses HTTP code Response body 200 - OK OAuthClient schema 201 - Created OAuthClient schema 401 - Unauthorized Empty 5.2.4. /apis/oauth.openshift.io/v1/watch/oauthclients/{name} Table 5.24. Global path parameters Parameter Type Description name string name of the OAuthClient Table 5.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true.
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset. resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset. sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion" and the bookmark event is sent when the state is synced to a resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. - resourceVersionMatch set to any other value or unset: an Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind OAuthClient. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 5.26. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/oauth_apis/oauthclient-oauth-openshift-io-v1
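As a minimal sketch of how the endpoints above can be exercised with curl (the API server address, bearer token, and client name are placeholder assumptions, not values taken from this reference):

# List OAuthClient objects (GET /apis/oauth.openshift.io/v1/oauthclients)
curl -k -H "Authorization: Bearer <token>" \
  https://<api-server>:6443/apis/oauth.openshift.io/v1/oauthclients

# Delete a single OAuthClient by name (DELETE .../oauthclients/{name})
curl -k -X DELETE -H "Authorization: Bearer <token>" \
  https://<api-server>:6443/apis/oauth.openshift.io/v1/oauthclients/<name>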
Chapter 12. Integrating Applications | Chapter 12. Integrating Applications When integrating an application with the GNOME Desktop, the system administrator usually performs tasks related to customizing the Applications menu structure, and MIME types, such as: Add or modify a menu item for the application, or customize the Applications menu structure by creating or modifying submenus. See Section 12.1, "Customizing Menus" for more information on menu customization. Customize the default favorite applications visible on the GNOME Shell dash in the Activities Overview . See Section 12.2, "Customizing Default Favorite Applications" for more information on how to do that. Add or modify a MIME type for the application, and associate the application with a specific MIME type. See Section 12.3, "Configuring File Associations" for more information on configuring MIME types. 12.1. Customizing Menus The GNOME menu system is based on the freedesktop.org Desktop Menu Specification and consists of three major sets of configuration and data files: Desktop Entry Files ( .desktop ) The .desktop files provide data about each menu item such as its name, command to run, and its icon. The .desktop entry files also specify the location of the menu item in the menu hierarchy, and keywords used for application search in the Activities Overview . The system .desktop files are located in the /usr/share/applications/ directory. User-specific .desktop files are located in the ~/.local/share/applications/ directory. The following is a sample .desktop file named ~/.local/share/applications/myapplication1.desktop : The file above specifies the application's name ( My Application 1 ), the application's icon ( myapplication1 ), and the command to run the application ( myapplication1 ). It also places the application in a specified category ( Network;WebBrowser; ), and associates the application with the application/x-newtype MIME type. Menu Definition Files ( .menu ) The .menu files are XML configuration files that specify the order, hierarchy, and merging of both menus and menu items. The machine-wide .menu files are located in the /etc/xdg/menus/ directory. User-specific .menu files are located in the ~/.config/menus/ directory and can be used to override the values specified in the machine-wide .menu files. In particular, the /etc/xdg/menus/applications.menu file contains the definition of the Applications menu layout. Directory Entry Files ( .directory ) The .directory files provide data about each menu such as its name, and are located in the /usr/share/desktop-directories/ directory. Getting More Information For more information describing the Desktop Entry Files, see the Desktop Entry Specification located at the freedesktop.org website: http://freedesktop.org/wiki/Specifications/desktop-entry-spec For detailed information describing the implementation of the GNOME menus system, see the Desktop Menu Specification located at the freedesktop.org website: http://standards.freedesktop.org/menu-spec/latest 12.1.1. Removing a Menu Item for Individual Users The Applications menu customization for a given user is by default stored in the ~/.config/menus/gnome-applications.menu definition file. The location of that file can be overridden by setting the $XDG_DATA_HOME environment variable. To override the Applications menu defaults, you first need to create a gnome-applications.menu file.
Note that removing an item from the Applications menu and its submenus also removes it from the Applications view in the Activities Overview , thus preventing the user from searching for that item from within the Activities Overview . Procedure 12.1. Example: Remove the Calculator menu item from the Accessories submenu Consult the contents of the /usr/share/applications/ directory and determine a .desktop file that corresponds to the menu item you want to remove: As shown above, the Calculator menu item corresponds to the /usr/share/applications/gcalctool.desktop file. Create a ~/.config/menus/gnome-applications.menu file: <!DOCTYPE Menu PUBLIC "-//freedesktop//DTD Menu 1.0//EN" "http://www.freedesktop.org/standards/menu-spec/1.0/menu.dtd"> <Menu> <Name>Applications</Name> <MergeFile type="parent">/etc/xdg/menus/gnome-applications.menu</MergeFile> <!-- Removes the Calculator from the Accessories submenu --> <Menu> <Name>Accessories</Name> <Exclude> <Filename>gcalctool.desktop</Filename> </Exclude> </Menu> <!-- END of Calculator removal content --> </Menu> As shown above, the file contains a <Menu> section that specifies the name of the submenu ( Accessories ), the name of the .desktop file ( gcalctool.desktop ), and includes the <Exclude> element. 12.1.2. Removing a Menu Item for All Users The Applications menu customization for all users is by default stored in the /etc/xdg/menus/applications.menu definition file. The location of that file can be overridden by setting the $XDG_CONFIG_DIRS environment variable. To override the Applications menu defaults, you need to edit that .menu file. Note that removing an item from the Applications menu and its submenus also removes it from the Applications view in the Activities Overview , thus preventing the user from searching for that item from within the Activities Overview . Procedure 12.2. Example: Remove the Calculator menu item from the Accessories submenu Consult the contents of the /usr/share/applications/ directory and determine a .desktop file that corresponds to the menu item you want to remove: As shown above, the Calculator menu item corresponds to the /usr/share/applications/gcalctool.desktop file. Edit the /etc/xdg/menus/applications.menu file and add a new <Menu> section before the final </Menu> tag at the end of that .menu file using the <Exclude> element as shown below: <!-- Removes the Calculator from the Accessories submenu --> <Menu> <Name>Accessories</Name> <Exclude> <Filename>gcalctool.desktop</Filename> </Exclude> </Menu> <!-- END of Calculator removal content --> </Menu> <!-- End Applications --> 12.1.3. Removing a Submenu for Individual Users The Applications menu customization for a given user is by default stored in the ~/.config/menus/gnome-applications.menu definition file. The location of that file can be overridden by setting the $XDG_DATA_HOME environment variable. To override the Applications menu defaults, you first need to create a gnome-applications.menu file. Note that removing a submenu from the Applications menu also removes all menu items contained within that submenu from the Applications view in the Activities Overview , thus preventing the user from searching for those items from within the Activities Overview . Example 12.1.
Remove the System Tools submenu from the Applications menu Create a ~/.config/menus/gnome-applications.menu file: <!DOCTYPE Menu PUBLIC "-//freedesktop//DTD Menu 1.0//EN" "http://www.freedesktop.org/standards/menu-spec/1.0/menu.dtd"> <Menu> <Name>Applications</Name> <MergeFile type="parent">/etc/xdg/menus/gnome-applications.menu</MergeFile> <!-- Removes the System Tools submenu from the Applications menu--> <Menu> <Name>System Tools</Name> <Deleted/> </Menu> <!-- END of System Tools removal content --> </Menu> As shown above, the file contains a <Menu> section that specifies the name of the submenu ( System Tools ), and includes the <Deleted/> tag. 12.1.4. Removing a Submenu for All Users The Applications menu customization for all users is by default stored in the /etc/xdg/menus/applications.menu definition file. The location of that file can be overridden by setting the $XDG_CONFIG_DIRS environment variable. To override the Applications menu defaults, you need to edit that .menu file. Note that removing a submenu from the Applications menu also removes all menu items contained within that submenu from the Applications view in the Activities Overview , thus preventing the user from searching for those items from within the Activities Overview . Example 12.2. Remove the System Tools submenu from the Applications menu Edit a /etc/xdg/menus/applications.menu file and add a new <Menu> section before the final </Menu> tag at the end of that .menu file using the <Deleted/> element as shown below: <!-- Removes the System Tools submenu from the Applications menu--> <Menu> <Name>System Tools</Name> <Deleted/> </Menu> <!-- END of System Tools removal content --> </Menu> | [
"[Desktop Entry] Type=Application Name= My Application 1 Icon= myapplication1 Exec= myapplication1 Categories= Network;WebBrowser; MimeType= application/x-newtype",
"grep -r \"Name= Calculator \" /usr/share/applications/ /usr/share/applications/gcalctool.desktop:Name=Calculator",
"<!DOCTYPE Menu PUBLIC \"-//freedesktop//DTD Menu 1.0//EN\" \"http://www.freedesktop.org/standards/menu-spec/1.0/menu.dtd\"> <Menu> <Name>Applications</Name> <MergeFile type=\"parent\">/etc/xdg/menus/gnome-applications.menu</MergeFile> <!-- Removes the Calculator from the Accessories submenu --> <Menu> <Name>Accessories</Name> <Exclude> <Filename>gcalctool.desktop</Filename> </Exclude> </Menu> <!-- END of Calculator removal content --> </Menu>",
"grep -r \"Name= Calculator \" /usr/share/applications/ /usr/share/applications/gcalctool.desktop:Name=Calculator",
"<!-- Removes the Calculator from the Accessories submenu --> <Menu> <Name>Accessories</Name> <Exclude> <Filename>gcalctool.desktop</Filename> </Exclude> </Menu> <!-- END of Calculator removal content --> </Menu> <!-- End Applications -->",
"<!DOCTYPE Menu PUBLIC \"-//freedesktop//DTD Menu 1.0//EN\" \"http://www.freedesktop.org/standards/menu-spec/1.0/menu.dtd\"> <Menu> <Name>Applications</Name> <MergeFile type=\"parent\">/etc/xdg/menus/gnome-applications.menu</MergeFile> <!-- Removes the System Tools submenu from the Applications menu--> <Menu> <Name>System Tools</Name> <Deleted/> </Menu> <!-- END of System Tools removal content --> </Menu>",
"<!-- Removes the System Tools submenu from the Applications menu--> <Menu> <Name>System Tools</Name> <Deleted/> </Menu> <!-- END of System Tools removal content --> </Menu>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/desktop_migration_and_administration_guide/installing-integrating-applications |
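When editing .desktop files by hand as described in the chapter above, the result can be checked for specification violations. A minimal sketch, assuming the desktop-file-utils package is installed; the path is the sample file from Section 12.1:

# Validate a desktop entry file against the Desktop Entry Specification
desktop-file-validate ~/.local/share/applications/myapplication1.desktop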
18.3.3. Installation using VNC | 18.3.3. Installation using VNC Using VNC is recommended for slow or long-distance network connections. To use VNC, disable X11 forwarding in your SSH client prior to connecting to the temporary Linux installation system. The loader will then provide a choice between text-mode and VNC; choose VNC here. Alternatively, provide the vnc variable and optionally the vncpassword variable in your parameter file (refer to Section 26.4, "VNC and X11 Parameters" for details). A message on the workstation SSH terminal prompts you to start the VNC client viewer and provides details about the VNC display specifications. Enter the specifications from the SSH terminal into the VNC client viewer and connect to the temporary Linux installation system to begin the installation. Refer to Chapter 31, Installing Through VNC for details. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/installation_procedure_overview-s390-gui-vnc |
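To make the VNC choice concrete, the following is a minimal sketch of the relevant parameter-file entries; the password value is a placeholder, and Section 26.4, "VNC and X11 Parameters" remains the authoritative reference:

vnc
vncpassword=<password>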
Chapter 26. GroupService | Chapter 26. GroupService 26.1. GetGroup GET /v1/group 26.1.1. Description 26.1.2. Parameters 26.1.2.1. Query Parameters Name Description Required Default Pattern id Unique identifier for group properties and respectively the group. - null traits.mutabilityMode - ALLOW_MUTATE traits.visibility - VISIBLE traits.origin - IMPERATIVE authProviderId - null key - null value - null 26.1.3. Return Type StorageGroup 26.1.4. Content Type application/json 26.1.5. Responses Table 26.1. HTTP Response Codes Code Message Datatype 200 A successful response. StorageGroup 0 An unexpected error response. RuntimeError 26.1.6. Samples 26.1.7. Common object reference 26.1.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 26.1.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 26.1.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 26.1.7.3. 
StorageGroup Group is a GroupProperties : Role mapping. Field Name Required Nullable Type Description Format props StorageGroupProperties roleName String This is the name of the role that will apply to users in this group. 26.1.7.4. StorageGroupProperties GroupProperties defines the properties of a group. Groups apply to users when their properties match. For instance: - If GroupProperties has only an auth_provider_id, then that group applies to all users logged in with that auth provider. - If GroupProperties in addition has a claim key, then it applies to all users with that auth provider and the claim key, etc. Note: Changes to GroupProperties may require changes to v1.DeleteGroupRequest. Field Name Required Nullable Type Description Format id String Unique identifier for group properties and respectively the group. traits StorageTraits authProviderId String key String value String 26.1.7.5. StorageTraits Field Name Required Nullable Type Description Format mutabilityMode TraitsMutabilityMode ALLOW_MUTATE, ALLOW_MUTATE_FORCED, visibility TraitsVisibility VISIBLE, HIDDEN, origin TraitsOrigin IMPERATIVE, DEFAULT, DECLARATIVE, DECLARATIVE_ORPHANED, 26.1.7.6. TraitsMutabilityMode EXPERIMENTAL. NOTE: Please refrain from using MutabilityMode for the time being. It will be replaced in the future (ROX-14276). MutabilityMode specifies whether and how an object can be modified. Default is ALLOW_MUTATE and means there are no modification restrictions; this is equivalent to the absence of MutabilityMode specification. ALLOW_MUTATE_FORCED forbids all modifying operations except object removal with force bit on. Be careful when changing the state of this field. For example, modifying an object from ALLOW_MUTATE to ALLOW_MUTATE_FORCED is allowed but will prohibit any further changes to it, including modifying it back to ALLOW_MUTATE. Enum Values ALLOW_MUTATE ALLOW_MUTATE_FORCED 26.1.7.7. TraitsOrigin Origin specifies the origin of an object. Objects can have four different origins: - IMPERATIVE: the object was created via the API. This is assumed by default. - DEFAULT: the object is a default object, such as default roles, access scopes, etc. - DECLARATIVE: the object is created via declarative configuration. - DECLARATIVE_ORPHANED: the object is created via declarative configuration and then unsuccessfully deleted (for example, because it is referenced by another object). Based on the origin, different rules apply to the objects. Objects with the DECLARATIVE origin are not allowed to be modified via API, only via declarative configuration. Additionally, they may not reference objects with the IMPERATIVE origin. Objects with the DEFAULT origin are not allowed to be modified via either API or declarative configuration. They may be referenced by all other objects. Objects with the IMPERATIVE origin are allowed to be modified via API, but not via declarative configuration. They may reference all other objects. Objects with the DECLARATIVE_ORPHANED origin are not allowed to be modified via either API or declarative configuration. A DECLARATIVE_ORPHANED resource can become DECLARATIVE again if it is redefined in declarative configuration. Objects with this origin will be cleaned up from the system immediately after they are not referenced by other resources anymore. They may be referenced by all other objects. Enum Values IMPERATIVE DEFAULT DECLARATIVE DECLARATIVE_ORPHANED 26.1.7.8. TraitsVisibility EXPERIMENTAL. visibility allows to specify whether the object should be visible for certain APIs.
Enum Values VISIBLE HIDDEN 26.2. BatchUpdate POST /v1/groupsbatch 26.2.1. Description 26.2.2. Parameters 26.2.2.1. Body Parameter Name Description Required Default Pattern body V1GroupBatchUpdateRequest X 26.2.3. Return Type Object 26.2.4. Content Type application/json 26.2.5. Responses Table 26.2. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. RuntimeError 26.2.6. Samples 26.2.7. Common object reference 26.2.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 26.2.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 26.2.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 26.2.7.3. StorageGroup Group is a GroupProperties : Role mapping. Field Name Required Nullable Type Description Format props StorageGroupProperties roleName String This is the name of the role that will apply to users in this group. 26.2.7.4. 
StorageGroupProperties GroupProperties defines the properties of a group. Groups apply to users when their properties match. For instance: - If GroupProperties has only an auth_provider_id, then that group applies to all users logged in with that auth provider. - If GroupProperties in addition has a claim key, then it applies to all users with that auth provider and the claim key, etc. Note: Changes to GroupProperties may require changes to v1.DeleteGroupRequest. Field Name Required Nullable Type Description Format id String Unique identifier for group properties and respectively the group. traits StorageTraits authProviderId String key String value String 26.2.7.5. StorageTraits Field Name Required Nullable Type Description Format mutabilityMode TraitsMutabilityMode ALLOW_MUTATE, ALLOW_MUTATE_FORCED, visibility TraitsVisibility VISIBLE, HIDDEN, origin TraitsOrigin IMPERATIVE, DEFAULT, DECLARATIVE, DECLARATIVE_ORPHANED, 26.2.7.6. TraitsMutabilityMode EXPERIMENTAL. NOTE: Please refrain from using MutabilityMode for the time being. It will be replaced in the future (ROX-14276). MutabilityMode specifies whether and how an object can be modified. Default is ALLOW_MUTATE and means there are no modification restrictions; this is equivalent to the absence of MutabilityMode specification. ALLOW_MUTATE_FORCED forbids all modifying operations except object removal with force bit on. Be careful when changing the state of this field. For example, modifying an object from ALLOW_MUTATE to ALLOW_MUTATE_FORCED is allowed but will prohibit any further changes to it, including modifying it back to ALLOW_MUTATE. Enum Values ALLOW_MUTATE ALLOW_MUTATE_FORCED 26.2.7.7. TraitsOrigin Origin specifies the origin of an object. Objects can have four different origins: - IMPERATIVE: the object was created via the API. This is assumed by default. - DEFAULT: the object is a default object, such as default roles, access scopes, etc. - DECLARATIVE: the object is created via declarative configuration. - DECLARATIVE_ORPHANED: the object is created via declarative configuration and then unsuccessfully deleted (for example, because it is referenced by another object). Based on the origin, different rules apply to the objects. Objects with the DECLARATIVE origin are not allowed to be modified via API, only via declarative configuration. Additionally, they may not reference objects with the IMPERATIVE origin. Objects with the DEFAULT origin are not allowed to be modified via either API or declarative configuration. They may be referenced by all other objects. Objects with the IMPERATIVE origin are allowed to be modified via API, but not via declarative configuration. They may reference all other objects. Objects with the DECLARATIVE_ORPHANED origin are not allowed to be modified via either API or declarative configuration. A DECLARATIVE_ORPHANED resource can become DECLARATIVE again if it is redefined in declarative configuration. Objects with this origin will be cleaned up from the system immediately after they are not referenced by other resources anymore. They may be referenced by all other objects. Enum Values IMPERATIVE DEFAULT DECLARATIVE DECLARATIVE_ORPHANED 26.2.7.8. TraitsVisibility EXPERIMENTAL. visibility allows to specify whether the object should be visible for certain APIs. Enum Values VISIBLE HIDDEN 26.2.7.9. V1GroupBatchUpdateRequest Field Name Required Nullable Type Description Format previousGroups List of StorageGroup previous_groups are the groups expected to be present in the store.
Performs a diff on the GroupProperties present in previous_groups and required_groups: 1) if in previous_groups but not required_groups, it gets deleted. 2) if in previous_groups and required_groups, it gets updated. 3) if not in previous_groups but in required_groups, it gets added. requiredGroups List of StorageGroup Required groups are the groups we want to mutate the groups into. force Boolean 26.3. DeleteGroup DELETE /v1/groups 26.3.1. Description 26.3.2. Parameters 26.3.2.1. Query Parameters Name Description Required Default Pattern authProviderId We copy over parameters from storage.GroupProperties for seamless HTTP API migration. - null key - null value - null id - null force - null 26.3.3. Return Type Object 26.3.4. Content Type application/json 26.3.5. Responses Table 26.3. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. RuntimeError 26.3.6. Samples 26.3.7. Common object reference 26.3.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 26.3.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. 
Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 26.3.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 26.4. GetGroups GET /v1/groups 26.4.1. Description 26.4.2. Parameters 26.4.2.1. Query Parameters Name Description Required Default Pattern authProviderId - null key - null value - null id - null 26.4.3. Return Type V1GetGroupsResponse 26.4.4. Content Type application/json 26.4.5. Responses Table 26.4. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetGroupsResponse 0 An unexpected error response. RuntimeError 26.4.6. Samples 26.4.7. Common object reference 26.4.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 26.4.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 26.4.7.2. 
RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 26.4.7.3. StorageGroup Group is a GroupProperties : Role mapping. Field Name Required Nullable Type Description Format props StorageGroupProperties roleName String This is the name of the role that will apply to users in this group. 26.4.7.4. StorageGroupProperties GroupProperties defines the properties of a group. Groups apply to users when their properties match. For instance: - If GroupProperties has only an auth_provider_id, then that group applies to all users logged in with that auth provider. - If GroupProperties in addition has a claim key, then it applies to all users with that auth provider and the claim key, etc. Note: Changes to GroupProperties may require changes to v1.DeleteGroupRequest. Field Name Required Nullable Type Description Format id String Unique identifier for group properties and respectively the group. traits StorageTraits authProviderId String key String value String 26.4.7.5. StorageTraits Field Name Required Nullable Type Description Format mutabilityMode TraitsMutabilityMode ALLOW_MUTATE, ALLOW_MUTATE_FORCED, visibility TraitsVisibility VISIBLE, HIDDEN, origin TraitsOrigin IMPERATIVE, DEFAULT, DECLARATIVE, DECLARATIVE_ORPHANED, 26.4.7.6. TraitsMutabilityMode EXPERIMENTAL. NOTE: Please refrain from using MutabilityMode for the time being. It will be replaced in the future (ROX-14276). MutabilityMode specifies whether and how an object can be modified. Default is ALLOW_MUTATE and means there are no modification restrictions; this is equivalent to the absence of MutabilityMode specification. ALLOW_MUTATE_FORCED forbids all modifying operations except object removal with force bit on. Be careful when changing the state of this field. For example, modifying an object from ALLOW_MUTATE to ALLOW_MUTATE_FORCED is allowed but will prohibit any further changes to it, including modifying it back to ALLOW_MUTATE. Enum Values ALLOW_MUTATE ALLOW_MUTATE_FORCED 26.4.7.7. TraitsOrigin Origin specifies the origin of an object. Objects can have four different origins: - IMPERATIVE: the object was created via the API. This is assumed by default. - DEFAULT: the object is a default object, such as default roles, access scopes, etc. - DECLARATIVE: the object is created via declarative configuration. - DECLARATIVE_ORPHANED: the object is created via declarative configuration and then unsuccessfully deleted (for example, because it is referenced by another object). Based on the origin, different rules apply to the objects. Objects with the DECLARATIVE origin are not allowed to be modified via API, only via declarative configuration. Additionally, they may not reference objects with the IMPERATIVE origin. Objects with the DEFAULT origin are not allowed to be modified via either API or declarative configuration. They may be referenced by all other objects. Objects with the IMPERATIVE origin are allowed to be modified via API, but not via declarative configuration. They may reference all other objects. Objects with the DECLARATIVE_ORPHANED origin are not allowed to be modified via either API or declarative configuration. A DECLARATIVE_ORPHANED resource can become DECLARATIVE again if it is redefined in declarative configuration. Objects with this origin will be cleaned up from the system immediately after they are not referenced by other resources anymore. They may be referenced by all other objects.
Enum Values IMPERATIVE DEFAULT DECLARATIVE DECLARATIVE_ORPHANED 26.4.7.8. TraitsVisibility EXPERIMENTAL. visibility allows you to specify whether the object should be visible for certain APIs. Enum Values VISIBLE HIDDEN 26.4.7.9. V1GetGroupsResponse Field Name Required Nullable Type Description Format groups List of StorageGroup 26.5. CreateGroup POST /v1/groups 26.5.1. Description 26.5.2. Parameters 26.5.2.1. Body Parameter Name Description Required Default Pattern body StorageGroup X 26.5.3. Return Type Object 26.5.4. Content Type application/json 26.5.5. Responses Table 26.5. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. RuntimeError 26.5.6. Samples 26.5.7. Common object reference 26.5.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 26.5.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 26.5.7.2.
RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 26.5.7.3. StorageGroup Group is a GroupProperties : Role mapping. Field Name Required Nullable Type Description Format props StorageGroupProperties roleName String This is the name of the role that will apply to users in this group. 26.5.7.4. StorageGroupProperties GroupProperties defines the properties of a group. Groups apply to users when their properties match. For instance: - If GroupProperties has only an auth_provider_id, then that group applies to all users logged in with that auth provider. - If GroupProperties in addition has a claim key, then it applies to all users with that auth provider and the claim key, etc. Note: Changes to GroupProperties may require changes to v1.DeleteGroupRequest. Field Name Required Nullable Type Description Format id String Unique identifier for group properties and respectively the group. traits StorageTraits authProviderId String key String value String 26.5.7.5. StorageTraits Field Name Required Nullable Type Description Format mutabilityMode TraitsMutabilityMode ALLOW_MUTATE, ALLOW_MUTATE_FORCED, visibility TraitsVisibility VISIBLE, HIDDEN, origin TraitsOrigin IMPERATIVE, DEFAULT, DECLARATIVE, DECLARATIVE_ORPHANED, 26.5.7.6. TraitsMutabilityMode EXPERIMENTAL. NOTE: Please refrain from using MutabilityMode for the time being. It will be replaced in the future (ROX-14276). MutabilityMode specifies whether and how an object can be modified. Default is ALLOW_MUTATE and means there are no modification restrictions; this is equivalent to the absence of a MutabilityMode specification. ALLOW_MUTATE_FORCED forbids all modifying operations except object removal with the force bit on. Be careful when changing the state of this field. For example, modifying an object from ALLOW_MUTATE to ALLOW_MUTATE_FORCED is allowed but will prohibit any further changes to it, including modifying it back to ALLOW_MUTATE. Enum Values ALLOW_MUTATE ALLOW_MUTATE_FORCED 26.5.7.7. TraitsOrigin Origin specifies the origin of an object. Objects can have four different origins: - IMPERATIVE: the object was created via the API. This is assumed by default. - DEFAULT: the object is a default object, such as default roles, access scopes, etc. - DECLARATIVE: the object is created via declarative configuration. - DECLARATIVE_ORPHANED: the object is created via declarative configuration and then unsuccessfully deleted (for example, because it is referenced by another object). Based on the origin, different rules apply to the objects. Objects with the DECLARATIVE origin are not allowed to be modified via API, only via declarative configuration. Additionally, they may not reference objects with the IMPERATIVE origin. Objects with the DEFAULT origin are not allowed to be modified via either API or declarative configuration. They may be referenced by all other objects. Objects with the IMPERATIVE origin are allowed to be modified via API, but not via declarative configuration. They may reference all other objects. Objects with the DECLARATIVE_ORPHANED origin are not allowed to be modified via either API or declarative configuration. A DECLARATIVE_ORPHANED resource can become DECLARATIVE again if it is redefined in declarative configuration. Objects with this origin will be cleaned up from the system immediately after they are no longer referenced by other resources. They may be referenced by all other objects.
Enum Values IMPERATIVE DEFAULT DECLARATIVE DECLARATIVE_ORPHANED 26.5.7.8. TraitsVisibility EXPERIMENTAL. visibility allows you to specify whether the object should be visible for certain APIs. Enum Values VISIBLE HIDDEN 26.6. UpdateGroup PUT /v1/groups 26.6.1. Description 26.6.2. Parameters 26.6.2.1. Body Parameter Name Description Required Default Pattern body StorageGroup X 26.6.2.2. Query Parameters Name Description Required Default Pattern force - null 26.6.3. Return Type Object 26.6.4. Content Type application/json 26.6.5. Responses Table 26.6. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. RuntimeError 26.6.6. Samples 26.6.7. Common object reference 26.6.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 26.6.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 26.6.7.2.
RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 26.6.7.3. StorageGroup Group is a GroupProperties : Role mapping. Field Name Required Nullable Type Description Format props StorageGroupProperties roleName String This is the name of the role that will apply to users in this group. 26.6.7.4. StorageGroupProperties GroupProperties defines the properties of a group. Groups apply to users when their properties match. For instance: - If GroupProperties has only an auth_provider_id, then that group applies to all users logged in with that auth provider. - If GroupProperties in addition has a claim key, then it applies to all users with that auth provider and the claim key, etc. Note: Changes to GroupProperties may require changes to v1.DeleteGroupRequest. Field Name Required Nullable Type Description Format id String Unique identifier for group properties and respectively the group. traits StorageTraits authProviderId String key String value String 26.6.7.5. StorageTraits Field Name Required Nullable Type Description Format mutabilityMode TraitsMutabilityMode ALLOW_MUTATE, ALLOW_MUTATE_FORCED, visibility TraitsVisibility VISIBLE, HIDDEN, origin TraitsOrigin IMPERATIVE, DEFAULT, DECLARATIVE, DECLARATIVE_ORPHANED, 26.6.7.6. TraitsMutabilityMode EXPERIMENTAL. NOTE: Please refrain from using MutabilityMode for the time being. It will be replaced in the future (ROX-14276). MutabilityMode specifies whether and how an object can be modified. Default is ALLOW_MUTATE and means there are no modification restrictions; this is equivalent to the absence of a MutabilityMode specification. ALLOW_MUTATE_FORCED forbids all modifying operations except object removal with the force bit on. Be careful when changing the state of this field. For example, modifying an object from ALLOW_MUTATE to ALLOW_MUTATE_FORCED is allowed but will prohibit any further changes to it, including modifying it back to ALLOW_MUTATE. Enum Values ALLOW_MUTATE ALLOW_MUTATE_FORCED 26.6.7.7. TraitsOrigin Origin specifies the origin of an object. Objects can have four different origins: - IMPERATIVE: the object was created via the API. This is assumed by default. - DEFAULT: the object is a default object, such as default roles, access scopes, etc. - DECLARATIVE: the object is created via declarative configuration. - DECLARATIVE_ORPHANED: the object is created via declarative configuration and then unsuccessfully deleted (for example, because it is referenced by another object). Based on the origin, different rules apply to the objects. Objects with the DECLARATIVE origin are not allowed to be modified via API, only via declarative configuration. Additionally, they may not reference objects with the IMPERATIVE origin. Objects with the DEFAULT origin are not allowed to be modified via either API or declarative configuration. They may be referenced by all other objects. Objects with the IMPERATIVE origin are allowed to be modified via API, but not via declarative configuration. They may reference all other objects. Objects with the DECLARATIVE_ORPHANED origin are not allowed to be modified via either API or declarative configuration. A DECLARATIVE_ORPHANED resource can become DECLARATIVE again if it is redefined in declarative configuration. Objects with this origin will be cleaned up from the system immediately after they are no longer referenced by other resources. They may be referenced by all other objects.
Enum Values IMPERATIVE DEFAULT DECLARATIVE DECLARATIVE_ORPHANED 26.6.7.8. TraitsVisibility EXPERIMENTAL. visibility allows you to specify whether the object should be visible for certain APIs. Enum Values VISIBLE HIDDEN | [
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"GroupBatchUpdateRequest is an in transaction batch update to the groups present. Next Available Tag: 3",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"API for updating Groups and getting users. Next Available Tag: 2",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/api_reference/groupservice |
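The Samples subsections in this chapter are empty in this rendering, so the following invocation is offered purely as an illustration and is not part of the generated reference. It assumes that the Central address is stored in ROX_ENDPOINT and that an API token with the required permissions is stored in ROX_API_TOKEN; adjust both to your environment. The request lists all groups, optionally narrowed by the query parameters described above:

curl -sk -H "Authorization: Bearer $ROX_API_TOKEN" \
    "https://$ROX_ENDPOINT/v1/groups?authProviderId=<auth_provider_id>"

A successful response is a V1GetGroupsResponse JSON object: a groups array of StorageGroup entries, each pairing GroupProperties with a roleName.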
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_operator_8.5_release_notes/making-open-source-more-inclusive_datagrid |
Chapter 9. How to use dedicated worker nodes for Red Hat OpenShift Data Foundation Any Red Hat OpenShift Container Platform subscription requires an OpenShift Data Foundation subscription. However, you can save on the OpenShift Container Platform subscription costs if you are using infrastructure nodes to schedule OpenShift Data Foundation resources. It is important to maintain consistency across environments with or without Machine API support. Because of this, it is highly recommended in all cases to have a special category of nodes that are labeled as either worker or infra, or that have both roles. See Section 9.3, "Manual creation of infrastructure nodes" for more information. 9.1. Anatomy of an Infrastructure node Infrastructure nodes for use with OpenShift Data Foundation have a few attributes. The infra node-role label is required to ensure the node does not consume RHOCP entitlements; it ensures that only OpenShift Data Foundation entitlements are necessary for the nodes running OpenShift Data Foundation. Labeled with node-role.kubernetes.io/infra Adding an OpenShift Data Foundation taint with a NoSchedule effect is also required so that the infra node only schedules OpenShift Data Foundation resources. Tainted with node.ocs.openshift.io/storage="true" The label identifies the RHOCP node as an infra node so that RHOCP subscription cost is not applied. The taint prevents non-OpenShift Data Foundation resources from being scheduled on the tainted nodes. Note Adding a storage taint on nodes might require toleration handling for other daemonset pods, such as the openshift-dns daemonset. For information about how to manage the tolerations, see Knowledgebase article: https://access.redhat.com/solutions/6592171 . An example toleration is also sketched at the end of this chapter. Example of the taint and labels required on an infrastructure node that will be used to run OpenShift Data Foundation services: 9.2. Machine sets for creating Infrastructure nodes If the Machine API is supported in the environment, then labels should be added to the templates for the Machine Sets that will be provisioning the infrastructure nodes. Avoid the anti-pattern of adding labels manually to nodes created by the machine API. Doing so is analogous to adding labels to pods created by a deployment. In both cases, when the pod/node fails, the replacement pod/node will not have the appropriate labels. Note In EC2 environments, you will need three machine sets, each configured to provision infrastructure nodes in a distinct availability zone (such as us-east-2a, us-east-2b, us-east-2c). Currently, OpenShift Data Foundation does not support deploying in more than three availability zones. The following Machine Set template example creates nodes with the appropriate taint and labels required for infrastructure nodes. This will be used to run OpenShift Data Foundation services. Important If you add a taint to the infrastructure nodes, you also need to add tolerations to the taint for other workloads, for example, the fluentd pods. For more information, see the Red Hat Knowledgebase solution Infrastructure Nodes in OpenShift 4 . 9.3. Manual creation of infrastructure nodes Only when the Machine API is not supported in the environment should labels be directly applied to nodes. Manual creation requires that at least 3 RHOCP worker nodes are available to schedule OpenShift Data Foundation services, and that these nodes have sufficient CPU and memory resources.
To avoid the RHOCP subscription cost, the following is required: Adding a NoSchedule OpenShift Data Foundation taint is also required so that the infra node only schedules OpenShift Data Foundation resources and repels any other non-OpenShift Data Foundation workloads. Warning Do not remove the node-role.kubernetes.io/worker="" label. The removal of the node-role.kubernetes.io/worker="" label can cause issues unless changes are made both to the OpenShift scheduler and to MachineConfig resources. If already removed, it should be added again to each infra node. Adding the node-role.kubernetes.io/infra="" label and the OpenShift Data Foundation taint is sufficient to conform to entitlement exemption requirements. 9.4. Taint a node from the user interface This section explains the procedure to taint nodes after the OpenShift Data Foundation deployment. Procedure In the OpenShift Web Console, click Compute Nodes , and then select the node that has to be tainted. On the Details page, click Edit taints . Enter the values in the Key <node.ocs.openshift.io/storage>, Value <true>, and Effect <NoSchedule> fields. Click Save. Verification steps Follow these steps to verify that the node has been tainted successfully: Navigate to Compute Nodes . Select the node to verify its status, and then click the YAML tab. In the spec section, check the values of the following parameters: Additional resources For more information, refer to Creating the OpenShift Data Foundation cluster on VMware vSphere . | [
"spec: taints: - effect: NoSchedule key: node.ocs.openshift.io/storage value: \"true\" metadata: creationTimestamp: null labels: node-role.kubernetes.io/worker: \"\" node-role.kubernetes.io/infra: \"\" cluster.ocs.openshift.io/openshift-storage: \"\"",
"template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: kb-s25vf machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: kb-s25vf-infra-us-west-2a spec: taints: - effect: NoSchedule key: node.ocs.openshift.io/storage value: \"true\" metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" cluster.ocs.openshift.io/openshift-storage: \"\"",
"oc label node <node> node-role.kubernetes.io/infra=\"\" oc label node <node> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc adm taint node <node> node.ocs.openshift.io/storage=\"true\":NoSchedule",
"Taints: Key: node.ocs.openshift.io/storage Value: true Effect: NoSchedule"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/how-to-use-dedicated-worker-nodes-for-openshift-data-foundation_osp |
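As the note in section 9.1 mentions, tainting nodes can require toleration handling for other daemonset pods, such as the openshift-dns daemonset. The following sketch shows the toleration that a pod specification would need in order to run on nodes carrying the node.ocs.openshift.io/storage="true":NoSchedule taint. The taint key, value, and effect are taken from this chapter; where and how the toleration is applied depends on the workload, so treat this as a starting point rather than a complete procedure:

spec:
  template:
    spec:
      tolerations:
      - key: node.ocs.openshift.io/storage
        operator: Equal
        value: "true"
        effect: NoSchedule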
Preface | Preface Thank you for your interest in Red Hat Ansible Automation Platform. Ansible Automation Platform is a commercial offering that helps teams manage complex multi-tier deployments by adding control, knowledge, and delegation to Ansible-powered environments. This guide describes how to use Ansible plug-ins for Red Hat Developer Hub. This document has been updated to include information for the latest release of Ansible Automation Platform. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/using_ansible_plug-ins_for_red_hat_developer_hub/pr01 |
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/deploying_mail_servers/proc_providing-feedback-on-red-hat-documentation_deploying-mail-servers
Chapter 4. Configuring automation controller websocket connections You can configure automation controller to align the websocket configuration with your nginx or load balancer configuration. 4.1. Websocket configuration for automation controller Automation controller nodes are interconnected through websockets to distribute all websocket-emitted messages throughout your system. This configuration setup enables any browser client websocket to subscribe to any job that might be running on any automation controller node. Websocket clients are not routed to specific automation controller nodes. Instead, any automation controller node can handle any websocket request, and each automation controller node must know about all websocket messages destined for all clients. You can configure websockets at /etc/tower/conf.d/websocket_config.py in all of your automation controller nodes, and the changes take effect after the service restarts. Automation controller automatically handles discovery of other automation controller nodes through the Instance record in the database. Important Your automation controller nodes are designed to broadcast websocket traffic across a private, trusted subnet (and not the open Internet). Therefore, if you turn off HTTPS for websocket broadcasting, the websocket traffic, composed mostly of Ansible playbook stdout, is sent unencrypted between automation controller nodes. 4.1.1. Configuring automatic discovery of other automation controller nodes You can configure websocket connections to enable automation controller to automatically handle discovery of other automation controller nodes through the Instance record in the database. Edit the automation controller websocket information for port and protocol, and confirm whether to verify certificates with True or False when establishing the websocket connections: BROADCAST_WEBSOCKET_PROTOCOL = 'http' BROADCAST_WEBSOCKET_PORT = 80 BROADCAST_WEBSOCKET_VERIFY_CERT = False Restart automation controller with the following command: USD automation-controller-service restart | [
"BROADCAST_WEBSOCKET_PROTOCOL = 'http' BROADCAST_WEBSOCKET_PORT = 80 BROADCAST_WEBSOCKET_VERIFY_CERT = False",
"automation-controller-service restart"
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/red_hat_ansible_automation_platform_operations_guide/assembly-configuring-websockets |
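If a proxy or load balancer sits in front of the automation controller nodes, it must pass the websocket upgrade headers through, otherwise browser clients cannot subscribe to live job output. The following nginx location block is a minimal sketch of that alignment; the /websocket/ path and the <controller_host> upstream are assumptions that you must adapt to your own deployment:

location /websocket/ {
    proxy_pass https://<controller_host>;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}

After changing the proxy configuration, reload nginx and restart the automation controller service as described above.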
6.6. Configuring port forwarding using nftables Port forwarding enables administrators to forward packets sent to a specific destination port to a different local or remote port. For example, if your web server does not have a public IP address, you can set a port forwarding rule on your firewall that forwards incoming packets on ports 80 and 443 on the firewall to the web server. With this firewall rule, users on the internet can access the web server using the IP or host name of the firewall. 6.6.1. Forwarding incoming packets to a different local port This section describes an example of how to forward incoming IPv4 packets on port 8022 to port 22 on the local system. Procedure 6.17. Forwarding incoming packets to a different local port Create a table named nat with the ip address family: Add the prerouting chain to the table: Note Pass the -- option to the nft command to prevent the shell from interpreting the negative priority value as an option of the nft command. Add a rule to the prerouting chain that redirects incoming packets on port 8022 to the local port 22 : 6.6.2. Forwarding incoming packets on a specific local port to a different host You can use a destination network address translation ( DNAT ) rule to forward incoming packets on a local port to a remote host. This enables users on the Internet to access a service that runs on a host with a private IP address. The procedure describes how to forward incoming IPv4 packets on the local port 443 to the same port number on the remote system with the 192.0.2.1 IP address. Prerequisite You are logged in as the root user on the system that should forward the packets. Procedure 6.18. Forwarding incoming packets on a specific local port to a different host Create a table named nat with the ip address family: Add the prerouting and postrouting chains to the table: Note Pass the -- option to the nft command to prevent the shell from interpreting the negative priority value as an option of the nft command. Add a rule to the prerouting chain that redirects incoming packets on port 443 to the same port on 192.0.2.1 : Add a rule to the postrouting chain to masquerade outgoing traffic: Enable packet forwarding: | [
"nft add table ip nat",
"nft -- add chain ip nat prerouting { type nat hook prerouting priority -100 \\; }",
"nft add rule ip nat prerouting tcp dport 8022 redirect to : 22",
"nft add table ip nat",
"nft -- add chain ip nat prerouting { type nat hook prerouting priority -100 \\; } nft add chain ip nat postrouting { type nat hook postrouting priority 100 \\; }",
"nft add rule ip nat prerouting tcp dport 443 dnat to 192.0.2.1",
"nft add rule ip nat postrouting ip daddr 192.0.2.1 masquerade",
"echo \"net.ipv4.ip_forward=1\" > /etc/sysctl.d/95-IPv4-forwarding.conf sysctl -p /etc/sysctl.d/95-IPv4-forwarding.conf"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/sec-configuring_port_forwarding_using_nftables |
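To confirm that the chains and rules were created as intended, you can print the nat table with the standard nft list subcommand; the exact output will vary with your ruleset:

nft list table ip nat

For the local redirect scenario, the prerouting chain should contain the redirect rule; for the DNAT scenario, the prerouting chain should contain the dnat rule and the postrouting chain should contain the masquerade rule.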
Chapter 3. Installing a cluster quickly on GCP In OpenShift Container Platform version 4.17, you can install a cluster on Google Cloud Platform (GCP) that uses the default configuration options. 3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured a GCP project to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. 3.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.17, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 3.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 3.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 3.5. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Remove any existing GCP credentials that do not use the service account key for the GCP account that you configured for your cluster and that are stored in the following locations: The GOOGLE_CREDENTIALS , GOOGLE_CLOUD_KEYFILE_JSON , or GCLOUD_KEYFILE_JSON environment variables The ~/.gcp/osServiceAccount.json file The gcloud cli default credentials Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. 2 To view different installation details, specify warn , debug , or error instead of info . When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Provide values at the prompts: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select gcp as the platform to target. If you have not configured the service account key for your GCP account on your host, you must obtain it from GCP and paste the contents of the file or enter the absolute path to the file. Select the project ID to provision the cluster in. The default value is specified by the service account that you configured. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. If you provide a name that is longer than 6 characters, only the first 6 characters will be used in the infrastructure ID that is generated from the cluster name. Paste the pull secret from Red Hat OpenShift Cluster Manager . Optional: You can reduce the number of permissions for the service account that you used to install the cluster. 
If you assigned the Owner role to your service account, you can remove that role and replace it with the Viewer role. If you included the Service Account Key Admin role, you can remove it. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.6. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.17. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.17 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.17 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH .
To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.17 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.17 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 3.7. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 3.8. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.17, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 3.9. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_on_gcp/installing-gcp-default |
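As an optional check after exporting the kubeconfig and logging in, you can confirm that the nodes are ready and that the cluster Operators have finished rolling out. Both are standard oc invocations; the node names and Operator versions in your output will differ:

oc get nodes
oc get clusteroperators

Every node should report a STATUS of Ready, and each cluster Operator should eventually show AVAILABLE True with PROGRESSING and DEGRADED False.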
Chapter 6. Reference materials | Chapter 6. Reference materials To learn more about the advisor service, see the following resources: Assessing RHEL Configuration Issues Using the Red Hat Insights Advisor Service Red Hat Insights Remediations Guide Red Hat Insights for Red Hat Enterprise Linux Documentation Red Hat Insights for Red Hat Enterprise Linux Product Support page | null | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/generating_advisor_service_reports/insights-report-reference-materials |
Chapter 1. Installing Developer Hub on EKS with the Operator The Red Hat Developer Hub Operator installation requires the Operator Lifecycle Manager (OLM) framework. Additional resources For information about the OLM, see Operator Lifecycle Manager (OLM) documentation. 1.1. Installing the Developer Hub Operator with the OLM framework You can install the Developer Hub Operator on EKS using the Operator Lifecycle Manager (OLM) framework . Following that, you can proceed to deploy your Developer Hub instance in EKS. Prerequisites You have set the context to the EKS cluster in your current kubeconfig . For more information, see Creating or updating a kubeconfig file for an Amazon EKS cluster . You have installed kubectl . For more information, see Installing or updating kubectl . You have subscribed to registry.redhat.io . For more information, see Red Hat Container Registry Authentication . You have installed the Operator Lifecycle Manager (OLM). For more information about installation and troubleshooting, see OLM QuickStart or How do I get Operator Lifecycle Manager? Procedure Run the following command in your terminal to create the rhdh-operator namespace where the Operator is installed: kubectl create namespace rhdh-operator Create a pull secret using the following command: kubectl -n rhdh-operator create secret docker-registry rhdh-pull-secret \ --docker-server=registry.redhat.io \ --docker-username=<user_name> \ 1 --docker-password=<password> \ 2 --docker-email=<email> 3 1 Enter your username in the command. 2 Enter your password in the command. 3 Enter your email address in the command. The created pull secret is used to pull the Developer Hub images from the Red Hat Ecosystem. Create a CatalogSource resource that contains the Operators from the Red Hat Ecosystem: cat <<EOF | kubectl -n rhdh-operator apply -f - apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: redhat-catalog spec: sourceType: grpc image: registry.redhat.io/redhat/redhat-operator-index:v4.17 secrets: - "rhdh-pull-secret" displayName: Red Hat Operators EOF Create an OperatorGroup resource as follows: cat <<EOF | kubectl apply -n rhdh-operator -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: rhdh-operator-group EOF Create a Subscription resource using the following code: cat <<EOF | kubectl apply -n rhdh-operator -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: rhdh namespace: rhdh-operator spec: channel: fast installPlanApproval: Automatic name: rhdh source: redhat-catalog sourceNamespace: rhdh-operator startingCSV: rhdh-operator.v1.4.2 EOF Run the following command to verify that the created Operator is running: kubectl -n rhdh-operator get pods -w If the operator pod shows ImagePullBackOff status, the Operator deployment might lack the credentials needed to pull the image; you can add them directly within the Operator deployment's manifest.
Tip You can include the required secret name in the deployment.spec.template.spec.imagePullSecrets list and verify the deployment name using kubectl get deployment -n rhdh-operator command: kubectl -n rhdh-operator patch deployment \ rhdh.fast --patch '{"spec":{"template":{"spec":{"imagePullSecrets":[{"name":"rhdh-pull-secret"}]}}}}' \ --type=merge Update the default configuration of the operator to ensure that Developer Hub resources can start correctly in EKS using the following steps: Edit the backstage-default-config ConfigMap in the rhdh-operator namespace using the following command: kubectl -n rhdh-operator edit configmap backstage-default-config Locate the db-statefulset.yaml string and add the fsGroup to its spec.template.spec.securityContext , as shown in the following example: db-statefulset.yaml: | apiVersion: apps/v1 kind: StatefulSet --- TRUNCATED --- spec: --- TRUNCATED --- restartPolicy: Always securityContext: # You can assign any random value as fsGroup fsGroup: 2000 serviceAccount: default serviceAccountName: default --- TRUNCATED --- Locate the deployment.yaml string and add the fsGroup to its specification, as shown in the following example: deployment.yaml: | apiVersion: apps/v1 kind: Deployment --- TRUNCATED --- spec: securityContext: # You can assign any random value as fsGroup fsGroup: 3000 automountServiceAccountToken: false --- TRUNCATED --- Locate the service.yaml string and change the type to NodePort as follows: service.yaml: | apiVersion: v1 kind: Service spec: # NodePort is required for the ALB to route to the Service type: NodePort --- TRUNCATED --- Save and exit. Wait for a few minutes until the changes are automatically applied to the operator pods. 1.2. Deploying the Developer Hub instance on EKS with the Operator Prerequisites A cluster administrator has installed the Red Hat Developer Hub Operator. You have an EKS cluster with AWS Application Load Balancer (ALB) add-on installed. For more information, see Application load balancing on Amazon Elastic Kubernetes Service and Installing the AWS Load Balancer Controller add-on . You have configured a domain name for your Developer Hub instance. The domain name can be a hosted zone entry on Route 53 or managed outside of AWS. For more information, see Configuring Amazon Route 53 as your DNS service documentation. You have an entry in the AWS Certificate Manager (ACM) for your preferred domain name. Make sure to keep a record of your Certificate ARN. You have subscribed to registry.redhat.io . For more information, see Red Hat Container Registry Authentication . You have set the context to the EKS cluster in your current kubeconfig . For more information, see Creating or updating a kubeconfig file for an Amazon EKS cluster . You have installed kubectl . For more information, see Installing or updating kubectl . 
Procedure Create a ConfigMap named app-config-rhdh containing the Developer Hub configuration using the following template: apiVersion: v1 kind: ConfigMap metadata: name: app-config-rhdh data: "app-config-rhdh.yaml": | app: title: Red Hat Developer Hub baseUrl: https://<rhdh_dns_name> backend: auth: externalAccess: - type: legacy options: subject: legacy-default-config secret: "${BACKEND_SECRET}" baseUrl: https://<rhdh_dns_name> cors: origin: https://<rhdh_dns_name> Create a Secret named my-rhdh-secrets and add a key named BACKEND_SECRET with a Base64-encoded string as its value: apiVersion: v1 kind: Secret metadata: name: my-rhdh-secrets stringData: # TODO: See https://backstage.io/docs/auth/service-to-service-auth/#setup BACKEND_SECRET: "xxx" Important Ensure that you use a unique value of BACKEND_SECRET for each Developer Hub instance. You can use the following command to generate a key: node -p 'require("crypto").randomBytes(24).toString("base64")' To enable pulling the PostgreSQL image from the Red Hat Ecosystem Catalog, add the image pull secret in the default service account within the namespace where the Developer Hub instance is being deployed: kubectl patch serviceaccount default \ -p '{"imagePullSecrets": [{"name": "rhdh-pull-secret"}]}' \ -n <your_namespace> Create your Backstage custom resource using the following template: apiVersion: rhdh.redhat.com/v1alpha3 kind: Backstage metadata: # TODO: this is the name of your Developer Hub instance name: my-rhdh spec: application: imagePullSecrets: - "rhdh-pull-secret" route: enabled: false appConfig: configMaps: - name: "app-config-rhdh" extraEnvs: secrets: - name: my-rhdh-secrets Create an Ingress resource using the following template, making sure to customize the names as needed: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: # TODO: this is the name of your Developer Hub Ingress name: my-rhdh annotations: alb.ingress.kubernetes.io/scheme: internet-facing alb.ingress.kubernetes.io/target-type: ip # TODO: Using an ALB HTTPS Listener requires a certificate for your own domain. Fill in the ARN of your certificate, e.g.: alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-xxx:xxxx:certificate/xxxxxx alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]' alb.ingress.kubernetes.io/ssl-redirect: '443' # TODO: Set your application domain name. external-dns.alpha.kubernetes.io/hostname: <rhdh_dns_name> spec: ingressClassName: alb rules: # TODO: Set your application domain name. - host: <rhdh_dns_name> http: paths: - path: / pathType: Prefix backend: service: # TODO: my-rhdh is the name of your `Backstage` custom resource. # Adjust if you changed it! name: backstage-my-rhdh port: name: http-backend In the template, replace <rhdh_dns_name> with your Developer Hub domain name and update the value of alb.ingress.kubernetes.io/certificate-arn with your certificate ARN. Verification Wait until the DNS name is responsive, indicating that your Developer Hub instance is ready for use. A short verification sketch follows the command listing for this chapter. | [
"create namespace rhdh-operator",
"-n rhdh-operator create secret docker-registry rhdh-pull-secret --docker-server=registry.redhat.io --docker-username=<user_name> \\ 1 --docker-password=<password> \\ 2 --docker-email=<email> 3",
"cat <<EOF | kubectl -n rhdh-operator apply -f - apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: redhat-catalog spec: sourceType: grpc image: registry.redhat.io/redhat/redhat-operator-index:v4.17 secrets: - \"rhdh-pull-secret\" displayName: Red Hat Operators EOF",
"cat <<EOF | kubectl apply -n rhdh-operator -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: rhdh-operator-group EOF",
"cat <<EOF | kubectl apply -n rhdh-operator -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: rhdh namespace: rhdh-operator spec: channel: fast installPlanApproval: Automatic name: rhdh source: redhat-catalog sourceNamespace: rhdh-operator startingCSV: rhdh-operator.v1.4.2 EOF",
"-n rhdh-operator get pods -w",
"-n rhdh-operator patch deployment rhdh.fast --patch '{\"spec\":{\"template\":{\"spec\":{\"imagePullSecrets\":[{\"name\":\"rhdh-pull-secret\"}]}}}}' --type=merge",
"-n rhdh-operator edit configmap backstage-default-config",
"db-statefulset.yaml: | apiVersion: apps/v1 kind: StatefulSet --- TRUNCATED --- spec: --- TRUNCATED --- restartPolicy: Always securityContext: # You can assign any random value as fsGroup fsGroup: 2000 serviceAccount: default serviceAccountName: default --- TRUNCATED ---",
"deployment.yaml: | apiVersion: apps/v1 kind: Deployment --- TRUNCATED --- spec: securityContext: # You can assign any random value as fsGroup fsGroup: 3000 automountServiceAccountToken: false --- TRUNCATED ---",
"service.yaml: | apiVersion: v1 kind: Service spec: # NodePort is required for the ALB to route to the Service type: NodePort --- TRUNCATED ---",
"apiVersion: v1 kind: ConfigMap metadata: name: app-config-rhdh data: \"app-config-rhdh.yaml\": | app: title: Red Hat Developer Hub baseUrl: https://<rhdh_dns_name> backend: auth: externalAccess: - type: legacy options: subject: legacy-default-config secret: \"USD{BACKEND_SECRET}\" baseUrl: https://<rhdh_dns_name> cors: origin: https://<rhdh_dns_name>",
"apiVersion: v1 kind: Secret metadata: name: my-rhdh-secrets stringData: # TODO: See https://backstage.io/docs/auth/service-to-service-auth/#setup BACKEND_SECRET: \"xxx\"",
"node-p'require(\"crypto\").randomBytes(24).toString(\"base64\")'",
"patch serviceaccount default -p '{\"imagePullSecrets\": [{\"name\": \"rhdh-pull-secret\"}]}' -n <your_namespace>",
"apiVersion: rhdh.redhat.com/v1alpha3 kind: Backstage metadata: # TODO: this the name of your Developer Hub instance name: my-rhdh spec: application: imagePullSecrets: - \"rhdh-pull-secret\" route: enabled: false appConfig: configMaps: - name: \"app-config-rhdh\" extraEnvs: secrets: - name: my-rhdh-secrets",
"apiVersion: networking.k8s.io/v1 kind: Ingress metadata: # TODO: this the name of your Developer Hub Ingress name: my-rhdh annotations: alb.ingress.kubernetes.io/scheme: internet-facing alb.ingress.kubernetes.io/target-type: ip # TODO: Using an ALB HTTPS Listener requires a certificate for your own domain. Fill in the ARN of your certificate, e.g.: alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-xxx:xxxx:certificate/xxxxxx alb.ingress.kubernetes.io/listen-ports: '[{\"HTTP\": 80}, {\"HTTPS\":443}]' alb.ingress.kubernetes.io/ssl-redirect: '443' # TODO: Set your application domain name. external-dns.alpha.kubernetes.io/hostname: <rhdh_dns_name> spec: ingressClassName: alb rules: # TODO: Set your application domain name. - host: <rhdh_dns_name> http: paths: - path: / pathType: Prefix backend: service: # TODO: my-rhdh is the name of your `Backstage` custom resource. # Adjust if you changed it! name: backstage-my-rhdh port: name: http-backend"
] | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/installing_red_hat_developer_hub_on_amazon_elastic_kubernetes_service/proc-rhdh-deploy-eks-operator_title-install-rhdh-eks |
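A minimal verification sketch for the deployment above; my-rhdh and <your_namespace> follow the names used in the templates, and the deployment name backstage-my-rhdh is an assumption based on the naming convention shown in the Ingress backend:
kubectl -n <your_namespace> get backstage my-rhdh
kubectl -n <your_namespace> rollout status deployment/backstage-my-rhdh
kubectl -n <your_namespace> get ingress my-rhdh
If the rollout completes and the Ingress reports an ALB address, the instance should become reachable at https://<rhdh_dns_name> once DNS propagates.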
Chapter 17. Impersonating the system:admin user | Chapter 17. Impersonating the system:admin user 17.1. API impersonation You can configure a request to the OpenShift Container Platform API to act as though it originated from another user. For more information, see User impersonation in the Kubernetes documentation. 17.2. Impersonating the system:admin user You can grant a user permission to impersonate system:admin , which grants them cluster administrator permissions. Procedure To grant a user permission to impersonate system:admin , run the following command: $ oc create clusterrolebinding <any_valid_name> --clusterrole=sudoer --user=<username> Tip You can alternatively apply the following YAML to grant permission to impersonate system:admin : apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: <any_valid_name> roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: sudoer subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: <username> 17.3. Impersonating the system:admin group When a system:admin user is granted cluster administration permissions through a group, you must include the --as=<user> --as-group=<group1> --as-group=<group2> parameters in the command to impersonate the associated groups. Procedure To grant a user permission to impersonate system:admin by impersonating the associated cluster administration groups, run the following command: $ oc create clusterrolebinding <any_valid_name> --clusterrole=sudoer --as=<user> \ --as-group=<group1> --as-group=<group2> 17.4. Adding unauthenticated groups to cluster roles As a cluster administrator, you can add unauthenticated users to the following cluster roles in OpenShift Container Platform by creating a cluster role binding. Unauthenticated users do not have access to non-public cluster roles. Do this only in specific use cases where it is necessary. You can add unauthenticated users to the following cluster roles: system:scope-impersonation system:webhook system:oauth-token-deleter self-access-reviewer Important Always verify compliance with your organization's security standards when modifying unauthenticated access. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Create a YAML file named add-<cluster_role>-unauth.yaml and add the following content: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" name: <cluster_role>access-unauthenticated roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <cluster_role> subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:unauthenticated Apply the configuration by running the following command: $ oc apply -f add-<cluster_role>-unauth.yaml A short verification sketch follows the command listing for this chapter. | [
"oc create clusterrolebinding <any_valid_name> --clusterrole=sudoer --user=<username>",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: <any_valid_name> roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: sudoer subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: <username>",
"oc create clusterrolebinding <any_valid_name> --clusterrole=sudoer --as=<user> --as-group=<group1> --as-group=<group2>",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: \"true\" name: <cluster_role>access-unauthenticated roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <cluster_role> subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:unauthenticated",
"oc apply -f add-<cluster_role>.yaml"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/authentication_and_authorization/impersonating-system-admin |
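As a quick check that the sudoer binding works, the granted user can run commands with impersonation; this is a sketch, with the bound user logged in:
oc get nodes --as=system:admin
oc auth can-i '*' '*' --as=system:admin
If the binding is in place, both commands succeed for the user; without it, the API server rejects the impersonation request.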
Chapter 5. Setting up HawtIO on OpenShift 4 | Chapter 5. Setting up HawtIO on OpenShift 4 On OpenShift 4.x, setting up HawtIO involves installing and deploying it. The preferred mechanism for this installation is using the HawtIO Operator available from the OperatorHub (see Section 5.1, "Installing and deploying HawtIO on OpenShift 4 by using the OperatorHub"). Optionally, you can customize role-based access control (RBAC) for HawtIO as described in Section 5.2, "Role-based access control for HawtIO on OpenShift 4". 5.1. Installing and deploying HawtIO on OpenShift 4 by using the OperatorHub The HawtIO Operator is provided in the OpenShift OperatorHub for the installation of HawtIO. To deploy HawtIO, you install the Operator and then create a HawtIO Custom Resource (CR), which the Operator reconciles into a running deployment. To install and deploy HawtIO: Log in to the OpenShift console in the web browser as a user with cluster admin access. Click Operators and then click OperatorHub . In the search field window, type HawtIO to filter the list of operators. Click HawtIO Operator . In the HawtIO Operator install window, click Install . The Create Operator Subscription form opens: For Update Channel , select stable-v1 . For Installation Mode , accept the default (a specific namespace on the cluster). Note This mode determines what namespaces the operator will monitor for HawtIO CRs. This is different from the namespaces HawtIO will monitor when it is fully deployed. The latter can be configured via the HawtIO CR. For Installed Namespace , select the namespace in which you want to install HawtIO Operator. For the Update Approval , select Automatic or Manual to configure how OpenShift handles updates to HawtIO Operator. If the Automatic updates option is selected and a new version of HawtIO Operator is available, the OpenShift Operator Lifecycle Manager (OLM) automatically upgrades the running instance of HawtIO without human intervention. If the Manual updates option is selected and a newer version of an Operator is available, the OLM only creates an update request. A Cluster Administrator must then manually approve the update request to have HawtIO Operator updated to the new version. Click Install and OpenShift installs HawtIO Operator into the current namespace. To verify the installation, click Operators and then click Installed Operators . HawtIO should be visible in the list of operators. To deploy HawtIO by using the OpenShift web console: In the list of Installed Operators , under the Name column, click HawtIO Operator . On the Operator Details page under Provided APIs , click Create HawtIO . Accept the configuration default values or optionally edit them. For Replicas , to increase HawtIO performance (for example, in a high availability environment), increase the number of pods allocated to HawtIO. For RBAC (role-based access control), only specify a value in the Config Map field if you want to customize the default RBAC behaviour and if the ConfigMap file already exists in the namespace in which you installed HawtIO Operator. For Nginx , see Performance tuning for HawtIO Operator installation . For Type , specify either: Cluster : for HawtIO to monitor all namespaces on the OpenShift cluster for any HawtIO-enabled applications; Namespace : for HawtIO to monitor only the HawtIO-enabled applications that have been deployed in the same namespace. Click Create . The HawtIO Operator Details page opens and shows the status of the deployment.
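To confirm the deployment from a terminal, a minimal sketch (assuming HawtIO was deployed into the current project; oc get hawtio assumes the CRD's singular resource name):
oc get hawtio
oc get pods
A HawtIO console pod in Running state indicates that the Operator has reconciled the CR.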
To open HawtIO : For a namespace deployment: In the OpenShift web console, open the project in which the HawtIO operator is installed, and then select Overview . In the Project Overview page, scroll down to the Launcher section and click the HawtIO link. For a cluster deployment, in the OpenShift web console's title bar, click the grid icon. In the popup menu, under Red Hat Applications , click the HawtIO URL link. Log in to HawtIO. An Authorize Access page opens in the browser listing the required permissions. Click Allow selected permissions . HawtIO opens in the browser and shows any HawtIO-enabled application pods that are authorized for access. Click Connect to view the monitored application. A new browser window opens showing the application in HawtIO. 5.2. Role-based access control for HawtIO on OpenShift 4 HawtIO offers role-based access control (RBAC) that infers access according to the user authorization provided by OpenShift. In HawtIO, RBAC determines a user's ability to perform MBean operations on a pod. For information on OpenShift authorization, see the Using RBAC to define and apply permissions section of the OpenShift documentation. Role-based access is enabled by default when you use the Operator to install HawtIO on OpenShift. HawtIO RBAC leverages the user's verb access on a pod resource in OpenShift to determine the user's access to a pod's MBean operations in HawtIO. By default, there are two user roles for HawtIO: admin : if a user can update a pod in OpenShift, then the user is conferred the admin role for HawtIO. The user can perform write MBean operations in HawtIO for the pod. viewer : if a user can get a pod in OpenShift, then the user is conferred the viewer role for HawtIO. The user can perform read-only MBean operations in HawtIO for the pod. 5.2.1. Determining access roles for HawtIO on OpenShift 4 HawtIO role-based access control is inferred from a user's OpenShift permissions for a pod. To determine the HawtIO access role granted to a particular user, obtain the OpenShift permissions granted to the user for a pod. Prerequisites : The user's name The pod's name Procedure : To determine whether a user has the HawtIO admin role for the pod, run the following command to see whether the user can update the pod on OpenShift: oc auth can-i update pods/<pod> --as <user> If the response is yes, the user has the admin role for the pod. The user can perform write operations in HawtIO for the pod. To determine whether a user has the HawtIO viewer role for the pod, run the following command to see whether the user can get a pod on OpenShift: oc auth can-i get pods/<pod> --as <user> If the response is yes, the user has the viewer role for the pod. The user can perform read-only operations in HawtIO for the pod. Depending on the context, HawtIO prevents the user with the viewer role from performing a write MBean operation, by disabling an option or by displaying an "operation not allowed for this user" message when the user attempts a write MBean operation. If the response is no, the user is not bound to any HawtIO roles and the user cannot view the pod in HawtIO. 5.2.2. Customizing role-based access to HawtIO on OpenShift 4 If you use the OperatorHub to install HawtIO, role-based access control (RBAC) is enabled by default. To customize HawtIO RBAC behaviour, before deployment of HawtIO, a ConfigMap resource (that defines the custom RBAC behaviour) must be provided. The name of this ConfigMap should be entered in the rbac configuration section of the HawtIO Custom Resource (CR).
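As an illustration, the rbac section of the CR might look like the following sketch; custom-hawtio-rbac matches the ConfigMap name created in the procedure below, and the metadata name and type are placeholder values taken from the examples later in this chapter:
cat <<EOF | oc apply -f -
apiVersion: hawt.io/v1
kind: Hawtio
metadata:
  name: hawtio-console
spec:
  type: Namespace
  rbac:
    configMap: custom-hawtio-rbac
EOF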
The custom ConfigMap resource must be added in the same namespace in which the HawtIO Operator has been installed. Prerequisite : The HawtIO Operator has been installed from the OperatorHub. Procedure : To customize HawtIO RBAC roles: Create an RBAC ConfigMap: Make sure the current OpenShift project is the project in which you want to install HawtIO. For example, to install HawtIO in the hawtio-test project, run this command: oc project hawtio-test Create a HawtIO RBAC ConfigMap file from the template, and run this command: oc process -f https://raw.githubusercontent.com/hawtio/hawtio-online/2.x/docker/ACL.yaml -p APP_NAME=custom-hawtio | oc create -f - Edit the new custom ConfigMap, using the command: oc edit ConfigMap custom-hawtio-rbac When you save your edits, the ConfigMap resource is updated. Create a new HawtIO CR, as described above, and edit the rbac section by adding the name of the new ConfigMap under the property configMap . Click Create . The Operator then deploys a new version of HawtIO that uses the custom ConfigMap. 5.3. Migrating from Fuse Console The version of the HawtIO Custom Resource Definition (CRD) has been upgraded in HawtIO from v1alpha1 to v1 and contains non-backwards compatible changes. Therefore, since the CRD is cluster-wide, this will have a detrimental impact on existing installations of Fuse Console if HawtIO is subsequently installed on the same cluster. Users are advised at this time to uninstall all versions of Fuse Console before proceeding with the installation of HawtIO. Users wishing to migrate their existing HawtIO Custom Resources to HawtIO can store the resource configuration in a file and re-apply it once the HawtIO Operator has been installed. On re-applying, the CR will be upgraded to version v1 automatically. An important change in the new specification is that the version property can no longer be specified in the CR as the version is provided as an internal constant of the operator itself. 5.4. Upgrading HawtIO on OpenShift 4 Red Hat OpenShift 4.x handles updates to operators, including HawtIO operators. For more information, see the Operators OpenShift documentation . In turn, the operator updates will trigger application upgrades, depending on how the application is configured. 5.5. Tuning the performance of HawtIO on OpenShift 4 By default, HawtIO uses the following Nginx settings: clientBodyBufferSize: 256k proxyBuffers: 16 128k subrequestOutputBufferSize: 10m Note For descriptions of these settings, see the Nginx documentation . To tune the performance of HawtIO, you can set any of the clientBodyBufferSize , proxyBuffers , and subrequestOutputBufferSize environment variables. For example, if you are using HawtIO to monitor numerous pods and routes (for instance, 100 routes in total), you can resolve a loading timeout issue by setting HawtIO's subrequestOutputBufferSize environment variable between 60m and 100m . 5.5.1. Performance tuning for HawtIO Operator installation On OpenShift 4.x, you can set the Nginx performance tuning environment variables before or after you deploy HawtIO. If you do so afterwards, OpenShift redeploys HawtIO. Prerequisite : You must have cluster admin access to the OpenShift cluster. Procedure : You can set the environment variables before or after you deploy HawtIO. To set the environment variables before deploying HawtIO : In the OpenShift web console, in a project that has HawtIO Operator installed, select Operators > Installed Operators > HawtIO Operator .
Click the HawtIO tab, and then click Create HawtIO . On the Create HawtIO page, in the Form view , scroll down to the Config > Nginx section. Expand the Nginx section and then set the environment variables. For example: clientBodyBufferSize: 256k proxyBuffers: 16 128k subrequestOutputBufferSize: 100m Click Create to deploy HawtIO. After the deployment completes, open the Deployments > HawtIO-console page, and then click Environment to verify that the environment variables are in the list. To set the environment variables after you deploy HawtIO : In the OpenShift web console, open the project in which HawtIO is deployed. Select Operators > Installed Operators > HawtIO Operator . Click the HawtIO tab, and then click HawtIO . Select Actions > Edit HawtIO . In the Editor window, scroll down to the spec section. Under the spec section, add a new nginx section and specify one or more environment variables, for example: apiVersion: hawt.io/v1 kind: Hawtio metadata: name: hawtio-console spec: type: Namespace nginx: clientBodyBufferSize: 256k proxyBuffers: 16 128k subrequestOutputBufferSize: 100m Click Save . OpenShift redeploys HawtIO. After the redeployment completes, open the Workloads > Deployments > HawtIO-console page, and then click Environment to see the environment variables in the list. 5.5.2. Performance tuning for viewing applications on HawtIO The enhanced performance tuning capability of HawtIO allows you to view applications with a large number of MBeans. To use this capability, perform the following steps. Prerequisite : You must have cluster admin access to the OpenShift cluster. Procedure : Increase the memory limit for the applications. To increase the memory limits after deploying HawtIO : In the OpenShift web console, open the project in which HawtIO is deployed. Select Operators > Installed Operators > HawtIO Operator . Click the HawtIO tab, and then click HawtIO . Select Actions > Edit HawtIO . In the Editor window, scroll down to the spec.resources section. Update the values for both requests and limits to the preferred amounts. Click Save . HawtIO should redeploy using the new resource specification. A short verification sketch follows the command listing for this chapter. | [
"auth can-i update pods/<pod> --as <user>",
"auth can-i get pods/<pod> --as <user>",
"project hawtio-test",
"process -f https://raw.githubusercontent.com/hawtio/hawtio-online/2.x/docker/ACL.yaml -p APP_NAME=custom-hawtio | oc create -f -",
"edit ConfigMap custom-hawtio-rbac",
"apiVersion: hawt.io/v1 kind: Hawtio metadata: name: hawtio-console spec: type: Namespace nginx: clientBodyBufferSize: 256k proxyBuffers: 16 128k subrequestOutputBufferSize: 100m"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/hawtio_diagnostic_console_guide/setting-up-hawtio-on-openshift-4 |
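As a CLI alternative to checking the console, the Nginx environment variables can be read from the console deployment; this is a sketch, and hawtio-console is a deployment name assumed from the examples above — substitute the name shown by oc get deployments in your project:
oc set env deployment/hawtio-console --list
oc get deployment hawtio-console -o jsonpath='{.spec.template.spec.containers[0].env}'
Both commands print the container environment, where clientBodyBufferSize, proxyBuffers, and subrequestOutputBufferSize should appear with the values you set.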
Chapter 7. Installing a cluster in an LPAR on IBM Z and IBM LinuxONE in a restricted network | Chapter 7. Installing a cluster in an LPAR on IBM Z and IBM LinuxONE in a restricted network In OpenShift Container Platform version 4.15, you can install a cluster in a logical partition (LPAR) on IBM Z(R) or IBM(R) LinuxONE infrastructure that you provision in a restricted network. Note While this document refers to only IBM Z(R), all information in it also applies to IBM(R) LinuxONE. Important Additional considerations exist for non-bare metal platforms. Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you install an OpenShift Container Platform cluster. 7.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You created a mirror registry for installation in a restricted network and obtained the imageContentSources data for your version of OpenShift Container Platform. Before you begin the installation process, you must move or remove any existing installation files. This ensures that the required installation files are created and updated during the installation process. Important Ensure that installation steps are done from a machine with access to the installation media. You provisioned persistent storage using OpenShift Data Foundation or other supported storage protocols for your cluster. To deploy a private image registry, you must set up persistent storage with ReadWriteMany access. If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 7.2. About installations in restricted networks In OpenShift Container Platform 4.15, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. Important Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network. 7.2.1. 
Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 7.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 7.4. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 7.4.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 7.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 7.4.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 7.2. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) Bootstrap RHCOS 4 16 GB 100 GB N/A Control plane RHCOS 4 16 GB 100 GB N/A Compute RHCOS 2 8 GB 100 GB N/A One physical core (IFL) provides two logical cores (threads) when SMT-2 is enabled. The hypervisor can provide two or more vCPUs. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). 
If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 7.4.3. Minimum IBM Z system environment You can install OpenShift Container Platform version 4.15 on the following IBM(R) hardware: IBM(R) z16 (all models), IBM(R) z15 (all models), IBM(R) z14 (all models) IBM(R) LinuxONE 4 (all models), IBM(R) LinuxONE III (all models), IBM(R) LinuxONE Emperor II, IBM(R) LinuxONE Rockhopper II Important When running OpenShift Container Platform on IBM Z(R) without a hypervisor use the Dynamic Partition Manager (DPM) to manage your machine. Hardware requirements The equivalent of six Integrated Facilities for Linux (IFL), which are SMT2 enabled, for each cluster. At least one network connection to both connect to the LoadBalancer service and to serve data for traffic outside the cluster. Note You can use dedicated or shared IFLs to assign sufficient compute resources. Resource sharing is one of the key strengths of IBM Z(R). However, you must adjust capacity correctly on each hypervisor layer and ensure sufficient resources for every OpenShift Container Platform cluster. Important Since the overall performance of the cluster can be impacted, the LPARs that are used to set up the OpenShift Container Platform clusters must provide sufficient compute capacity. In this context, LPAR weight management, entitlements, and CPU shares on the hypervisor level play an important role. Operating system requirements Five logical partitions (LPARs) Three LPARs for OpenShift Container Platform control plane machines Two LPARs for OpenShift Container Platform compute machines One machine for the temporary OpenShift Container Platform bootstrap machine IBM Z network connectivity requirements To install on IBM Z(R) in an LPAR, you need: A direct-attached OSA or RoCE network adapter For a preferred setup, use OSA link aggregation. Disk storage FICON attached disk storage (DASDs). These can be dedicated DASDs that must be formatted as CDL, which is the default. To reach the minimum required DASD size for Red Hat Enterprise Linux CoreOS (RHCOS) installations, you need extended address volumes (EAV). If available, use HyperPAV to ensure optimal performance. FCP attached disk storage Storage / Main Memory 16 GB for OpenShift Container Platform control plane machines 8 GB for OpenShift Container Platform compute machines 16 GB for the temporary OpenShift Container Platform bootstrap machine Additional resources Processors Resource/Systems Manager Planning Guide in IBM(R) Documentation for PR/SM mode considerations. IBM Dynamic Partition Manager (DPM) Guide in IBM(R) Documentation for DPM mode considerations. Topics in LPAR performance for LPAR weight management and entitlements. Recommended host practices for IBM Z(R) & IBM(R) LinuxONE environments 7.4.4. Preferred IBM Z system environment Hardware requirements Three LPARS that each have the equivalent of six IFLs, which are SMT2 enabled, for each cluster. Two network connections to both connect to the LoadBalancer service and to serve data for traffic outside the cluster. HiperSockets that are attached to a node directly as a device. To directly connect HiperSockets to a node, you must set up a gateway to the external network via a RHEL 8 guest to bridge to the HiperSockets network. Operating system requirements Three LPARs for OpenShift Container Platform control plane machines. 
At least six LPARs for OpenShift Container Platform compute machines. One machine or LPAR for the temporary OpenShift Container Platform bootstrap machine. IBM Z network connectivity requirements To install on IBM Z(R) in an LPAR, you need: A direct-attached OSA or RoCE network adapter For a preferred setup, use OSA link aggregation. Disk storage FICON attached disk storage (DASDs). These can be dedicated DASDs that must be formatted as CDL, which is the default. To reach the minimum required DASD size for Red Hat Enterprise Linux CoreOS (RHCOS) installations, you need extended address volumes (EAV). If available, use HyperPAV to ensure optimal performance. FCP attached disk storage Storage / Main Memory 16 GB for OpenShift Container Platform control plane machines 8 GB for OpenShift Container Platform compute machines 16 GB for the temporary OpenShift Container Platform bootstrap machine 7.4.5. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 7.4.6. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 7.4.6.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. 
By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 7.4.6.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Table 7.3. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 7.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 7.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . Additional resources Configuring chrony time service 7.4.7. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. 
In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 7.6. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 7.4.7.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 7.1. Sample DNS zone database
$TTL 1W
@ IN SOA ns1.example.com. root (
  2019070700 ; serial
  3H ; refresh (3 hours)
  30M ; retry (30 minutes)
  2W ; expiry (2 weeks)
  1W ) ; minimum (1 week)
 IN NS ns1.example.com.
 IN MX 10 smtp.example.com.
;
;
ns1.example.com. IN A 192.168.1.5
smtp.example.com. IN A 192.168.1.5
;
helper.example.com. IN A 192.168.1.5
helper.ocp4.example.com. IN A 192.168.1.5
;
api.ocp4.example.com. IN A 192.168.1.5 1
api-int.ocp4.example.com. IN A 192.168.1.5 2
;
*.apps.ocp4.example.com. IN A 192.168.1.5 3
;
bootstrap.ocp4.example.com. IN A 192.168.1.96 4
;
control-plane0.ocp4.example.com. IN A 192.168.1.97 5
control-plane1.ocp4.example.com. IN A 192.168.1.98 6
control-plane2.ocp4.example.com. IN A 192.168.1.99 7
;
compute0.ocp4.example.com. IN A 192.168.1.11 8
compute1.ocp4.example.com. IN A 192.168.1.7 9
;
;EOF
1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 7.2. Sample DNS zone database for reverse records
$TTL 1W
@ IN SOA ns1.example.com. root (
  2019070700 ; serial
  3H ; refresh (3 hours)
  30M ; retry (30 minutes)
  2W ; expiry (2 weeks)
  1W ) ; minimum (1 week)
 IN NS ns1.example.com.
;
5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1
5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2
;
96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3
;
97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4
98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5
99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6
;
11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7
7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8
;
;EOF
1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 7.4.8. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately.
The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 7.7. API load balancer
Port 6443. Back-end machines (pool members): Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. Internal: X. External: X. Description: Kubernetes API server
Port 22623. Back-end machines (pool members): Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. Internal: X. Description: Machine config server
Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 7.8. Application Ingress load balancer
Port 443. Back-end machines (pool members): The machines that run the Ingress Controller pods, compute, or worker, by default. Internal: X. External: X. Description: HTTPS traffic
Port 80. Back-end machines (pool members): The machines that run the Ingress Controller pods, compute, or worker, by default. Internal: X. External: X. Description: HTTP traffic
Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 7.4.8.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer.
The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 7.3. Sample API and application Ingress load balancer configuration
global
  log 127.0.0.1 local2
  pidfile /var/run/haproxy.pid
  maxconn 4000
  daemon
defaults
  mode http
  log global
  option dontlognull
  option http-server-close
  option redispatch
  retries 3
  timeout http-request 10s
  timeout queue 1m
  timeout connect 10s
  timeout client 1m
  timeout server 1m
  timeout http-keep-alive 10s
  timeout check 10s
  maxconn 3000
listen api-server-6443 1
  bind *:6443
  mode tcp
  option httpchk GET /readyz HTTP/1.0
  option log-health-checks
  balance roundrobin
  server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2
  server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3
  server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3
  server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3
listen machine-config-server-22623 3
  bind *:22623
  mode tcp
  server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4
  server master0 master0.ocp4.example.com:22623 check inter 1s
  server master1 master1.ocp4.example.com:22623 check inter 1s
  server master2 master2.ocp4.example.com:22623 check inter 1s
listen ingress-router-443 5
  bind *:443
  mode tcp
  balance source
  server compute0 compute0.ocp4.example.com:443 check inter 1s
  server compute1 compute1.ocp4.example.com:443 check inter 1s
listen ingress-router-80 6
  bind *:80
  mode tcp
  balance source
  server compute0 compute0.ocp4.example.com:80 check inter 1s
  server compute1 compute1.ocp4.example.com:80 check inter 1s
1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 7.5.
Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, preparing a web server for the Ignition files, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure Set up static IP addresses. Set up an HTTP or HTTPS server to provide Ignition files to the cluster nodes. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See the Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Set up the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 7.6. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure.
Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. 
Check that the result points to the DNS record name of the bootstrap node:
USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96
Example output
96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.
Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 7.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1
1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key:
USD cat <path>/<file_name>.pub
For example, run the following to view the ~/.ssh/id_ed25519.pub public key:
USD cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task:
USD eval "USD(ssh-agent -s)"
Example output
Agent pid 31874
Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent :
USD ssh-add <path>/<file_name> 1
1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 .
Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 7.8.
Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in:
USD mkdir <installation_directory>
Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. Additional resources Installation configuration parameters for IBM Z(R) 7.8.1. Sample install-config.yaml file for IBM Z You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters.
apiVersion: v1
baseDomain: example.com 1
compute: 2
- hyperthreading: Enabled 3
  name: worker
  replicas: 0 4
  architecture: s390x
controlPlane: 5
  hyperthreading: Enabled 6
  name: master
  replicas: 3 7
  architecture: s390x
metadata:
  name: test 8
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14 9
    hostPrefix: 23 10
  networkType: OVNKubernetes 11
  serviceNetwork: 12
  - 172.30.0.0/16
platform:
  none: {} 13
fips: false 14
pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 15
sshKey: 'ssh-ed25519 AAAA...' 16
additionalTrustBundle: | 17
  -----BEGIN CERTIFICATE-----
  ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
  -----END CERTIFICATE-----
imageContentSources: 18
- mirrors:
  - <local_repository>/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - <local_repository>/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default.
If SMT is not available on your OpenShift Container Platform nodes, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether on your OpenShift Container Platform nodes or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note The Class E CIDR range is reserved for future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for IBM Z(R) infrastructure. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode .
When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 15 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 17 Add the additionalTrustBundle parameter and value. The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority or the self-signed certificate that you generated for the mirror registry. 18 Provide the imageContentSources section according to the output of the command that you used to mirror the repository. Important When using the oc adm release mirror command, use the output from the imageContentSources section. When using the oc mirror command, use the repositoryDigestMirrors section of the ImageContentSourcePolicy file that results from running the command. ImageContentSourcePolicy is deprecated. For more information, see Configuring image registry repository mirroring . 7.8.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5
1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying.
Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 7.8.3. Configuring a three-node cluster Optionally, you can deploy zero compute machines in a bare metal cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. Prerequisites You have an existing install-config.yaml file. Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. 
When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7.9. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin. OVNKubernetes is the only supported plugin during installation. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 7.9.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 7.9. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 7.10. defaultNetwork object Field Type Description type string OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. OpenShift SDN is no longer available as an installation choice for new clusters. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. 
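The tables above describe the CNO fields individually; the following is a minimal sketch that shows how they fit together in a single manifest. It assumes the advanced network configuration workflow, in which a manifest file (conventionally named cluster-network-03-config.yml ) is placed in the <installation_directory>/manifests directory after you create the Kubernetes manifests and before you create the Ignition config files; the file name, its placement, and the mtu and genevePort values shown are conventions and illustrations rather than requirements stated in this section:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      mtu: 1400
      genevePort: 6081
The mtu and genevePort fields are described in the following table.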
Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 7.11. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify a configuration object for customizing the IPsec configuration. ipv4 object Specifies a configuration object for IPv4 settings. ipv6 object Specifies a configuration object for IPv6 settings. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 7.12. ovnKubernetesConfig.ipv4 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is 100.88.0.0/16 . internalJoinSubnet string If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . The default value is 100.64.0.0/16 . Table 7.13. ovnKubernetesConfig.ipv6 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the fd98::/48 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. This field cannot be changed after installation. 
The default value is fd98::/48 . internalJoinSubnet string If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/64 . Table 7.14. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 7.15. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. Specify Global to allow forwarding of all IP traffic. For new installations, the default is Restricted . For updates to OpenShift Container Platform 4.14 or later, the default is Global . ipv4 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. ipv6 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. Table 7.16. gatewayConfig.ipv4 object Field Type Description internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29 . Table 7.17. gatewayConfig.ipv6 object Field Type Description internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125 . Table 7.18. 
ipsecConfig object Field Type Description mode string Specifies the behavior of the IPsec implementation. Must be one of the following values: Disabled : IPsec is not enabled on cluster nodes. External : IPsec is enabled for network traffic with external hosts. Full : IPsec is enabled for pod traffic and network traffic with external hosts. Example OVN-Kubernetes configuration with IPsec enabled
defaultNetwork:
  type: OVNKubernetes
  ovnKubernetesConfig:
    mtu: 1400
    genevePort: 6081
    ipsecConfig:
      mode: Full
Important Using OVNKubernetes can lead to a stack exhaustion problem on IBM Power(R). kubeProxyConfig object configuration (OpenShiftSDN container network interface only) The values for the kubeProxyConfig object are defined in the following table: Table 7.19. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is:
kubeProxyConfig:
  proxyArguments:
    iptables-min-sync-period:
    - 0s
7.10. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Note The installation program that generates the manifest and Ignition files is architecture specific and can be obtained from the client image mirror . The Linux version of the installation program runs on s390x only. This installer program is also available as a Mac OS version. Prerequisites You obtained the OpenShift Container Platform installation program. For a restricted network installation, these files are on your mirror host. You created the install-config.yaml installation configuration file.
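Tip Because the installation program consumes the install-config.yaml file during the following procedure, you might keep a copy outside the installation directory before you continue, in line with the earlier guidance to back up the file so that you can use it to install multiple clusters. For example (the backup path is an arbitrary choice):
USD cp <installation_directory>/install-config.yaml ~/install-config.yaml.backup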
Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster:
USD ./openshift-install create manifests --dir <installation_directory> 1
1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program:
USD ./openshift-install create ignition-configs --dir <installation_directory> 1
1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory. 7.11. Configuring NBDE with static IP in an IBM Z or IBM LinuxONE environment Enabling NBDE disk encryption in an IBM Z(R) or IBM(R) LinuxONE environment requires additional steps, which are described in detail in this section. Prerequisites You have set up the External Tang Server. See Network-bound disk encryption for instructions. You have installed the butane utility. You have reviewed the instructions for how to create machine configs with Butane. Procedure Create Butane configuration files for the control plane and compute nodes. The following example of a Butane configuration for a control plane node creates a file named master-storage.bu for disk encryption:
variant: openshift
version: 4.15.0
metadata:
  name: master-storage
  labels:
    machineconfiguration.openshift.io/role: master
storage:
  luks:
  - clevis:
      tang:
      - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs
        url: http://clevis.example.com:7500
    options: 1
    - --cipher
    - aes-cbc-essiv:sha256
    device: /dev/disk/by-partlabel/root 2
    label: luks-root
    name: root
    wipe_volume: true
  filesystems:
  - device: /dev/mapper/root
    format: xfs
    label: root
    wipe_filesystem: true
openshift:
  fips: true 3
1 The cipher option is only required if FIPS mode is enabled. Omit the entry if FIPS is disabled. 2 For installations on DASD-type disks, replace with device: /dev/disk/by-label/root . 3 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
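Note This step shows the Butane configuration itself but not how it becomes part of the installation. As a minimal sketch of the standard Butane workflow — the output file name 99-master-storage.yaml and the use of the <installation_directory>/openshift directory are illustrative conventions, not requirements stated here — you can transpile the configuration into a MachineConfig manifest and place it with your other manifests:
USD butane master-storage.bu -o ./99-master-storage.yaml
USD cp ./99-master-storage.yaml <installation_directory>/openshift/
Because the Ignition config files wrap the manifests, you might need to regenerate the manifests and Ignition config files if you add such a manifest after they were already created.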
Create a customized initramfs file to boot the machine, by running the following command:
USD coreos-installer pxe customize \
    /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img \
    --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append \
    ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none \
    --dest-karg-append nameserver=<nameserver_ip> \
    --dest-karg-append rd.neednet=1 -o \
    /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img
Note Before first boot, you must customize the initramfs for each node in the cluster, and add PXE kernel parameters. Create a parameter file that includes ignition.platform.id=metal and ignition.firstboot . Example kernel parameter file for the control plane machine:
rd.neednet=1 \
console=ttysclp0 \
coreos.inst.install_dev=/dev/dasda \ 1
ignition.firstboot ignition.platform.id=metal \
coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ 2
coreos.inst.ignition_url=http://<http_server>/master.ign \ 3
ip=10.19.17.2::10.19.17.1:255.255.255.0::enbdd0:none nameserver=10.19.17.1 \
zfcp.allow_lun_scan=0 \ 4
rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 \
rd.zfcp=0.0.5677,0x60060668007f0056,0x034F000000000000 \ 5
1 For installations on DASD-type disks, add coreos.inst.install_dev=/dev/dasda . Omit this value for FCP-type disks. 2 Specify the location of the rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. 3 Specify the location of the Ignition config file. Use master.ign or worker.ign . Only HTTP and HTTPS protocols are supported. 4 For installations on FCP-type disks, add zfcp.allow_lun_scan=0 . Omit this value for DASD-type disks. 5 For installations on DASD-type disks, replace with rd.dasd=0.0.3490 to specify the DASD device. Note Write all options in the parameter file as a single line and make sure you have no newline characters. Additional resources Creating machine configs with Butane 7.12. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on IBM Z(R) infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) in an LPAR. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS guest machines have rebooted. Complete the following steps to create the machines. Prerequisites An HTTP or HTTPS server running on your provisioning machine that is accessible to the machines you create. Procedure Log in to Linux on your provisioning machine. Obtain the Red Hat Enterprise Linux CoreOS (RHCOS) kernel, initramfs, and rootfs files from the RHCOS image mirror . Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel, initramfs, and rootfs artifacts described in the following procedure. The file names contain the OpenShift Container Platform version number.
They resemble the following examples:
kernel: rhcos-<version>-live-kernel-<architecture>
initramfs: rhcos-<version>-live-initramfs.<architecture>.img
rootfs: rhcos-<version>-live-rootfs.<architecture>.img
Note The rootfs image is the same for FCP and DASD. Create parameter files. The following parameters are specific to a particular virtual machine: For ip= , specify the following seven entries: The IP address for the machine. An empty string. The gateway. The netmask. The machine host and domain name in the form hostname.domainname . Omit this value to let RHCOS decide. The network interface name. Omit this value to let RHCOS decide. If you use static IP addresses, specify none . For coreos.inst.ignition_url= , specify the Ignition file for the machine role. Use bootstrap.ign , master.ign , or worker.ign . Only HTTP and HTTPS protocols are supported. For coreos.live.rootfs_url= , specify the matching rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. For installations on DASD-type disks, complete the following tasks: For coreos.inst.install_dev= , specify /dev/dasda . Use rd.dasd= to specify the DASD where RHCOS is to be installed. Leave all other parameters unchanged. Example parameter file, bootstrap-0.parm , for the bootstrap machine:
rd.neednet=1 \
console=ttysclp0 \
coreos.inst.install_dev=/dev/dasda \
coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img \
coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/bootstrap.ign \
ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 \
rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \
zfcp.allow_lun_scan=0 \
rd.dasd=0.0.3490
Write all options in the parameter file as a single line and make sure you have no newline characters. For installations on FCP-type disks, complete the following tasks: Use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is to be installed. For multipathing, repeat this step for each additional path. Note When you install with multiple paths, you must enable multipathing directly after the installation, not at a later point in time, as this can cause problems. Set the install device as: coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> . Note If additional LUNs are configured with NPIV, FCP requires zfcp.allow_lun_scan=0 . If you must enable zfcp.allow_lun_scan=1 because you use a CSI driver, for example, you must configure your NPIV so that each node cannot access the boot partition of another node. Leave all other parameters unchanged. Important Additional postinstallation steps are required to fully enable multipathing. For more information, see "Enabling multipathing with kernel arguments on RHCOS" in Postinstallation machine configuration tasks .
The following is an example parameter file worker-1.parm for a worker node with multipathing:
rd.neednet=1 \
console=ttysclp0 \
coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> \
coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img \
coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/worker.ign \
ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 \
rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \
zfcp.allow_lun_scan=0 \
rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 \
rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 \
rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 \
rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000
Write all options in the parameter file as a single line and make sure you have no newline characters. Transfer the initramfs, kernel, parameter files, and RHCOS images to the LPAR, for example with FTP. For details about how to transfer the files with FTP and boot, see Installing in an LPAR . Boot the machine. Repeat this procedure for the other machines in the cluster. 7.12.1. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 7.12.1.1. Networking and bonding options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking and bonding on your RHCOS nodes for ISO installations. The examples describe how to use the ip= , nameserver= , and bond= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= , nameserver= , and then bond= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page . The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically.
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41
Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration.
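Putting these pieces together, a complete static IP argument set for a single node, using the example values from this subsection, might look like the following. Recall from the Important admonition above that rd.neednet=1 must accompany manually added networking arguments:
rd.neednet=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41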
Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname, refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically.
ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41
Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries.
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none
ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none
Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. Run the following command to configure the default gateway:
ip=::10.10.10.254::::
Enter the following command to configure the route for the additional network:
rd.route=20.20.20.0/24:20.20.20.254:enp2s0
Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used:
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none
ip=::::core0.example.com:enp2s0:none
Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example:
ip=enp1s0:dhcp
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none
Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command:
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none
vlan=enp2s0.100:enp2s0
To configure a VLAN on a network interface and to use DHCP, run the following command:
ip=enp2s0.100:dhcp
vlan=enp2s0.100:enp2s0
Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example:
nameserver=1.1.1.1
nameserver=8.8.8.8
Bonding multiple network interfaces to a single interface Optional: You can bond multiple network interfaces to a single interface by using the bond= option. Refer to the following examples: The syntax for configuring a bonded interface is: bond=<name>[:<network_interfaces>][:options] <name> is the bonding device name ( bond0 ), <network_interfaces> represents a comma-separated list of physical (ethernet) interfaces ( em1,em2 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp .
For example:
bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp
To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example:
bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none
Always set the fail_over_mac=1 option in active-backup mode, to avoid problems when shared OSA/RoCE cards are used. Configuring VLANs on bonded interfaces Optional: You can configure VLANs on bonded interfaces by using the vlan= parameter and to use DHCP, for example:
ip=bond0.100:dhcp
bond=bond0:em1,em2:mode=active-backup
vlan=bond0.100:bond0
Use the following example to configure the bonded interface with a VLAN and to use a static IP address:
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none
bond=bond0:em1,em2:mode=active-backup
vlan=bond0.100:bond0
Using network teaming Optional: You can use network teaming as an alternative to bonding by using the team= parameter: The syntax for configuring a team interface is: team=name[:network_interfaces] name is the team device name ( team0 ) and network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1, em2 ). Note Teaming is planned to be deprecated when RHCOS switches to an upcoming version of RHEL. For more information, see this Red Hat Knowledgebase Article . Use the following example to configure a network team:
team=team0:em1,em2
ip=team0:dhcp
7.13. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS, and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Procedure Monitor the bootstrap process:
USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1
    --log-level=info 2
1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output
INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443...
INFO API v1.28.5 up
INFO Waiting up to 30m0s for bootstrapping to complete...
INFO It is now safe to remove the bootstrap resources
The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 7.14. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file.
The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials:
USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration:
USD oc whoami
Example output
system:admin
7.15. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines:
USD oc get nodes
Example output
NAME       STATUS   ROLES    AGE   VERSION
master-0   Ready    master   63m   v1.28.5
master-1   Ready    master   63m   v1.28.5
master-2   Ready    master   64m   v1.28.5
The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster:
USD oc get csr
Example output
NAME        AGE   REQUESTOR                                                                   CONDITION
csr-8b2br   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
csr-8vnps   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
...
In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node.
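As an illustration of such a method, the following is a minimal, unofficial polling sketch. It approves every pending CSR and performs none of the identity verification described in the preceding note, so treat it only as a starting point for use during a controlled installation window:
# Unofficial sketch: approve any pending CSR every 30 seconds.
# This does NOT verify the requestor or the node identity.
while true; do
  oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
    | xargs --no-run-if-empty oc adm certificate approve
  sleep 30
done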
To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 7.16. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. 
Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m Configure the Operators that are not available. 7.16.1. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 7.16.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 7.16.2.1. Configuring registry storage for IBM Z As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster on IBM Z(R). You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. 
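Before editing the registry configuration, it can be useful to confirm what storage the cluster actually offers, since the registry claim binds against the default storage class unless you name one explicitly. A minimal check, assuming the oc client and cluster-admin access:

# List available storage classes; the default is marked "(default)".
oc get storageclass

# Inspect any claim that already exists in the registry namespace.
oc get pvc -n openshift-image-registry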
Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. The storage must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.15 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. Run: USD oc edit configs.imageregistry/cluster Then, change the line managementState: Removed to managementState: Managed 7.16.2.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 7.17. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration.
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. 
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the previous command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the previous command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Postinstallation machine configuration tasks documentation for more information. Register your cluster on the Cluster registration page. Additional resources How to generate SOSREPORT within OpenShift Container Platform version 4 nodes without SSH . 7.18. Next steps Customize your cluster . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . If necessary, see Registering your disconnected cluster .
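One additional check that is sometimes useful at this point, though not part of the documented procedure, is to confirm that the Cluster Version Operator reports the installation as complete:

# The AVAILABLE column turns True once the installation has finished.
oc get clusterversion

If AVAILABLE shows True, the Cluster Version Operator considers the installation complete. | [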
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"variant: openshift version: 4.15.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root 2 label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 3",
"coreos-installer pxe customize /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none --dest-karg-append nameserver=<nameserver_ip> --dest-karg-append rd.neednet=1 -o /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img",
"rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/dasda \\ 1 ignition.firstboot ignition.platform.id=metal coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 2 coreos.inst.ignition_url=http://<http_server>/master.ign \\ 3 ip=10.19.17.2::10.19.17.1:255.255.255.0::enbdd0:none nameserver=10.19.17.1 zfcp.allow_lun_scan=0 \\ 4 rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 \\ 5",
"rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/dasda coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/bootstrap.ign ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 zfcp.allow_lun_scan=0 rd.dasd=0.0.3490",
"rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/worker.ign ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 zfcp.allow_lun_scan=0 rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp",
"bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0",
"team=team0:em1,em2 ip=team0:dhcp",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.28.5 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.15 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_ibm_z_and_ibm_linuxone/installing-restricted-networks-ibm-z-lpar |
25.2. Performing a VNC Installation | 25.2. Performing a VNC Installation The Anaconda installation program offers two modes for VNC installation. The modes are Direct Mode and Connect Mode . Direct Mode requires the VNC viewer to initiate the connection to the system being installed. Connect Mode requires the system being installed to initiate the connection to the VNC viewer. Once the connection is established, the two modes do not differ. The mode you select depends on the configuration in your environment. Direct Mode In this mode, Anaconda is configured to start the installation and wait for a VNC viewer before proceeding. The IP address and port are displayed on the system being installed. Using this information, you can connect to the installation system from a different computer. For this reason you must have visual and interactive access to the system being installed. Connect Mode In this mode, the VNC viewer is started on the remote system in listening mode . The VNC viewer waits for an incoming connection on a specified port. Then, Anaconda is started and the host name and port number are provided using a boot option or a Kickstart command. When the installation begins, the installation program establishes a connection with the listening VNC viewer using the specified host name and port number. For this reason, your remote system must be able to accept incoming network connections. Considerations for choosing a VNC installation mode Visual and Interactive access to the system If visual and interactive access to the system being installed is not available, then you must use Connect Mode. Network Connection Rules and Firewalls If the system being installed is not allowed inbound connections by a firewall, then you must use Connect Mode or disable the firewall. Disabling a firewall can have security implications. If the remote system running the VNC viewer is not allowed incoming connections by a firewall, then you must use Direct Mode, or disable the firewall. Disabling a firewall can have security implications. See the Red Hat Enterprise Linux 7 Security Guide for information about configuring the firewall. Note You must specify custom boot options to start a VNC installation. The exact way to do this differs depending on the system architecture. For architecture-specific instructions about editing boot options, see: Section 7.2, "The Boot Menu" for 64-bit AMD, Intel, and ARM systems Section 12.1, "The Boot Menu" for IBM Power Systems servers Chapter 21, Parameter and Configuration Files on IBM Z for IBM Z 25.2.1. Installing in VNC Direct Mode The Direct Mode expects the VNC viewer to initiate a connection to the system being installed. Anaconda asks you to initiate this connection. Procedure 25.1. Starting VNC in Direct Mode Run the VNC viewer of your choice on the workstation you are using to connect to the system being installed. For example, if you use TigerVNC : Figure 25.1. TigerVNC Connection Details Boot the installation system and wait for the boot menu to appear. In the menu, press the Tab key to edit boot options. Append the inst.vnc option to the end of the command line. Optionally, if you want to restrict VNC access to the installation system, add the inst.vncpassword= PASSWORD boot option as well. Replace PASSWORD with the password you want to use for the installation. The VNC password must be between 6 and 8 characters long. Important Use a temporary password for the inst.vncpassword= option. It should not be a real or root password you use on any system. 
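As an illustration, a complete edited boot line might look like the following. The inst.stage2 location is a placeholder for whatever your installation media actually uses, and the password is a throwaway value of the kind the Important note above calls for:

> vmlinuz initrd=initrd.img inst.stage2=hd:LABEL=<media_label> inst.vnc inst.vncpassword=qwerty12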
Figure 25.2. Adding VNC Boot Options on AMD, Intel, and ARM Systems Press Enter to start the installation. The system initializes the installation program and starts the necessary services. When the system is ready, you get a message on the screen similar to the following: Note the IP address and port number (in the above example, 192.168.100.131:1 ). On the system running the VNC Viewer, enter the IP address and port number obtained in the previous step into the Connection Details dialog in the same format as it was displayed on the screen by Anaconda. Then, click Connect . The VNC viewer connects to the installation system. If you set up a VNC password, enter it when prompted and click OK . For further details about using a VNC client, see the corresponding section in the Red Hat Enterprise Linux 7 System Administrator's Guide . After you finish the procedure, a new window opens with the VNC connection established, displaying the installation menu. In this window, you can use the Anaconda graphical interface the same way you would use it when installing directly on the system. You can proceed with: Chapter 8, Installing Using Anaconda for 64-bit AMD, Intel, and ARM systems Chapter 13, Installing Using Anaconda for IBM Power Systems servers Chapter 18, Installing Using Anaconda for IBM Z 25.2.2. Installing in VNC Connect Mode In Connect Mode, the system being installed initiates a connection to the VNC viewer running on a remote system. Before you start, make sure the remote system is configured to accept an incoming connection on the port you want to use for VNC. The exact way to make sure the connection is not blocked depends on your network and on your workstation's configuration. Information about configuring the firewall is available in the Red Hat Enterprise Linux 7 Security Guide . Procedure 25.2. Starting VNC in Connect Mode Start the VNC viewer on the client system in listening mode. For example, on Red Hat Enterprise Linux using TigerVNC , execute the following command: Replace PORT with the port number you want to use for the connection. The terminal displays a message similar to the following example: Example 25.1. TigerVNC Viewer Listening The VNC viewer is now ready and waiting for an incoming connection from the installation system. Boot the system being installed and wait for the boot menu to appear. In the menu, press the Tab key to edit boot options. Append the following options to the command line: Replace HOST with the IP address of the system running the listening VNC viewer, and PORT with the port number that the VNC viewer is listening on. Press Enter to start the installation. The system initializes the installation program and starts the necessary services. Once the initialization is finished, Anaconda attempts to connect to the IP address and port you provided in the previous step. When the connection is successfully established, a new window opens on the system running the VNC viewer, displaying the installation menu. In this window, you can use the Anaconda graphical interface the same way you would use it when installing directly on the system. After you finish the procedure, you can proceed with: Chapter 8, Installing Using Anaconda for 64-bit AMD, Intel, and ARM systems Chapter 13, Installing Using Anaconda for IBM Power Systems servers Chapter 18, Installing Using Anaconda for IBM Z
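For the listening workstation in Connect Mode, opening the VNC port is often the only firewall change needed. A minimal sketch for a RHEL 7 workstation using firewalld, assuming display :1 (TCP port 5901):

# Open the listening port for the current session, then make it permanent.
firewall-cmd --add-port=5901/tcp
firewall-cmd --add-port=5901/tcp --permanent

Adjust the port number to match the one you passed to vncviewer -listen. | [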
"13:14:47 Please manually connect your VNC viewer to 192.168.100.131:1 to begin the install.",
"vncviewer -listen PORT",
"TigerVNC Viewer 64-bit v1.3.0 (20130924) Built on Sep 24 2013 at 16:32:56 Copyright (C) 1999-2011 TigerVNC Team and many others (see README.txt) See http://www.tigervnc.org for information on TigerVNC. Thu Feb 20 15:23:54 2014 main: Listening on port 5901",
"inst.vnc inst.vncconnect= HOST : PORT"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-vnc-installations-anaconda-modes |
23.12. CPU Models and Topology | 23.12. CPU Models and Topology This section covers the requirements for CPU models. Note that every hypervisor has its own policy for which CPU features the guest will see by default. The set of CPU features presented to the guest by KVM depends on the CPU model chosen in the guest virtual machine configuration. qemu32 and qemu64 are basic CPU models, but there are other models (with additional features) available. Each model and its topology are specified using the following elements from the domain XML: <cpu match='exact'> <model fallback='allow'>core2duo</model> <vendor>Intel</vendor> <topology sockets='1' cores='2' threads='1'/> <feature policy='disable' name='lahf_lm'/> </cpu> Figure 23.14. CPU model and topology example 1 <cpu mode='host-model'> <model fallback='forbid'/> <topology sockets='1' cores='2' threads='1'/> </cpu> Figure 23.15. CPU model and topology example 2 <cpu mode='host-passthrough'/> Figure 23.16. CPU model and topology example 3 In cases where no restrictions are to be put on the CPU model or its features, a simpler <cpu> element such as the following may be used: <cpu> <topology sockets='1' cores='2' threads='1'/> </cpu> Figure 23.17. CPU model and topology example 4 <cpu mode='custom'> <model>POWER8</model> </cpu> Figure 23.18. PPC64/PSeries CPU model example <cpu mode='host-passthrough'/> Figure 23.19. aarch64/virt CPU model example The components of this section of the domain XML are as follows: Table 23.8. CPU model and topology elements Element Description <cpu> This is the main container for describing guest virtual machine CPU requirements. <match> Specifies how strictly the virtual CPU provided to the guest virtual machine must match these requirements. The match attribute can be omitted if topology is the only element within <cpu> . Possible values for the match attribute are: minimum - the specified CPU model and features describe the minimum requested CPU. exact - the virtual CPU provided to the guest virtual machine will exactly match the specification. strict - the guest virtual machine will not be created unless the host physical machine CPU exactly matches the specification. Note that the match attribute can be omitted and will default to exact . <mode> This optional attribute may be used to make it easier to configure a guest virtual machine CPU to be as close to the host physical machine CPU as possible. Possible values for the mode attribute are: custom - Describes how the CPU is presented to the guest virtual machine. This is the default setting when the mode attribute is not specified. This mode makes it so that a persistent guest virtual machine will see the same hardware no matter what host physical machine the guest virtual machine is booted on. host-model - A shortcut to copying the host physical machine CPU definition from the capabilities XML into the domain XML. As the CPU definition is copied just before starting a domain, the same XML can be used on different host physical machines while still providing the best guest virtual machine CPU each host physical machine supports. The match attribute and any feature elements cannot be used in this mode. For more information, see the libvirt upstream website . host-passthrough With this mode, the CPU visible to the guest virtual machine is exactly the same as the host physical machine CPU, including elements that cause errors within libvirt.
The obvious downside of this mode is that the guest virtual machine environment cannot be reproduced on different hardware and therefore, this mode should be used with great caution. The model and feature elements are not allowed in this mode. <model> Specifies the CPU model requested by the guest virtual machine. The list of available CPU models and their definition can be found in the cpu_map.xml file installed in libvirt's data directory. If a hypervisor is unable to use the exact CPU model, libvirt automatically falls back to a closest model supported by the hypervisor while maintaining the list of CPU features. An optional fallback attribute can be used to forbid this behavior, in which case an attempt to start a domain requesting an unsupported CPU model will fail. Supported values for the fallback attribute are: allow (the default), and forbid . The optional vendor_id attribute can be used to set the vendor ID seen by the guest virtual machine. It must be exactly 12 characters long. If not set, the vendor ID of the host physical machine is used. Typical possible values are AuthenticAMD and GenuineIntel . <vendor> Specifies the CPU vendor requested by the guest virtual machine. If this element is missing, the guest virtual machine runs on a CPU matching given features regardless of its vendor. The list of supported vendors can be found in cpu_map.xml . <topology> Specifies the requested topology of the virtual CPU provided to the guest virtual machine. Three non-zero values must be given for sockets, cores, and threads: the total number of CPU sockets, number of cores per socket, and number of threads per core, respectively. <feature> Can contain zero or more elements used to fine-tune features provided by the selected CPU model. The list of known feature names can be found in the cpu_map.xml file. The meaning of each feature element depends on its policy attribute, which has to be set to one of the following values: force - forces the feature to be supported by the virtual CPU, regardless of whether it is actually supported by the host physical machine CPU. require - dictates that guest virtual machine creation will fail unless the feature is supported by the host physical machine CPU. This is the default setting. optional - this feature is supported by the virtual CPU but only if it is supported by the host physical machine CPU. disable - this feature is not supported by the virtual CPU. forbid - guest virtual machine creation will fail if the feature is supported by the host physical machine CPU. 23.12.1. Changing the Feature Set for a Specified CPU Although CPU models have an inherent feature set, the individual feature components can either be allowed or forbidden on a feature by feature basis, allowing for a more individualized configuration for the CPU. Procedure 23.1. Enabling and disabling CPU features To begin, shut down the guest virtual machine. Open the guest virtual machine's configuration file by running the virsh edit [domain] command. Change the parameters within the <feature> or <model> to include the attribute value 'allow' to force the feature to be allowed, or 'forbid' to deny support for the feature. <!-- original feature set --> <cpu mode='host-model'> <model fallback='allow'/> <topology sockets='1' cores='2' threads='1'/> </cpu> <!--changed feature set--> <cpu mode='host-model'> <model fallback='forbid'/> <topology sockets='1' cores='2' threads='1'/> </cpu> Figure 23.20. Example for enabling or disabling CPU features <!--original feature set--> <cpu match='exact'> <model fallback='allow'>core2duo</model> <vendor>Intel</vendor> <topology sockets='1' cores='2' threads='1'/> <feature policy='disable' name='lahf_lm'/> </cpu> <!--changed feature set--> <cpu match='exact'> <model fallback='allow'>core2duo</model> <vendor>Intel</vendor> <topology sockets='1' cores='2' threads='1'/> <feature policy='enable' name='lahf_lm'/> </cpu> Figure 23.21. Example 2 for enabling or disabling CPU features When you have completed the changes, save the configuration file and start the guest virtual machine. 23.12.2. Guest Virtual Machine NUMA Topology Guest virtual machine NUMA topology can be specified using the <numa> element in the domain XML: <cpu> <numa> <cell cpus='0-3' memory='512000'/> <cell cpus='4-7' memory='512000'/> </numa> </cpu> ... Figure 23.22. Guest virtual machine NUMA topology Each cell element specifies a NUMA cell or a NUMA node. cpus specifies the CPU or range of CPUs that are part of the node. memory specifies the node memory in kibibytes (blocks of 1024 bytes). Each cell or node is assigned a cellid or nodeid in increasing order starting from 0.
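To explore these settings from the command line, a sketch like the following can help. The guest name rhel7-guest is hypothetical, the architecture argument should match your system, and the commands assume libvirt's virsh client is installed:

# List the CPU models libvirt knows for this architecture.
virsh cpu-models x86_64

# Show the host CPU definition that host-model mode copies from.
virsh capabilities

# Edit the guest definition, start it, and confirm the vCPU count.
virsh edit rhel7-guest
virsh start rhel7-guest
virsh vcpucount rhel7-guest

Inside the guest, a tool such as lscpu should then report the sockets, cores, and threads you configured. | [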
"<cpu match='exact'> <model fallback='allow'>core2duo</model> <vendor>Intel</vendor> <topology sockets='1' cores='2' threads='1'/> <feature policy='disable' name='lahf_lm'/> </cpu>",
"<cpu mode='host-model'> <model fallback='forbid'/> <topology sockets='1' cores='2' threads='1'/> </cpu>",
"<cpu mode='host-passthrough'/>",
"<cpu> <topology sockets='1' cores='2' threads='1'/> </cpu>",
"<cpu mode='custom'> <model>POWER8</model> </cpu>",
"<cpu mode='host-passthrough'/>",
"<!-- original feature set --> <cpu mode='host-model'> <model fallback='allow'/> <topology sockets='1' cores='2' threads='1'/> </cpu> <!--changed feature set--> <cpu mode='host-model'> <model fallback='forbid'/> <topology sockets='1' cores='2' threads='1'/> </cpu>",
"<!--original feature set--> <cpu match='exact'> <model fallback='allow'>core2duo</model> <vendor>Intel</vendor> <topology sockets='1' cores='2' threads='1'/> <feature policy='disable' name='lahf_lm'/> </cpu> <!--changed feature set--> <cpu match='exact'> <model fallback='allow'>core2duo</model> <vendor>Intel</vendor> <topology sockets='1' cores='2' threads='1'/> <feature policy='enable' name='lahf_lm'/> </cpu>",
"<cpu> <numa> <cell cpus='0-3' memory='512000'/> <cell cpus='4-7' memory='512000'/> </numa> </cpu>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-Manipulating_the_domain_xml-CPU_model_and_topology |
Chapter 7. Scaling storage of Red Hat Virtualization OpenShift Data Foundation cluster | Chapter 7. Scaling storage of Red Hat Virtualization OpenShift Data Foundation cluster To scale the storage capacity of your configured Red Hat OpenShift Data Foundation worker nodes on a Red Hat Virtualization cluster, you can increase the capacity by adding three disks at a time. Three disks are needed since OpenShift Data Foundation uses a replica count of 3 to maintain high availability. So the amount of storage consumed is three times the usable space. Note Usable space may vary when encryption is enabled or replica 2 pools are being used. 7.1. Scaling up storage capacity on a cluster To increase the storage capacity in a dynamically created storage cluster on a user-provisioned infrastructure, you can add storage capacity and performance to your configured Red Hat OpenShift Data Foundation worker nodes. Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. The disk should be of the same size and type as used during initial deployment. Procedure Log in to the OpenShift Web Console. Click Operators Installed Operators . Click OpenShift Data Foundation Operator. Click the Storage Systems tab. Click the Action Menu (...) on the far right of the storage system name to extend the options menu. Select Add Capacity from the options menu. Select the Storage Class . Choose the storage class which you wish to use to provision new storage devices. Click Add . To check the status, navigate to Storage Data Foundation and verify that Storage System in the Status card has a green tick. Verification steps Verify the Raw Capacity card. In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Block and File tab, check the Raw Capacity card. Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new OSDs and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. To view the state of the PVCs: Click Storage Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the previous step, do the following: Create a debug pod and open a chroot environment for the selected hosts. <node-name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset names. Important Cluster reduction is supported only with the Red Hat Support Team's assistance. 7.2. Scaling out storage capacity on a Red Hat Virtualization cluster OpenShift Data Foundation is highly scalable.
It can be scaled out by adding new nodes with the required storage and enough hardware resources in terms of CPU and RAM. In practice, there is no limit on the number of nodes that can be added, but from a support perspective, 2000 nodes is the limit for OpenShift Data Foundation. Scaling out storage capacity can be broken down into two steps: adding a new node, and scaling up the storage capacity. Note OpenShift Data Foundation does not support heterogeneous OSD/Disk sizes. 7.2.1. Adding a node to an installer-provisioned infrastructure Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Navigate to Compute Machine Sets . On the machine set where you want to add nodes, select Edit Machine Count . Add the number of nodes, and click Save . Click Compute Nodes and confirm that the new node is in the Ready state. Apply the OpenShift Data Foundation label to the new node. For the new node, click Action menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. In the case of a bare metal installer-provisioned infrastructure deployment, expand the cluster first using the instructions that can be found here . Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 7.2.2. Adding a node using a local storage device You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or when there are not enough resources to add new OSDs on the existing nodes. Add nodes in multiples of 3, each of them in different failure domains. Though it is recommended to add nodes in multiples of 3, you have the flexibility to add one node at a time in a flexible scaling deployment. See the Knowledgebase article Verify if flexible scaling is enabled . Note OpenShift Data Foundation does not support heterogeneous disk sizes and types. The new nodes to be added should have disks of the same type and size as those used during the initial OpenShift Data Foundation deployment. Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Depending on the type of infrastructure, perform the following steps: Get a new machine with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform worker node using the new machine. Check for certificate signing requests (CSRs) that are in Pending state. Approve all the required CSRs for the new node. <Certificate_Name> Is the name of the CSR. Click Compute Nodes , confirm that the new node is in the Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From User interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From Command line interface Apply the OpenShift Data Foundation label to the new node. <new_node_name> Is the name of the new node. Click Operators Installed Operators from the OpenShift Web Console.
From the Project drop-down list, make sure to select the project where the Local Storage Operator is installed. Click Local Storage . Click the Local Volume Discovery tab. Beside the LocalVolumeDiscovery , click Action menu (...) Edit Local Volume Discovery . In the YAML, add the hostname of the new node in the values field under the node selector. Click Save . Click the Local Volume Sets tab. Beside the LocalVolumeSet , click Action menu (...) Edit Local Volume Set . In the YAML, add the hostname of the new node in the values field under the node selector . Figure 7.1. YAML showing the addition of new hostnames Click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads Pods , confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 7.2.3. Scaling up storage capacity To scale up storage capacity: For dynamic storage devices, see Scaling up storage by adding capacity . For local storage devices, see Scaling up a cluster created using local storage devices
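After adding capacity on the new nodes, a quick command-line spot check can confirm where the OSDs landed. A minimal sketch, assuming the default openshift-storage namespace and the standard rook-ceph labels:

# Confirm the new nodes carry the storage label.
oc get nodes -l cluster.ocs.openshift.io/openshift-storage=

# List OSD pods with their nodes to verify they are spread across failure domains.
oc get pods -n openshift-storage -l app=rook-ceph-osd -o wide

Each new disk should appear as one additional Running OSD pod. | [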
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm",
"NODE compute-1",
"oc debug node/ <node-name>",
"chroot /host",
"lsblk",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get csr",
"oc adm certificate approve <Certificate_Name>",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/scaling_storage/scaling_storage_of_red_hat_virtualization_openshift_data_foundation_cluster |
Appendix B. Revision History | Appendix B. Revision History Revision History Revision 6-0.2 Mon Feb 18 2013 Martin Prpic Removed incorrect information about rt2800usb / rt2x00 being updated. Revision 1-0 Wed Jun 20 2012 Martin Prpic Release of the Red Hat Enterprise Linux 6.3 Release Notes. Revision 0-0 Tue Apr 24 2012 Martin Prpic Release of the Red Hat Enterprise Linux 6.3 Beta Release Notes. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_release_notes/appe-6.3_release_notes-revision_history |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_ansible_lightspeed_with_ibm_watsonx_code_assistant/2.x_latest/html/red_hat_ansible_lightspeed_with_ibm_watsonx_code_assistant_release_notes/making-open-source-more-inclusive |
10.3. Transformation Editor | 10.3. Transformation Editor 10.3.1. Transformation Editor The Teiid Designer's Transformation Editor enables you to create the query transformations that describe how to derive your virtual metadata information from physical metadata sources or other virtual metadata and how to update the sources. The Transformation Editor provides a robust set of tools that you can use to create SQL queries. You can use the tools, or you can type an SQL query into the Transformation Editor . To edit a transformation: Double-click on: A relational view table or procedure in the Model Explorer or Diagram Editor A transformation node in a transformation diagram or mapping transformation diagram Right-click Edit action on selected object in the Model Explorer , Diagram Editor or Table Editor : A relational view table or procedure A transformation node in a transformation diagram or mapping transformation diagram A mapping class in a mapping diagram or mapping transformation diagram A Model Editor is opened if it is not currently open for the selected object's model. After the corresponding transformation diagram is opened in the Diagram Editor , the Transformation Editor is displayed in the lower section of the Diagram Editor . Figure 10.13. Editing String Property If this virtual class supports updates, the tabs on the bottom of the Transformation Editor allow you to enter SQL for each type of query that virtual class supports. If this virtual class does not support updates, only the SELECT tab is available. You can enter separate SQL queries on each available tab to accommodate that type of query. Within the Transformation Editor, you can: Disable specific update transformation types on this virtual class. Start your transformation with a provided SQL Template. Build or edit a criteria clause to use in your transformation. Build or edit an expression to use in your transformation. Find and replace a string within your transformation. Validate the transformation to ensure its content contains no errors. Reconcile target attributes to ensure the symbols in your transformation match the attributes in your virtual metadata class. You can also set preferences that impact the display of your Transformation Editor . The Transformation Editor toolbar actions are summarized below. Preview Virtual Data - executes a simple preview query for the target table or procedure of the transformation being edited. Search Transformations - provides a simple way select and edit another transformation based SQL text search criteria. Edit Transformation - provides a simple way to change which transformation to edit without searching in a diagram or the Model Explorer. Click the action and select from a list of views, tables, procedures or operations from the currently edited model. Cursor Position (line, column) - shows the current line and column position of the insertion cursor. For example, Cursor Position(1,4) indicates that the cursor is presently located at column 4 of line 1. Supports Update - checkbox allows you to enable or disable updates for the current transformation target. If Supports Update is selected, the editor shows four tabs at the bottom for the Select, Update, Insert and Delete transformations. If Supports Update is cleared, all updates are disabled and only the Select transformation is displayed. Reconcile - allows you to resolve any discrepancies between the transformation symbols and the target attributes. 
Clicking this button will display the Reconcile Virtual Target Attributes dialog box in which you can resolve discrepancies. Save/Validate - saves edits to the current transformation and validates the transformation SQL. Any Warning or Error messages will be displayed at the bottom of the editor in the messages area. If the SQL validates without error, the message area is not displayed. Criteria Builder - allows you to build a criteria clause in your transformation. The button is enabled if the cursor position is within a query that allows a criteria. Pressing the button will launch the Criteria Builder dialog. If the Criteria Builder is launched inside an existing criteria, that criteria will be displayed for edit; otherwise the Criteria Builder will be initially empty. Expression Builder - allows you to build an expression within your transformation. The button is enabled if the cursor position is at a location that allows an expression. Pressing the button will launch the Expression Builder dialog. If the Expression Builder is launched inside an existing expression, that expression will be displayed for edit; otherwise the Expression Builder will be initially empty. Expand Select * - allows you to expand a SELECT * clause into a SELECT clause which contains all of the SELECT symbols. The button is enabled only if the cursor is within a query that contains a SELECT * clause that can be expanded. Increase Font Size - increases the font size of all editor text by 1. Decrease Font Size - decreases the font size of all editor text by 1. Show/Hide Messages - toggles the display of the message area at the bottom of the transformation editor. Optimize SQL - when toggled ON, will use the short names of all SQL symbols that can be optimized. Some symbol names may remain fully qualified in the event of a duplicate name or if the optimizer is unable to optimize it. When the action is toggled OFF, all symbol names will be fully qualified. Import SQL Text - allows you to import an SQL statement from a text file on your file system. Pressing this button will display an import dialog in which you can navigate to the file. Export SQL Text - allows you to export the currently displayed SQL statement into a text file on your file system. Pressing this button will display an export dialog in which you can choose the location for export. Close X - closes the transformation editor. The Transformation Editor context menu can be displayed by clicking the right mouse button within the editor's text area. The context menu is shown below: Figure 10.14. Transformation Editor context menu Following is a summary of the context menu actions: Cut - Copy - Paste - Typical text editor actions to cut, copy or paste text within the editor. Undo - Redo - Allows you to undo or redo the previous action. Find - Displays a Find and Replace Dialog which allows you to search and replace text within the transformation. Apply Template... - Displays the Choose an SQL Template Dialog, which allows you to choose a starting SQL Template from a list of common SQL patterns. See the View Table wizard section for a description of this dialog. Create Function... - Opens the Choose Function Type dialog, where you can create a Source Function or a User Defined Function. 10.3.2. Using the Reconciler The Transformation Editor's Reconciler offers you a quick, graphical means to reconcile the Target View attributes and the Transformation SQL.
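As a hypothetical illustration (the view, model, and column names below are invented for this sketch, not taken from the product documentation), suppose a virtual table CUSTOMER_VIEW defines three target attributes, ID, NAME, and STATUS, while its SELECT transformation currently projects only two symbols:
SELECT c.CUSTKEY, c.CUSTNAME FROM SourceModel.CUSTOMER AS c
In that situation the Reconciler would flag STATUS as an unbound target attribute, and you could bind it to another source symbol, to null, or to a function.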
As you make changes, the overall status will appear at the top of the dialog to assist you in successfully completing your edits. To launch the Reconciler, click the Reconcile Transformation button in the Transformation Editor. Figure 10.15. Reconciler Dialog To summarize the different sections of the dialog: Target Attributes - SQL Symbol Table: This table shows the target attributes in the left column and the SQL Symbols in the right column. The SQL Symbols are the symbols that are projected from the SQL transformation. A symbol is referred to as being bound to a target attribute when it is displayed next to the attribute. If a target attribute is unbound, its row is highlighted in red. The transformation is not valid until all attributes have a corresponding SQL symbol binding. Here are a few things you can do in the table section: Lock Target Attributes: To lock the target attribute ordering, select the Lock Target Attributes checkbox. This will lock the attributes in place. Re-Order Attributes: To change the ordering of the target attributes, use the Top, Up, Swap, Down, and Bottom controls beneath the table. Select single or multiple table rows, then click the desired action button. Delete Attributes: To delete one or more of the target attributes, select the table row(s) that you want to delete and then click the Delete button. Resolve Types: If an Attribute-SQL Symbol binding has a datatype conflict, a message will be displayed. To assist in resolving the datatype conflict, a Datatype Resolver Dialog is provided. Click on the table row, then click the Type Resolver... button to display the dialog. See the Using the Datatype Resolver section for further information. Unmatched SQL Symbols list: This list is to the right of the attribute-symbol binding table, and shows the SQL symbols from the transformation SQL that are not bound to a target table attribute. Here are a few things you can do in the list section: Add SQL Symbols: To add SQL Symbols to the list, click the Add button. You will be presented with a dialog showing all available symbols from your transformation source tables. Click on the symbols that you want to add, then click OK. Remove or Clear Symbols: To remove one or more of the SQL symbols, select the list items, then click the Remove button. To clear the entire SQL symbols list, click the Clear button. Sort Symbols: By default, the symbols are shown in the order that they appear in the SQL query. To show them alphabetically in the list, click the Sort button. Binding Controls: The Binding Controls are located between the Attribute-Symbol table and the Unmatched SQL Symbols list. Use these buttons to define the Attribute-Symbol bindings. Here are a few things you can do with the binding controls: Bind: This button will bind an SQL Symbol to a target attribute. Select an Unmatched SQL symbol and select a target attribute, then click Bind to establish the binding. Unbind: This button will unbind an Attribute-Symbol binding. Select an already bound attribute in the table, then click Unbind. The SQL Symbol will be released to the Unmatched Symbols list. New: This button will create a new target attribute, using an Unmatched SQL Symbol. Select an Unmatched Symbol from the list, then click New. A new target attribute will be added to the bottom of the Attribute-Symbol table, bound to the selected SQL symbol. Null: This button allows you to bind null to a target attribute instead of binding an SQL Symbol to it. Select a row in the Attribute-Symbol table, then click Null.
The target attribute will be bound to null. If it was originally bound to an SQL Symbol, the symbol will be released to the Unmatched Symbols list. Function: This button allows you to define an expression instead of an SQL Symbol for the binding. To define the expression, select a row in the Attribute-Symbol table, then click the Function button. The Expression Builder Dialog will display, allowing you to define any type of expression. See the Using the Expression Builder section for further information about the Expression Builder. SQL Display: The current transformation SQL is shown at the bottom of the Reconciler dialog. As you add/remove SQL symbols and make other changes, you can see the SQL display change to reflect those changes. When you click OK, this SQL will be your new transformation SQL. If desired, the SQL Display can be hidden by clearing the Show SQL Display checkbox. Once you are finished defining the bindings and resolving datatypes, click OK to accept the changes. The transformation SQL will change to reflect your edits. 10.3.3. Using the Datatype Resolver This dialog is accessible from the Reconciler dialog (see the Using the Reconciler section) and offers you a quick way to resolve datatype conflicts between a target attribute and its SQL Symbol. You can resolve the conflicts in the datatype bindings either by converting all source SQL symbol datatypes or by changing all target column datatypes. Figure 10.16. Datatype Resolver Dialog To summarize the different sections of the dialog: Convert all source SQL symbol datatypes: Click this button to apply a CONVERT function to all of the SQL symbols in the table so that their datatypes are compatible with the corresponding attribute datatypes. Change all target column datatypes: If the suggested datatype is not acceptable, click this button to choose your own datatype from the datatype dialog. Source SQL Symbol - Matched Datatype Table: This table shows all SQL Symbol datatype information for the selected binding. Select a table row to populate the lower panel. Selected Binding Info: The lower panel shows the binding information for the selected SQL symbol. Once you are finished resolving datatypes, click OK to accept the changes. You are directed back to the Reconciler Dialog, which will be updated to reflect your edits. 10.3.4. Using the Criteria Builder The Transformation Editor's Criteria Builder offers you a quick, graphical means to build criteria clauses in your transformations based on meta objects in your diagram. If you launch the Criteria Builder with your cursor within an existing criteria in your transformation SQL, the builder will open in edit mode. If your cursor is not in an existing criteria location, the builder will open in create mode and allow you to create the criteria from scratch. This procedure provides an example of building a criteria clause using the Criteria Builder. When building your own criteria, you can mix and match the values and constants with whatever logic you need to build powerful and complex criteria. To use the Criteria Builder: In the Transformation Editor, click the Launch Criteria Builder button. The Criteria Builder displays. Figure 10.17. Editing String Property The two tabs at the top, Tree View and SQL View, show the current contents of the criteria you have built. The Criteria Editor at the bottom allows you to build a criteria clause.
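For illustration, a completed compound criteria clause might look like the following (the table and column names here are hypothetical, not from the product documentation):
WHERE (c.STATE = 'CA' OR c.STATE = 'NY') AND c.BALANCE > 1000
The steps below walk through building this kind of predicate one piece at a time.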
To build a criteria clause, you must add information to the left side of the predicate, select a comparison operator, and add a value to the right side. The radio buttons on either side of the Predicate Editor let you choose what type of content to place in that side of your predicate. Click the radio button of the type of content you want to place in your criteria. You can click: Attribute to add an attribute to the predicate. If you click the Attribute radio button, the Predicate Editor looks like this: Figure 10.18. Attribute Panel From the tree, select the attribute you want to add to the expression. You can select an attribute from any of the source classes in the transformation. Constant to add a hardwired constant value to the predicate. If you click this radio button, the Predicate Editor looks like this: Figure 10.19. Constants Panel Select the datatype for this constant from the Type drop-down list and enter the value in the Value edit box. Function to add a function. Figure 10.20. Functions Click the Edit button to use the Expression Builder to construct a function to use in the predicate of your SQL criterion. Set a value for the left side of the predicate and, when necessary, the right side of the predicate. If the right side of the predicate does not require a value of some sort, the Criteria Builder will not let you enter one. Click Apply. When you have created both a Left Expression and a Right Expression in the Predicate Editor, click Apply to add the criterion to the tree view at the top of the dialog box. The criteria clause displays in the Criteria tree. You can create complex criteria by joining other criteria with this one. To join criteria with this one, select the criteria in the Criteria tree and click: Delete to remove the selected criterion. AND to create a new criterion that must also be true. OR to create a new criterion that can be true instead of the selected criterion. NOT to establish a negative criterion. If you join a criterion to the one you just completed, you build the expression the same way, using the Expression Editors panel and the Predicate Editor panel. You can create complex, nested criteria by judicious use of the AND and OR buttons. Once you have created the complete criteria you want, click OK to add it to your transformation. 10.3.5. Using the Expression Builder The Transformation Editor's Expression Builder offers you a quick, graphical means to build expressions in your transformations. This Expression Builder lets you create: Attributes by selecting an attribute. Constants by selecting the datatype and value. Functions from both the standard Teiid Designer SQL functions and your enterprise's custom user defined functions. If you select a function before you launch the Expression Builder, you can use the Expression Builder to edit the selected function; otherwise, you can create a new function from scratch. Note The functions made available through the Expression Builder are described in the Teiid Reference Guide. To use the Expression Builder: In the Transformation Editor, click the location where you want to insert the function. Click the Expression Builder button. The SQL Expression Builder displays. Figure 10.21. Expression Builder The two tabs at the top, Tree View and SQL View, show the current contents of the expression you have built. To build an expression, you must specify the type of expression you want to build and populate it. In most cases, you will use the Expression Builder to construct a complex expression.
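As a sketch of the kind of expression you can build this way (the column names are hypothetical; CONCAT is a standard Teiid string function):
CONCAT(c.LASTNAME, CONCAT(', ', c.FIRSTNAME))
Here the inner CONCAT call is nested as an argument of the outer one, which is exactly the kind of nesting described in the steps that follow.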
Click the Function radio button to add a function. Note You can simply add constants and attributes as expressions by themselves using the Attribute or Constant radio buttons; however, the Expression Editor is most useful for functions. The Expression Editor displays the Function editor. Figure 10.22. Function Panel Selected From the Category drop-down list, choose the type of function you want to add. By default, the Teiid Designer offers the following categories: Conversion for functions that convert one datatype into another. Datetime for functions that handle date or time information. Miscellaneous for other functions. Numeric for mathematical and other numeric functions. String for string manipulation functions. Note Any additional categories represent those containing user defined functions your site has created. From the Function drop-down list, select the function you want. The table beneath the drop-down lists displays the number of arguments required for this function. Click Apply. Your function displays in the tree at the top. Sub nodes display for each argument you need to set for this function. Figure 10.23. New Blank Function Created You need to set an attribute or constant value for each sub node in the tree to specify the arguments this function needs. You can also nest another function in the tree using the Function editor. Figure 10.24. Nested Function Example Click each sub node in the tree and use the editors at the bottom of the dialog box to apply an attribute, constant, or function value to it. When you have added values to all nodes, click OK to add this expression to your query or Cancel to close the dialog box without inserting the expression. If the OK button is not enabled, you have not added a value to all nodes in the tree. You can also nest functions within your expressions by selecting an argument and selecting a function for that argument. The nested function displays in the tree beneath your root function and its arguments display as well. Using the Expression Builder and nested functions, you can create complex logic within your query transformations. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/sect-transformation_editor
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/using_source-to-image_for_openshift_with_red_hat_build_of_openjdk_8/making-open-source-more-inclusive