1.3. LVS Scheduling Overview
1.3. LVS Scheduling Overview One of the advantages of using LVS is its ability to perform flexible, IP-level load balancing on the real server pool. This flexibility is due to the variety of scheduling algorithms an administrator can choose from when configuring LVS. LVS load balancing is superior to less flexible methods, such as round-robin DNS, where the hierarchical nature of DNS and the caching performed by client machines can lead to load imbalances. Additionally, the low-level filtering employed by the LVS router has advantages over application-level request forwarding because balancing loads at the network packet level causes minimal computational overhead and allows for greater scalability. Using scheduling, the active router can take into account the real servers' activity and, optionally, an administrator-assigned weight factor when routing service requests. Assigned weights give arbitrary priorities to individual machines. Using this form of scheduling, it is possible to create a group of real servers using a variety of hardware and software combinations, and the active router can load each real server evenly. The scheduling mechanism for LVS is provided by a collection of kernel patches called IP Virtual Server or IPVS modules. These modules enable layer 4 (L4) transport-layer switching, which is designed to work well with multiple servers on a single IP address. To track and route packets to the real servers efficiently, IPVS builds an IPVS table in the kernel. The active LVS router uses this table to redirect requests from a virtual server address to real servers in the pool, and to route the responses back. The IPVS table is constantly updated by a utility called ipvsadm, which adds and removes cluster members depending on their availability. 1.3.1. Scheduling Algorithms The structure of the IPVS table depends on the scheduling algorithm that the administrator chooses for any given virtual server. To allow for maximum flexibility in the types of services you can cluster and how these services are scheduled, Red Hat Enterprise Linux provides the following scheduling algorithms. For instructions on how to assign scheduling algorithms, refer to Section 4.6.1, "The VIRTUAL SERVER Subsection" . Round-Robin Scheduling Distributes each request sequentially around the pool of real servers. Using this algorithm, all the real servers are treated as equals without regard to capacity or load. This scheduling model resembles round-robin DNS but is more granular because it is network-connection based rather than host-based. LVS round-robin scheduling also does not suffer the imbalances caused by cached DNS queries. Weighted Round-Robin Scheduling Distributes each request sequentially around the pool of real servers but gives more jobs to servers with greater capacity. Capacity is indicated by a user-assigned weight factor, which is then adjusted upward or downward by dynamic load information. Refer to Section 1.3.2, "Server Weight and Scheduling" for more on weighting real servers. Weighted round-robin scheduling is a preferred choice if there are significant differences in the capacity of real servers in the pool. However, if the request load varies dramatically, the more heavily weighted server may answer more than its share of requests. Least-Connection Distributes more requests to real servers with fewer active connections. Because it keeps track of live connections to the real servers through the IPVS table, least-connection is a dynamic scheduling algorithm, making it a better choice if there is a high degree of variation in the request load. It is best suited for a real server pool where each member node has roughly the same capacity. If a group of servers have different capabilities, weighted least-connection scheduling is a better choice. Weighted Least-Connections (default) Distributes more requests to servers with fewer active connections relative to their capacities. Capacity is indicated by a user-assigned weight, which is then adjusted upward or downward by dynamic load information. The addition of weighting makes this algorithm ideal when the real server pool contains hardware of varying capacity. Refer to Section 1.3.2, "Server Weight and Scheduling" for more on weighting real servers. Locality-Based Least-Connection Scheduling Distributes more requests to servers with fewer active connections relative to their destination IPs. This algorithm is designed for use in a proxy-cache server cluster. It routes packets for a given destination IP address to the server assigned to that address, unless that server is above its capacity and another server is working at only half its load, in which case it assigns the IP address to the least loaded real server. Locality-Based Least-Connection Scheduling with Replication Distributes more requests to servers with fewer active connections relative to their destination IPs. This algorithm is also designed for use in a proxy-cache server cluster. It differs from Locality-Based Least-Connection Scheduling by mapping each target IP address to a subset of real server nodes. Requests are then routed to the server in this subset with the lowest number of connections. If all the nodes for the destination IP are above capacity, it replicates a new server for that destination IP address by adding the real server with the least connections from the overall pool of real servers to the subset of real servers for that destination IP. The most loaded node is then dropped from the real server subset to prevent over-replication. Destination Hash Scheduling Distributes requests to the pool of real servers by looking up the destination IP in a static hash table. This algorithm is designed for use in a proxy-cache server cluster. Source Hash Scheduling Distributes requests to the pool of real servers by looking up the source IP in a static hash table. This algorithm is designed for LVS routers with multiple firewalls.
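To make the weighted scheduling idea concrete, the following short Python sketch models weighted round-robin selection. This is an illustrative model only, not the IPVS kernel implementation, and the server names and weights are hypothetical:

from itertools import cycle

# Hypothetical real server pool: server name -> administrator-assigned weight.
POOL = {"real-server1": 3, "real-server2": 1}

def weighted_round_robin(pool):
    # Expand each server by its weight and cycle through the result, so a
    # server with weight 3 receives three requests for every one request a
    # weight-1 server receives. (IPVS interleaves the selections rather than
    # sending them as a burst, but the proportions are the same.)
    expanded = [name for name, weight in pool.items() for _ in range(weight)]
    return cycle(expanded)

scheduler = weighted_round_robin(POOL)
for _ in range(8):
    print(next(scheduler))

In a real LVS deployment the equivalent choice is made inside the kernel; the administrator only selects the algorithm and assigns the weights, for example through ipvsadm, when defining the virtual server.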
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/virtual_server_administration/s1-lvs-scheduling-vsa
function::user_mode
function::user_mode Name function::user_mode - Determines if probe point occurs in user-mode. Synopsis Arguments None General Syntax user_mode: long Return 1 if the probe point occurred in user-mode.
[ "function user_mode:long()" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-user-mode
Chapter 60. JmxTransOutputDefinitionTemplate schema reference
Chapter 60. JmxTransOutputDefinitionTemplate schema reference Used in: JmxTransSpec Property Property type Description outputType string Template for setting the format of the data that will be pushed. For more information see JmxTrans OutputWriters . host string The DNS name or hostname of the remote host that the data is pushed to. port integer The port of the remote host that the data is pushed to. flushDelayInSeconds integer How many seconds JmxTrans waits before pushing a new set of data out. typeNames string array Template for filtering data to be included in response to a wildcard query. For more information see JmxTrans queries . name string Template for setting the name of the output definition. This is used to identify where the results of queries should be sent.
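To show how these properties combine, here is a small, hypothetical fragment of output definitions as they might appear in the jmxTrans section of a Kafka custom resource; the outputType values name standard JmxTrans OutputWriter classes, and the host, port, and definition names are placeholders:

jmxTrans:
  outputDefinitions:
    - name: standardOut
      outputType: com.googlecode.jmxtrans.model.output.StdOutWriter
    - name: remoteGraphite
      outputType: com.googlecode.jmxtrans.model.output.GraphiteWriter
      host: graphite.example.com
      port: 2003
      flushDelayInSeconds: 5

Queries defined elsewhere in the JmxTransSpec can then reference these definitions by name to control where their results are sent.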
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-jmxtransoutputdefinitiontemplate-reference
Chapter 12. Keyless authentication with robot accounts
Chapter 12. Keyless authentication with robot accounts In previous versions of Red Hat Quay, robot account tokens were valid for the lifetime of the token unless deleted or regenerated. Tokens that do not expire have security implications for users who do not want to store long-term passwords or manage the deletion and regeneration of authentication tokens. With Red Hat Quay 3.13, Red Hat Quay administrators are provided the ability to exchange external OIDC tokens for short-lived, or ephemeral, robot account tokens with either Red Hat Single Sign-On (based on the Keycloak project) or Microsoft Entra ID. This allows robot accounts to leverage tokens that last one hour, which are refreshed regularly and can be used to authenticate individual transactions. This feature greatly enhances the security of your Red Hat Quay registry by mitigating the possibility of robot token exposure, because the tokens expire after one hour. Configuring keyless authentication with robot accounts is a multi-step procedure that requires setting up a robot federation, generating an OAuth2 token from your OIDC provider, and exchanging the OAuth2 token for a robot account access token. 12.1. Generating an OAuth2 token with Red Hat Single Sign-On The following procedure shows you how to generate an OAuth2 token using Red Hat Single Sign-On. Depending on your OIDC provider, these steps will vary. Procedure On the Red Hat Single Sign-On UI: Click Clients and then the name of the application or service that can request authentication of a user. On the Settings page of your client, ensure that the following options are set or enabled: Client ID Valid redirect URI Client authentication Authorization Standard flow Direct access grants Note Settings can differ depending on your setup. On the Credentials page, store the Client Secret for future use. On the Users page, click Add user and enter a username, for example, service-account-quaydev . Then, click Create . Click the name of the user, for example service-account-quaydev , on the Users page. Click the Credentials tab, then Set password , and provide a password for the user. If warranted, you can make this password temporary by selecting the Temporary option. Click the Realm settings tab, then OpenID Endpoint Configuration , and store the /protocol/openid-connect/token endpoint. For example: http://localhost:8080/realms/master/protocol/openid-connect/token On a web browser, navigate to the following URL: http://<keycloak_url>/realms/<realm_name>/protocol/openid-connect/auth?response_type=code&client_id=<client_id> When prompted, log in with the service-account-quaydev user and the temporary password you set. Complete the login by providing the required information and setting a permanent password if necessary. You are redirected to the URI address provided for your client. For example: https://localhost:3000/cb?session_state=5c9bce22-6b85-4654-b716-e9bbb3e755bc&iss=http%3A%2F%2Flocalhost%3A8080%2Frealms%2Fmaster&code=ea5b76eb-47a5-4e5d-8f71-0892178250db.5c9bce22-6b85-4654-b716-e9bbb3e755bc.cdffafbc-20fb-42b9-b254-866017057f43 Take note of the code provided in the address. For example: code=ea5b76eb-47a5-4e5d-8f71-0892178250db.5c9bce22-6b85-4654-b716-e9bbb3e755bc.cdffafbc-20fb-42b9-b254-866017057f43 Note This is a temporary code that can only be used one time. If necessary, you can refresh the page or revisit the URL to obtain another code.
On your terminal, use the following curl -X POST command to generate a temporary OAuth2 access token:

$ curl -X POST "http://localhost:8080/realms/master/protocol/openid-connect/token" \ 1
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "client_id=quaydev" \ 2
  -d "client_secret=g8gPsBLxVrLo2PjmZkYBdKvcB9C7fmBz" \ 3
  -d "grant_type=authorization_code" \
  -d "code=ea5b76eb-47a5-4e5d-8f71-0892178250db.5c9bce22-6b85-4654-b716-e9bbb3e755bc.cdffafbc-20fb-42b9-b254-866017057f43" 4

1 The protocol/openid-connect/token endpoint found on the Realm settings page of the Red Hat Single Sign-On UI. 2 The Client ID used for this procedure. 3 The Client Secret for the Client ID. 4 The code returned from the redirect URI. Example output {"access_token":"eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJTVmExVHZ6eDd2cHVmc1dkZmc1SHdua1ZDcVlOM01DN1N5T016R0QwVGhVIn0...", "expires_in":60,"refresh_expires_in":1800,"refresh_token":"eyJhbGciOiJIUzUxMiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJiNTBlZTVkMS05OTc1LTQwMzUtYjNkNy1lMWQ5ZTJmMjg0MTEifQ.oBDx6B3pUkXQO8m-M3hYE7v-w25ak6y70CQd5J8f5EuldhvTwpWrC1K7yOglvs09dQxtq8ont12rKIoCIi4WXw","token_type":"Bearer","not-before-policy":0,"session_state":"5c9bce22-6b85-4654-b716-e9bbb3e755bc","scope":"profile email"} Store the access_token from the previous step, as it will be exchanged for a Red Hat Quay robot account token in the following procedure. 12.2. Setting up a robot account federation by using the Red Hat Quay v2 UI The following procedure shows you how to set up a robot account federation by using the Red Hat Quay v2 UI. This procedure uses Red Hat Single Sign-On, which is based on the Keycloak project. These steps, and the information used to set up a robot account federation, will vary depending on your OIDC provider. Prerequisites You have created an organization. The following example uses fed_test . You have created a robot account. The following example uses fed_test+robot1 . You have configured an OIDC provider for your Red Hat Quay deployment. The following example uses Red Hat Single Sign-On. Procedure On the Red Hat Single Sign-On main page: Select the appropriate realm that is authenticated for use with Red Hat Quay. Store the issuer URL, for example, https://keycloak-auth-realm.quayadmin.org/realms/quayrealm . Click Users , then the name of the user to be linked with the robot account for authentication. You must use the same user account that you used when generating the OAuth2 access token. On the Details page, store the ID of the user, for example, 449e14f8-9eb5-4d59-a63e-b7a77c75f770 . Note The information collected in this step will vary depending on your OIDC provider. For example, with Red Hat Single Sign-On, the ID of a user is used as the Subject to set up the robot account federation in a subsequent step. For a different OIDC provider, like Microsoft Entra ID, this information is stored as the Subject . On your Red Hat Quay registry: Navigate to Organizations and click the name of your organization, for example, fed_test . Click Robot Accounts . Click the menu kebab, then Set robot federation . Click the + symbol. In the popup window, include the following information: Issuer URL : https://keycloak-auth-realm.quayadmin.org/realms/quayrealm . For Red Hat Single Sign-On, this is the URL of your Red Hat Single Sign-On realm. This might vary depending on your OIDC provider. Subject : 449e14f8-9eb5-4d59-a63e-b7a77c75f770 . For Red Hat Single Sign-On, the Subject is the ID of your Red Hat Single Sign-On user. This varies depending on your OIDC provider.
For example, if you are using Microsoft Entra ID, the Subject will be the Subject of your Entra ID user. Click Save . 12.3. Exchanging an OAuth2 access token for a Red Hat Quay robot account token The following procedure leverages the access token generated in the previous procedure to create a new Red Hat Quay robot account token. The new Red Hat Quay robot account token is used for authentication between your OIDC provider and Red Hat Quay. Note The following example uses a Python script to exchange the OAuth2 access token for a Red Hat Quay robot account token. Prerequisites You have the python3 CLI tool installed. Procedure Save the following Python script in a .py file, for example, robot_fed_token_auth.py :

import requests
import os

TOKEN = os.environ.get('TOKEN')
robot_user = "fed_test+robot1"

def get_quay_robot_token(fed_token):
    URL = "https://<quay-server.example.com>/oauth2/federation/robot/token"
    response = requests.get(URL, auth=(robot_user, fed_token)) 1
    print(response)
    print(response.text)

if __name__ == "__main__":
    get_quay_robot_token(TOKEN)

1 If your Red Hat Quay deployment is using custom SSL/TLS certificates, the response line must be response = requests.get(URL, auth=(robot_user, fed_token), verify=False) , which includes the verify=False flag. Export the OAuth2 access token as TOKEN . For example: $ export TOKEN=eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJTVmExVHZ6eDd2cHVmc1dkZmc1SHdua1ZDcVlOM01DN1N5T016R0QwVGhVIn0... Run the robot_fed_token_auth.py script by entering the following command: $ python3 robot_fed_token_auth.py Example output <Response [200]> {"token": "291cmNlX2FjY2VzcyI6eyJhY2NvdW50Ijp7InJvbGVzIjpbIm1hbmFnZS1hY2NvdW50IiwibWFuYWdlLWFjY291bnQtbGlua3MiLCJ2aWV3LXByb2ZpbGUiXX19LCJzY29wZSI6InByb2ZpbGUgZW1haWwiLCJlbWFpbF92ZXJpZ..."} Important This token expires after one hour. After one hour, a new token must be generated. Export the robot account access token as QUAY_TOKEN . For example: $ export QUAY_TOKEN=291cmNlX2FjY2VzcyI6eyJhY2NvdW50Ijp7InJvbGVzIjpbIm1hbmFnZS1hY2NvdW50IiwibWFuYWdlLWFjY291bnQtbGlua3MiLCJ2aWV3LXByb2ZpbGUiXX19LCJzY29wZSI6InByb2ZpbGUgZW1haWwiLCJlbWFpbF92ZXJpZ 12.4. Pushing and pulling images After you have generated a new robot account access token and exported it, you can log in as the robot account using the access token, and then push and pull images. Prerequisites You have exchanged the OAuth2 access token for a new robot account access token. Procedure Log in to your Red Hat Quay registry using the fed_test+robot1 robot account and the QUAY_TOKEN access token. For example: $ podman login <quay-server.example.com> -u fed_test+robot1 -p $QUAY_TOKEN Pull an image from a Red Hat Quay repository for which the robot account has the proper permissions. For example: $ podman pull <quay-server.example.com/<repository_name>/<image_name>> Example output Getting image source signatures Copying blob 900e6061671b done Copying config 8135583d97 done Writing manifest to image destination Storing signatures 8135583d97feb82398909c9c97607159e6db2c4ca2c885c0b8f590ee0f9fe90d 0.57user 0.11system 0:00.99elapsed 68%CPU (0avgtext+0avgdata 78716maxresident)k 800inputs+15424outputs (18major+6528minor)pagefaults 0swaps Attempt to pull an image from a Red Hat Quay repository for which the robot account does not have the proper permissions.
For example: $ podman pull <quay-server.example.com/<different_repository_name>/<image_name>> Example output Error: initializing source docker://quay-server.example.com/example_repository/busybox:latest: reading manifest in quay-server.example.com/example_repository/busybox: unauthorized: access to the requested resource is not authorized After one hour, the credentials for this robot account expire. Afterwards, you must generate a new access token for this robot account.
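The two exchanges above can also be combined into a single helper. The following minimal Python sketch reuses the endpoints and example names from this chapter; the client secret and authorization code are placeholders that you must supply from your own Red Hat Single Sign-On session, and the Quay hostname is this chapter's example server:

import requests

KEYCLOAK_TOKEN_URL = "http://localhost:8080/realms/master/protocol/openid-connect/token"
QUAY_FEDERATION_URL = "https://quay-server.example.com/oauth2/federation/robot/token"

def get_oidc_access_token(client_id, client_secret, code):
    # Exchange the one-time authorization code for an OAuth2 access token
    # (the same request as the curl example in Section 12.1).
    response = requests.post(KEYCLOAK_TOKEN_URL, data={
        "client_id": client_id,
        "client_secret": client_secret,
        "grant_type": "authorization_code",
        "code": code,
    })
    response.raise_for_status()
    return response.json()["access_token"]

def get_robot_token(robot_user, access_token):
    # Exchange the OIDC access token for a one-hour robot account token
    # (the same request as the script in Section 12.3).
    response = requests.get(QUAY_FEDERATION_URL, auth=(robot_user, access_token))
    response.raise_for_status()
    return response.json()["token"]

if __name__ == "__main__":
    oidc_token = get_oidc_access_token("quaydev", "<client_secret>", "<authorization_code>")
    print(get_robot_token("fed_test+robot1", oidc_token))

Remember that the resulting robot token expires after one hour, so long-running automation must repeat this exchange rather than cache the token.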
[ "http://localhost:8080/realms/master/protocol/openid-connect/token", "http://<keycloak_url>/realms/<realm_name>/protocol/openid-connect/auth?response_type=code&client_id=<client_id>", "https://localhost:3000/cb?session_state=5c9bce22-6b85-4654-b716-e9bbb3e755bc&iss=http%3A%2F%2Flocalhost%3A8080%2Frealms%2Fmaster&code=ea5b76eb-47a5-4e5d-8f71-0892178250db.5c9bce22-6b85-4654-b716-e9bbb3e755bc.cdffafbc-20fb-42b9-b254-866017057f43", "code=ea5b76eb-47a5-4e5d-8f71-0892178250db.5c9bce22-6b85-4654-b716-e9bbb3e755bc.cdffafbc-20fb-42b9-b254-866017057f43", "curl -X POST \"http://localhost:8080/realms/master/protocol/openid-connect/token\" 1 -H \"Content-Type: application/x-www-form-urlencoded\" -d \"client_id=quaydev\" 2 -d \"client_secret=g8gPsBLxVrLo2PjmZkYBdKvcB9C7fmBz\" 3 -d \"grant_type=authorization_code\" -d \"code=ea5b76eb-47a5-4e5d-8f71-0892178250db.5c9bce22-6b85-4654-b716-e9bbb3e755bc.cdffafbc-20fb-42b9-b254-866017057f43\" 4", "{\"access_token\":\"eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJTVmExVHZ6eDd2cHVmc1dkZmc1SHdua1ZDcVlOM01DN1N5T016R0QwVGhVIn0...\", \"expires_in\":60,\"refresh_expires_in\":1800,\"refresh_token\":\"eyJhbGciOiJIUzUxMiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJiNTBlZTVkMS05OTc1LTQwMzUtYjNkNy1lMWQ5ZTJmMjg0MTEifQ.oBDx6B3pUkXQO8m-M3hYE7v-w25ak6y70CQd5J8f5EuldhvTwpWrC1K7yOglvs09dQxtq8ont12rKIoCIi4WXw\",\"token_type\":\"Bearer\",\"not-before-policy\":0,\"session_state\":\"5c9bce22-6b85-4654-b716-e9bbb3e755bc\",\"scope\":\"profile email\"}", "import requests import os TOKEN=os.environ.get('TOKEN') robot_user = \"fed-test+robot1\" def get_quay_robot_token(fed_token): URL = \"https://<quay-server.example.com>/oauth2/federation/robot/token\" response = requests.get(URL, auth=(robot_user,fed_token)) 1 print(response) print(response.text) if __name__ == \"__main__\": get_quay_robot_token(TOKEN)", "export TOKEN = eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJTVmExVHZ6eDd2cHVmc1dkZmc1SHdua1ZDcVlOM01DN1N5T016R0QwVGhVIn0", "python3 robot_fed_token_auth.py", "<Response [200]> {\"token\": \"291cmNlX2FjY2VzcyI6eyJhY2NvdW50Ijp7InJvbGVzIjpbIm1hbmFnZS1hY2NvdW50IiwibWFuYWdlLWFjY291bnQtbGlua3MiLCJ2aWV3LXByb2ZpbGUiXX19LCJzY29wZSI6InByb2ZpbGUgZW1haWwiLCJlbWFpbF92ZXJpZ...\"}", "export QUAY_TOKEN=291cmNlX2FjY2VzcyI6eyJhY2NvdW50Ijp7InJvbGVzIjpbIm1hbmFnZS1hY2NvdW50IiwibWFuYWdlLWFjY291bnQtbGlua3MiLCJ2aWV3LXByb2ZpbGUiXX19LCJzY29wZSI6InByb2ZpbGUgZW1haWwiLCJlbWFpbF92ZXJpZ", "podman login <quay-server.example.com> -u fed_test+robot1 -p USDQUAY_TOKEN", "podman pull <quay-server.example.com/<repository_name>/<image_name>>", "Getting image source signatures Copying blob 900e6061671b done Copying config 8135583d97 done Writing manifest to image destination Storing signatures 8135583d97feb82398909c9c97607159e6db2c4ca2c885c0b8f590ee0f9fe90d 0.57user 0.11system 0:00.99elapsed 68%CPU (0avgtext+0avgdata 78716maxresident)k 800inputs+15424outputs (18major+6528minor)pagefaults 0swaps", "podman pull <quay-server.example.com/<different_repository_name>/<image_name>>", "Error: initializing source docker://quay-server.example.com/example_repository/busybox:latest: reading manifest in quay-server.example.com/example_repository/busybox: unauthorized: access to the requested resource is not authorized" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/manage_red_hat_quay/keyless-authentication-robot-accounts
Chapter 1. Ansible plug-ins for Red Hat Developer Hub
Chapter 1. Ansible plug-ins for Red Hat Developer Hub 1.1. Red Hat Developer Hub Red Hat Developer Hub (RHDH) serves as an open developer platform designed for building developer portals. 1.2. Ansible plug-ins for Red Hat Developer Hub Ansible plug-ins for Red Hat Developer Hub deliver an Ansible-first Red Hat Developer Hub user experience that simplifies the automation experience for Ansible users of all skill levels. The Ansible plug-ins provide curated content and features to accelerate Ansible learner onboarding and streamline Ansible use case adoption across your organization. The Ansible plug-ins provide: A customized home page and navigation tailored to Ansible users. Curated Ansible learning paths to help users new to Ansible. Software templates for creating Ansible playbook and collection projects that follow best practices. Links to supported development environments and tools with opinionated configurations. 1.3. Architecture
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/installing_ansible_plug-ins_for_red_hat_developer_hub/rhdh-intro_aap-plugin-rhdh-installing
Chapter 2. access
Chapter 2. access This chapter describes the commands under the access command. 2.1. access rule delete Delete access rule(s) Usage: Table 2.1. Positional arguments Value Summary <access-rule> Access rule(s) to delete (name or id) Table 2.2. Command arguments Value Summary -h, --help Show this help message and exit 2.2. access rule list List access rules Usage: Table 2.3. Command arguments Value Summary -h, --help Show this help message and exit --user <user> User whose access rules to list (name or id) --user-domain <user-domain> Domain the user belongs to (name or id). This can be used in case collisions between user names exist. Table 2.4. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 2.5. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 2.6. JSON formatter options Value Summary --noindent Whether to disable indenting the JSON Table 2.7. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 2.3. access rule show Display access rule details Usage: Table 2.8. Positional arguments Value Summary <access-rule> Access rule to display (name or id) Table 2.9. Command arguments Value Summary -h, --help Show this help message and exit Table 2.10. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 2.11. JSON formatter options Value Summary --noindent Whether to disable indenting the JSON Table 2.12. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 2.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 2.4. access token create Create an access token Usage: Table 2.14. Command arguments Value Summary -h, --help Show this help message and exit --consumer-key <consumer-key> Consumer key (required) --consumer-secret <consumer-secret> Consumer secret (required) --request-key <request-key> Request token to exchange for access token (required) --request-secret <request-secret> Secret associated with <request-key> (required) --verifier <verifier> Verifier associated with <request-key> (required) Table 2.15.
Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 2.16. JSON formatter options Value Summary --noindent Whether to disable indenting the JSON Table 2.17. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 2.18. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
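As a combined illustration of the options documented above (the user name alice is hypothetical), the access rules for one user can be listed as unindented JSON, and a single rule can be displayed in shell format with a variable prefix:

openstack access rule list --user alice --user-domain Default -f json --noindent
openstack access rule show -f shell --prefix rule_ <access-rule>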
[ "openstack access rule delete [-h] <access-rule> [<access-rule> ...]", "openstack access rule list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--user <user>] [--user-domain <user-domain>]", "openstack access rule show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <access-rule>", "openstack access token create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --consumer-key <consumer-key> --consumer-secret <consumer-secret> --request-key <request-key> --request-secret <request-secret> --verifier <verifier>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/command_line_interface_reference/access
Chapter 65. Kubernetes Deployments
Chapter 65. Kubernetes Deployments Since Camel 2.20 Both producer and consumer are supported The Kubernetes Deployments component is one of the Kubernetes components; it provides a producer to execute Kubernetes Deployments operations and a consumer to consume events related to Deployments objects. 65.1. Dependencies When using kubernetes-deployments with Red Hat build of Apache Camel for Spring Boot, use the following Maven dependency to have support for auto configuration:

<dependency>
    <groupId>org.apache.camel.springboot</groupId>
    <artifactId>camel-kubernetes-starter</artifactId>
</dependency>

65.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 65.2.1. Configuring Component Options The component level is the highest level, holding the general and common configurations that are inherited by the endpoints. For example, a component may have security settings, credentials for authentication, URLs for network connections, and so forth. Some components only have a few options, and others may have many. Because components typically have pre-configured defaults that are commonly used, you may often only need to configure a few options on a component, or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 65.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allow you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type-safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allow you to avoid hardcoding URLs, port numbers, sensitive information, and other settings. In other words, placeholders allow you to externalize the configuration from your code, and give more flexibility and reuse. The following two sections list all the options, first for the component and then for the endpoint. 65.3. Component Options The Kubernetes Deployments component supports 4 options, which are listed below. Name Description Default Type kubernetesClient (common) Autowired To use an existing kubernetes client. KubernetesClient bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.
false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 65.4. Endpoint Options The Kubernetes Deployments endpoint is configured using URI syntax: kubernetes-deployments:masterUrl with the following path and query parameters: 65.4.1. Path Parameters (1 parameter) Name Description Default Type masterUrl (common) Required Kubernetes Master url. String 65.4.2. Query Parameters (33 parameters) Name Description Default Type apiVersion (common) The Kubernetes API Version to use. String dnsDomain (common) The dns domain, used for ServiceCall EIP. String kubernetesClient (common) Default KubernetesClient to use if provided. KubernetesClient namespace (common) The namespace. String portName (common) The port name, used for ServiceCall EIP. String portProtocol (common) The port protocol, used for ServiceCall EIP. tcp String crdGroup (consumer) The Consumer CRD Resource Group we would like to watch. String crdName (consumer) The Consumer CRD Resource name we would like to watch. String crdPlural (consumer) The Consumer CRD Resource Plural we would like to watch. String crdScope (consumer) The Consumer CRD Resource Scope we would like to watch. String crdVersion (consumer) The Consumer CRD Resource Version we would like to watch. String labelKey (consumer) The Consumer Label key when watching at some resources. String labelValue (consumer) The Consumer Label value when watching at some resources. String poolSize (consumer) The Consumer pool size. 1 int resourceName (consumer) The Consumer Resource Name we would like to watch. String bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut ExchangePattern operation (producer) Producer operation to do on Kubernetes. String lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing.
false boolean connectionTimeout (advanced) Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer caCertData (security) The CA Cert Data. String caCertFile (security) The CA Cert File. String clientCertData (security) The Client Cert Data. String clientCertFile (security) The Client Cert File. String clientKeyAlgo (security) The Key Algorithm used by the client. String clientKeyData (security) The Client Key data. String clientKeyFile (security) The Client Key file. String clientKeyPassphrase (security) The Client Key Passphrase. String oauthToken (security) The Auth Token. String password (security) Password to connect to Kubernetes. String trustCerts (security) Define if the certs we used are trusted anyway or not. Boolean username (security) Username to connect to Kubernetes. String 65.5. Message Headers The Kubernetes Deployments component supports 8 message header(s), which is/are listed below: Name Description Default Type CamelKubernetesOperation (producer) Constant: KUBERNETES_OPERATION The Producer operation. String CamelKubernetesNamespaceName (producer) Constant: KUBERNETES_NAMESPACE_NAME The namespace name. String CamelKubernetesDeploymentsLabels (producer) Constant: KUBERNETES_DEPLOYMENTS_LABELS The deployment labels. Map CamelKubernetesDeploymentName (producer) Constant: KUBERNETES_DEPLOYMENT_NAME The deployment name. String CamelKubernetesDeploymentSpec (producer) Constant: KUBERNETES_DEPLOYMENT_SPEC The spec for a deployment. DeploymentSpec CamelKubernetesDeploymentReplicas (producer) Constant: KUBERNETES_DEPLOYMENT_REPLICAS The desired instance count. Integer CamelKubernetesEventAction (consumer) Constant: KUBERNETES_EVENT_ACTION Action watched by the consumer. Enum values: ADDED MODIFIED DELETED ERROR BOOKMARK Action CamelKubernetesEventTimestamp (consumer) Constant: KUBERNETES_EVENT_TIMESTAMP Timestamp of the action watched by the consumer. long 65.6. Supported producer operations listDeployments listDeploymentsByLabels getDeployment createDeployment updateDeployment deleteDeployment scaleDeployment 65.7. Kubernetes Deployments Producer Examples listDeployments: this operation lists the deployments on a kubernetes cluster.

from("direct:list")
    .toF("kubernetes-deployments:///?kubernetesClient=#kubernetesClient&operation=listDeployments")
    .to("mock:result");

This operation returns a List of Deployments from your cluster. listDeploymentsByLabels: this operation lists the deployments matching a set of labels on a kubernetes cluster.

from("direct:listByLabels").process(new Processor() {
    @Override
    public void process(Exchange exchange) throws Exception {
        Map<String, String> labels = new HashMap<>();
        labels.put("key1", "value1");
        labels.put("key2", "value2");
        exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_DEPLOYMENTS_LABELS, labels);
    }
}).toF("kubernetes-deployments:///?kubernetesClient=#kubernetesClient&operation=listDeploymentsByLabels")
    .to("mock:result");

This operation returns a List of Deployments from your cluster, using a label selector (keys key1 and key2 with values value1 and value2). 65.7.1.
Kubernetes Deployments Consumer Example

fromF("kubernetes-deployments://%s?oauthToken=%s&namespace=default&resourceName=test", host, authToken)
    .process(new KubernetesProcessor())
    .to("mock:result");

public class KubernetesProcessor implements Processor {
    @Override
    public void process(Exchange exchange) throws Exception {
        Message in = exchange.getIn();
        Deployment dp = exchange.getIn().getBody(Deployment.class);
        log.info("Got event with deployment name: " + dp.getMetadata().getName()
                + " and action " + in.getHeader(KubernetesConstants.KUBERNETES_EVENT_ACTION));
    }
}

This consumer returns a list of events on the namespace default for the deployment test. 65.8. Spring Boot Auto-Configuration The component supports 102 options, which are listed below. Name Description Default Type camel.cluster.kubernetes.attributes Custom service attributes. Map camel.cluster.kubernetes.cluster-labels Set the labels used to identify the pods composing the cluster. Map camel.cluster.kubernetes.config-map-name Set the name of the ConfigMap used to do optimistic locking (defaults to 'leaders'). String camel.cluster.kubernetes.connection-timeout-millis Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer camel.cluster.kubernetes.enabled Sets if the Kubernetes cluster service should be enabled or not, default is false. false Boolean camel.cluster.kubernetes.id Cluster Service ID. String camel.cluster.kubernetes.jitter-factor A jitter factor to apply in order to prevent all pods to call Kubernetes APIs in the same instant. Double camel.cluster.kubernetes.kubernetes-namespace Set the name of the Kubernetes namespace containing the pods and the configmap (autodetected by default). String camel.cluster.kubernetes.lease-duration-millis The default duration of the lease for the current leader. Long camel.cluster.kubernetes.master-url Set the URL of the Kubernetes master (read from Kubernetes client properties by default). String camel.cluster.kubernetes.order Service lookup order/priority. Integer camel.cluster.kubernetes.pod-name Set the name of the current pod (autodetected from container host name by default). String camel.cluster.kubernetes.renew-deadline-millis The deadline after which the leader must stop its services because it may have lost the leadership. Long camel.cluster.kubernetes.retry-period-millis The time between two subsequent attempts to check and acquire the leadership. It is randomized using the jitter factor. Long camel.component.kubernetes-config-maps.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-config-maps.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-config-maps.enabled Whether to enable auto configuration of the kubernetes-config-maps component.
This is enabled by default. Boolean camel.component.kubernetes-config-maps.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-config-maps.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-custom-resources.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-custom-resources.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-custom-resources.enabled Whether to enable auto configuration of the kubernetes-custom-resources component. This is enabled by default. Boolean camel.component.kubernetes-custom-resources.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-custom-resources.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-deployments.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-deployments.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-deployments.enabled Whether to enable auto configuration of the kubernetes-deployments component. This is enabled by default. Boolean camel.component.kubernetes-deployments.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-deployments.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-events.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-events.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-events.enabled Whether to enable auto configuration of the kubernetes-events component. This is enabled by default. Boolean camel.component.kubernetes-events.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-events.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-hpa.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true Boolean camel.component.kubernetes-hpa.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-hpa.enabled Whether to enable auto configuration of the kubernetes-hpa component. This is enabled by default. Boolean camel.component.kubernetes-hpa.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-hpa.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-job.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-job.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-job.enabled Whether to enable auto configuration of the kubernetes-job component. This is enabled by default. Boolean camel.component.kubernetes-job.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-job.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-namespaces.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up the registry to find a single instance of the matching type, which is then configured on the component. This can be used to automatically configure JDBC data sources, JMS connection factories, AWS clients, and so on. Default: true. Type: Boolean.

The remaining kubernetes-* and openshift-* components expose the same set of options with identical descriptions, so the shared options are described once here:

autowired-enabled: Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up the registry to find a single instance of the matching type, which is then configured on the component. This can be used to automatically configure JDBC data sources, JMS connection factories, AWS clients, and so on. Default: true. Type: Boolean.

bridge-error-handler: Allows for bridging the consumer to the Camel routing Error Handler. Any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, are then processed as a message and handled by the routing Error Handler. By default, the consumer uses the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which are logged at WARN or ERROR level and ignored. Default: false. Type: Boolean.

enabled: Whether to enable auto configuration of the component. This is enabled by default. Type: Boolean.

kubernetes-client: To use an existing Kubernetes client. The option is an io.fabric8.kubernetes.client.KubernetesClient type. Type: KubernetesClient.

lazy-start-producer: Whether the producer should be started lazily (on the first message). Starting lazily allows CamelContext and routes to start up in situations where a producer would otherwise fail during startup and prevent the route from starting. By deferring the startup, the failure can instead be handled during message routing by Camel's routing error handlers. Be aware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time. Default: false. Type: Boolean.

These options are available on the following components:

camel.component.kubernetes-namespaces: autowired-enabled, bridge-error-handler, enabled, kubernetes-client, lazy-start-producer
camel.component.kubernetes-nodes: autowired-enabled, bridge-error-handler, enabled, kubernetes-client, lazy-start-producer
camel.component.kubernetes-persistent-volumes-claims: autowired-enabled, enabled, kubernetes-client, lazy-start-producer
camel.component.kubernetes-persistent-volumes: autowired-enabled, enabled, kubernetes-client, lazy-start-producer
camel.component.kubernetes-pods: autowired-enabled, bridge-error-handler, enabled, kubernetes-client, lazy-start-producer
camel.component.kubernetes-replication-controllers: autowired-enabled, bridge-error-handler, enabled, kubernetes-client, lazy-start-producer
camel.component.kubernetes-resources-quota: autowired-enabled, enabled, kubernetes-client, lazy-start-producer
camel.component.kubernetes-secrets: autowired-enabled, enabled, kubernetes-client, lazy-start-producer
camel.component.kubernetes-service-accounts: autowired-enabled, enabled, kubernetes-client, lazy-start-producer
camel.component.kubernetes-services: autowired-enabled, bridge-error-handler, enabled, kubernetes-client, lazy-start-producer
camel.component.openshift-build-configs: autowired-enabled, enabled, kubernetes-client, lazy-start-producer
camel.component.openshift-builds: autowired-enabled, enabled, kubernetes-client, lazy-start-producer
camel.component.openshift-deploymentconfigs: autowired-enabled, bridge-error-handler, enabled, kubernetes-client, lazy-start-producer
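Because these are standard Spring Boot configuration properties, they can be set in application.properties or overridden at launch time on the command line. The following is a minimal sketch only; the jar name and the chosen component and values are illustrative assumptions, not values taken from this reference:

# Hypothetical launch: Spring Boot maps --key=value arguments onto configuration
# properties, so component options can be overridden without rebuilding the application.
java -jar target/my-camel-app.jar \
  --camel.component.kubernetes-pods.lazy-start-producer=true \
  --camel.component.kubernetes-pods.bridge-error-handler=true

Deferring producer startup this way trades a slightly longer first exchange for a route that can still start when the target cluster is temporarily unreachable.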
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency>", "kubernetes-deployments:masterUrl", "from(\"direct:list\"). toF(\"kubernetes-deployments:///?kubernetesClient=#kubernetesClient&operation=listDeployments\"). to(\"mock:result\");", "from(\"direct:listByLabels\").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put(\"key1\", \"value1\"); labels.put(\"key2\", \"value2\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_DEPLOYMENTS_LABELS, labels); } }). toF(\"kubernetes-deployments:///?kubernetesClient=#kubernetesClient&operation=listDeploymentsByLabels\"). to(\"mock:result\");", "fromF(\"kubernetes-deployments://%s?oauthToken=%s&namespace=default&resourceName=test\", host, authToken).process(new KubernetesProcessor()).to(\"mock:result\"); public class KubernetesProcessor implements Processor { @Override public void process(Exchange exchange) throws Exception { Message in = exchange.getIn(); Deployment dp = exchange.getIn().getBody(Deployment.class); log.info(\"Got event with deployment name: \" + dp.getMetadata().getName() + \" and action \" + in.getHeader(KubernetesConstants.KUBERNETES_EVENT_ACTION)); } }" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-kubernetes-deployments-component-starter
Chapter 1. Installation methods
Chapter 1. Installation methods

You can install an OpenShift Container Platform cluster on vSphere using a variety of installation methods. Each method has qualities that can make it more suitable for different use cases, such as installing a cluster in a disconnected environment or installing a cluster with minimal configuration and provisioning.

1.1. Assisted Installer

You can install OpenShift Container Platform with the Assisted Installer. This method requires no setup for the installer and is ideal for connected environments like vSphere. Installing with the Assisted Installer also provides integration with vSphere, enabling autoscaling. See Installing an on-premise cluster using the Assisted Installer for additional details.

1.2. Agent-based Installer

You can install an OpenShift Container Platform cluster on vSphere using the Agent-based Installer. The Agent-based Installer can be used to boot an on-premises server in a disconnected environment by using a bootable image. With the Agent-based Installer, users also have the flexibility to provision infrastructure, customize network configurations, and customize installations within a disconnected environment. See Preparing to install with the Agent-based Installer for additional details.

1.3. Installer-provisioned infrastructure installation

You can install OpenShift Container Platform on vSphere by using installer-provisioned infrastructure. Installer-provisioned infrastructure allows the installation program to preconfigure and automate the provisioning of resources required by OpenShift Container Platform. Installer-provisioned infrastructure is useful for installing in environments with disconnected networks, where the installation program provisions the underlying infrastructure for the cluster; a minimal command sketch of this flow appears at the end of this chapter.

Installing a cluster on vSphere: You can install OpenShift Container Platform on vSphere by using installer-provisioned infrastructure installation with no customization.

Installing a cluster on vSphere with customizations: You can install OpenShift Container Platform on vSphere by using installer-provisioned infrastructure installation with the default customization options.

Installing a cluster on vSphere with network customizations: You can install OpenShift Container Platform on installer-provisioned vSphere infrastructure, with network customizations. You can customize your OpenShift Container Platform network configuration during installation, so that your cluster can coexist with your existing IP address allocations and adhere to your network requirements.

Installing a cluster on vSphere in a restricted network: You can install a cluster on VMware vSphere infrastructure in a restricted network by creating an internal mirror of the installation release content. You can use this method to deploy OpenShift Container Platform on an internal network that is not visible to the internet.

1.4. User-provisioned infrastructure installation

You can install OpenShift Container Platform on vSphere by using user-provisioned infrastructure. User-provisioned infrastructure requires the user to provision all resources required by OpenShift Container Platform. If you do not use infrastructure that the installation program provisions, you must manage and maintain the cluster resources yourself.

Installing a cluster on vSphere with user-provisioned infrastructure: You can install OpenShift Container Platform on VMware vSphere infrastructure that you provision.

Installing a cluster on vSphere with network customizations with user-provisioned infrastructure: You can install OpenShift Container Platform on VMware vSphere infrastructure that you provision, with customized network configuration options.

Installing a cluster on vSphere in a restricted network with user-provisioned infrastructure: You can install OpenShift Container Platform on VMware vSphere infrastructure that you provision in a restricted network.

Important: The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the vSphere platform and the installation process of OpenShift Container Platform. Use the user-provisioned infrastructure installation instructions as a guide; you are free to create the required resources through other methods.

1.5. Additional resources

Installation process
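The installer-provisioned flow referenced in section 1.3 reduces to two openshift-install invocations. This is a minimal sketch under the assumption of an interactive install; the asset directory name is illustrative:

# Hypothetical sketch: generate an install-config.yaml from interactive prompts,
# then let the installer provision the vSphere infrastructure and deploy the cluster.
openshift-install create install-config --dir=./vsphere-cluster
openshift-install create cluster --dir=./vsphere-cluster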
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_vsphere/preparing-to-install-on-vsphere
Chapter 30. ec2
Chapter 30. ec2

This chapter describes the commands under the ec2 command.

30.1. ec2 credentials create

Create EC2 credentials.

Usage:

Table 30.1. Command arguments
-h, --help: Show this help message and exit
--project <project>: Create credentials in project (name or ID; default: current authenticated project)
--user <user>: Create credentials for user (name or ID; default: current authenticated user)
--user-domain <user-domain>: Domain the user belongs to (name or ID). This can be used in case collisions between user names exist.
--project-domain <project-domain>: Domain the project belongs to (name or ID). This can be used in case collisions between project names exist.

Table 30.2. Output formatter options
-f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml}: The output format; defaults to table
-c COLUMN, --column COLUMN: Specify the column(s) to include; can be repeated to show multiple columns

Table 30.3. JSON formatter options
--noindent: Whether to disable indenting the JSON

Table 30.4. Shell formatter options
--prefix PREFIX: Add a prefix to all variable names

Table 30.5. Table formatter options
--max-width <integer>: Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence.
--fit-width: Fit the table to the display width. Implied if --max-width is greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable.
--print-empty: Print an empty table if there is no data to show.

30.2. ec2 credentials delete

Delete EC2 credentials.

Usage:

Table 30.6. Positional arguments
<access-key>: Credentials access key(s)

Table 30.7. Command arguments
-h, --help: Show this help message and exit
--user <user>: Delete credentials for user (name or ID)
--user-domain <user-domain>: Domain the user belongs to (name or ID). This can be used in case collisions between user names exist.

30.3. ec2 credentials list

List EC2 credentials.

Usage:

Table 30.8. Command arguments
-h, --help: Show this help message and exit
--user <user>: Filter list by user (name or ID)
--user-domain <user-domain>: Domain the user belongs to (name or ID). This can be used in case collisions between user names exist.

Table 30.9. Output formatter options
-f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml}: The output format; defaults to table
-c COLUMN, --column COLUMN: Specify the column(s) to include; can be repeated to show multiple columns
--sort-column SORT_COLUMN: Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored); can be repeated
--sort-ascending: Sort the column(s) in ascending order
--sort-descending: Sort the column(s) in descending order

Table 30.10. CSV formatter options
--quote {all,minimal,none,nonnumeric}: When to include quotes; defaults to nonnumeric

Table 30.11. JSON formatter options
--noindent: Whether to disable indenting the JSON

Table 30.12. Table formatter options
--max-width <integer>: Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence.
--fit-width: Fit the table to the display width. Implied if --max-width is greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable.
--print-empty: Print an empty table if there is no data to show.

30.4. ec2 credentials show

Display EC2 credentials details.

Usage:

Table 30.13. Positional arguments
<access-key>: Credentials access key

Table 30.14. Command arguments
-h, --help: Show this help message and exit
--user <user>: Show credentials for user (name or ID)
--user-domain <user-domain>: Domain the user belongs to (name or ID). This can be used in case collisions between user names exist.

Table 30.15. Output formatter options
-f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml}: The output format; defaults to table
-c COLUMN, --column COLUMN: Specify the column(s) to include; can be repeated to show multiple columns

Table 30.16. JSON formatter options
--noindent: Whether to disable indenting the JSON

Table 30.17. Shell formatter options
--prefix PREFIX: Add a prefix to all variable names

Table 30.18. Table formatter options
--max-width <integer>: Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence.
--fit-width: Fit the table to the display width. Implied if --max-width is greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable.
--print-empty: Print an empty table if there is no data to show.
[ "openstack ec2 credentials create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--project <project>] [--user <user>] [--user-domain <user-domain>] [--project-domain <project-domain>]", "openstack ec2 credentials delete [-h] [--user <user>] [--user-domain <user-domain>] <access-key> [<access-key> ...]", "openstack ec2 credentials list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--user <user>] [--user-domain <user-domain>]", "openstack ec2 credentials show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--user <user>] [--user-domain <user-domain>] <access-key>" ]
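As a usage illustration of the arguments above, the following invocations are a sketch; the user and project names are hypothetical, not part of this reference:

# Hypothetical example: create EC2 credentials for user "alice" in project "dev",
# then list that user's credentials in JSON format.
openstack ec2 credentials create --user alice --project dev
openstack ec2 credentials list --user alice -f json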
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/command_line_interface_reference/ec2
Security and compliance
Security and compliance OpenShift Container Platform 4.15 Learning about and managing security for OpenShift Container Platform Red Hat OpenShift Documentation Team
[ "variant: openshift version: 4.15.0 metadata: name: 51-worker-rh-registry-trust labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/containers/policy.json mode: 0644 overwrite: true contents: inline: | { \"default\": [ { \"type\": \"insecureAcceptAnything\" } ], \"transports\": { \"docker\": { \"registry.access.redhat.com\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ], \"registry.redhat.io\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ] }, \"docker-daemon\": { \"\": [ { \"type\": \"insecureAcceptAnything\" } ] } } }", "butane 51-worker-rh-registry-trust.bu -o 51-worker-rh-registry-trust.yaml", "oc apply -f 51-worker-rh-registry-trust.yaml", "oc get mc", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 00-worker a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-master-container-runtime a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-master-kubelet a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-worker-container-runtime a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-worker-kubelet a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 51-master-rh-registry-trust 3.2.0 13s 51-worker-rh-registry-trust 3.2.0 53s 1 99-master-generated-crio-seccomp-use-default 3.2.0 25m 99-master-generated-registries a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 99-master-ssh 3.2.0 28m 99-worker-generated-crio-seccomp-use-default 3.2.0 25m 99-worker-generated-registries a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 99-worker-ssh 3.2.0 28m rendered-master-af1e7ff78da0a9c851bab4be2777773b a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 8s rendered-master-cd51fd0c47e91812bfef2765c52ec7e6 a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 24m rendered-worker-2b52f75684fbc711bd1652dd86fd0b82 a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 24m rendered-worker-be3b3bce4f4aa52a62902304bac9da3c a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 48s 2", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-af1e7ff78da0a9c851bab4be2777773b True False False 3 3 3 0 30m worker rendered-worker-be3b3bce4f4aa52a62902304bac9da3c False True False 3 0 0 0 30m 1", "oc debug node/<node_name>", "sh-4.2# chroot /host", "docker: registry.redhat.io: sigstore: https://registry.redhat.io/containers/sigstore", "docker: registry.access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore", "oc describe machineconfigpool/worker", "Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool Metadata: Creation Timestamp: 2019-12-19T02:02:12Z Generation: 3 Resource Version: 16229 Self Link: /apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker UID: 92697796-2203-11ea-b48c-fa163e3940e5 Spec: Configuration: Name: rendered-worker-f6819366eb455a401c42f8d96ab25c02 Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 
51-worker-rh-registry-trust API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Machine Config Selector: Match Labels: machineconfiguration.openshift.io/role: worker Node Selector: Match Labels: node-role.kubernetes.io/worker: Paused: false Status: Conditions: Last Transition Time: 2019-12-19T02:03:27Z Message: Reason: Status: False Type: RenderDegraded Last Transition Time: 2019-12-19T02:03:43Z Message: Reason: Status: False Type: NodeDegraded Last Transition Time: 2019-12-19T02:03:43Z Message: Reason: Status: False Type: Degraded Last Transition Time: 2019-12-19T02:28:23Z Message: Reason: Status: False Type: Updated Last Transition Time: 2019-12-19T02:28:23Z Message: All nodes are updating to rendered-worker-f6819366eb455a401c42f8d96ab25c02 Reason: Status: True Type: Updating Configuration: Name: rendered-worker-d9b3f4ffcfd65c30dcf591a0e8cf9b2e Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Degraded Machine Count: 0 Machine Count: 1 Observed Generation: 3 Ready Machine Count: 0 Unavailable Machine Count: 1 Updated Machine Count: 0 Events: <none>", "oc describe machineconfigpool/worker", "Last Transition Time: 2019-12-19T04:53:09Z Message: All nodes are updated with rendered-worker-f6819366eb455a401c42f8d96ab25c02 Reason: Status: True Type: Updated Last Transition Time: 2019-12-19T04:53:09Z Message: Reason: Status: False Type: Updating Configuration: Name: rendered-worker-f6819366eb455a401c42f8d96ab25c02 Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 51-worker-rh-registry-trust API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Degraded Machine Count: 0 Machine Count: 3 Observed Generation: 4 Ready Machine Count: 3 Unavailable Machine Count: 0 Updated Machine Count: 3", "oc debug node/<node> -- chroot /host cat /etc/containers/policy.json", "Starting pod/<node>-debug To use host binaries, run `chroot /host` { \"default\": [ { \"type\": \"insecureAcceptAnything\" } ], \"transports\": { \"docker\": { \"registry.access.redhat.com\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ], \"registry.redhat.io\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ] }, \"docker-daemon\": { \"\": [ { \"type\": \"insecureAcceptAnything\" } ] } } }", "oc debug node/<node> -- chroot /host cat /etc/containers/registries.d/registry.redhat.io.yaml", "Starting 
pod/<node>-debug To use host binaries, run `chroot /host` docker: registry.redhat.io: sigstore: https://registry.redhat.io/containers/sigstore", "oc debug node/<node> -- chroot /host cat /etc/containers/registries.d/registry.access.redhat.com.yaml", "Starting pod/<node>-debug To use host binaries, run `chroot /host` docker: registry.access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore", "oc adm release info quay.io/openshift-release-dev/ocp-release@sha256:2309578b68c5666dad62aed696f1f9d778ae1a089ee461060ba7b9514b7ca417 -o pullspec 1 quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9aafb914d5d7d0dec4edd800d02f811d7383a7d49e500af548eab5d00c1bffdb 2", "oc adm release info <release_version> \\ 1", "--- Pull From: quay.io/openshift-release-dev/ocp-release@sha256:e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55 ---", "curl -o pub.key https://access.redhat.com/security/data/fd431d51.txt", "curl -o signature-1 https://mirror.openshift.com/pub/openshift-v4/signatures/openshift-release-dev/ocp-release/sha256%<sha_from_version>/signature-1 \\ 1", "skopeo inspect --raw docker://<quay_link_to_release> > manifest.json \\ 1", "skopeo standalone-verify manifest.json quay.io/openshift-release-dev/ocp-release:<release_number>-<arch> any signature-1 --public-key-file pub.key", "Signature verified using fingerprint 567E347AD0044ADE55BA8A5F199E2F91FD431D51, digest sha256:e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55", "quality.images.openshift.io/<qualityType>.<providerId>: {}", "quality.images.openshift.io/vulnerability.blackduck: {} quality.images.openshift.io/vulnerability.jfrog: {} quality.images.openshift.io/license.blackduck: {} quality.images.openshift.io/vulnerability.openscap: {}", "{ \"name\": \"OpenSCAP\", \"description\": \"OpenSCAP vulnerability score\", \"timestamp\": \"2016-09-08T05:04:46Z\", \"reference\": \"https://www.open-scap.org/930492\", \"compliant\": true, \"scannerVersion\": \"1.2\", \"summary\": [ { \"label\": \"critical\", \"data\": \"4\", \"severityIndex\": 3, \"reference\": null }, { \"label\": \"important\", \"data\": \"12\", \"severityIndex\": 2, \"reference\": null }, { \"label\": \"moderate\", \"data\": \"8\", \"severityIndex\": 1, \"reference\": null }, { \"label\": \"low\", \"data\": \"26\", \"severityIndex\": 0, \"reference\": null } ] }", "{ \"name\": \"Red Hat Ecosystem Catalog\", \"description\": \"Container health index\", \"timestamp\": \"2016-09-08T05:04:46Z\", \"reference\": \"https://access.redhat.com/errata/RHBA-2016:1566\", \"compliant\": null, \"scannerVersion\": \"1.2\", \"summary\": [ { \"label\": \"Health index\", \"data\": \"B\", \"severityIndex\": 1, \"reference\": null } ] }", "oc annotate image <image> quality.images.openshift.io/vulnerability.redhatcatalog='{ \"name\": \"Red Hat Ecosystem Catalog\", \"description\": \"Container health index\", \"timestamp\": \"2020-06-01T05:04:46Z\", \"compliant\": null, \"scannerVersion\": \"1.2\", \"reference\": \"https://access.redhat.com/errata/RHBA-2020:2347\", \"summary\": \"[ { \"label\": \"Health index\", \"data\": \"B\", \"severityIndex\": 1, \"reference\": null } ]\" }'", "annotations: images.openshift.io/deny-execution: true", "curl -X PATCH -H \"Authorization: Bearer <token>\" -H \"Content-Type: application/merge-patch+json\" https://<openshift_server>:6443/apis/image.openshift.io/v1/images/<image_id> --data '{ <image_annotation> }'", "{ \"metadata\": { \"annotations\": { \"quality.images.openshift.io/vulnerability.redhatcatalog\": \"{ 
'name': 'Red Hat Ecosystem Catalog', 'description': 'Container health index', 'timestamp': '2020-06-01T05:04:46Z', 'compliant': null, 'reference': 'https://access.redhat.com/errata/RHBA-2020:2347', 'summary': [{'label': 'Health index', 'data': '4', 'severityIndex': 1, 'reference': null}] }\" } } }", "oc create secret generic secret-npmrc --from-file=.npmrc=~/.npmrc", "source: git: uri: https://github.com/sclorg/nodejs-ex.git secrets: - destinationDir: . secret: name: secret-npmrc", "oc new-build openshift/nodejs-010-centos7~https://github.com/sclorg/nodejs-ex.git --build-secret secret-npmrc", "oc set triggers deploy/deployment-example --from-image=example:latest --containers=web", "{ \"default\": [{\"type\": \"reject\"}], \"transports\": { \"docker\": { \"access.redhat.com\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ] }, \"atomic\": { \"172.30.1.1:5000/openshift\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ], \"172.30.1.1:5000/production\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/example.com/pubkey\" } ], \"172.30.1.1:5000\": [{\"type\": \"reject\"}] } } }", "docker: access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore", "oc get event -n default | grep Node", "1h 20h 3 origin-node-1.example.local Node Normal NodeHasDiskPressure", "oc get events -n default -o json | jq '.items[] | select(.involvedObject.kind == \"Node\" and .reason == \"NodeHasDiskPressure\")'", "{ \"apiVersion\": \"v1\", \"count\": 3, \"involvedObject\": { \"kind\": \"Node\", \"name\": \"origin-node-1.example.local\", \"uid\": \"origin-node-1.example.local\" }, \"kind\": \"Event\", \"reason\": \"NodeHasDiskPressure\", }", "oc get events --all-namespaces -o json | jq '[.items[] | select(.involvedObject.kind == \"Pod\" and .reason == \"Pulling\")] | length'", "4", "oc create configmap custom-ca --from-file=ca-bundle.crt=</path/to/example-ca.crt> \\ 1 -n openshift-config", "oc patch proxy/cluster --type=merge --patch='{\"spec\":{\"trustedCA\":{\"name\":\"custom-ca\"}}}'", "oc create secret tls <secret> \\ 1 --cert=</path/to/cert.crt> \\ 2 --key=</path/to/cert.key> \\ 3 -n openshift-ingress", "oc patch ingresscontroller.operator default --type=merge -p '{\"spec\":{\"defaultCertificate\": {\"name\": \"<secret>\"}}}' \\ 1 -n openshift-ingress-operator", "oc login -u kubeadmin -p <password> https://FQDN:6443", "oc config view --flatten > kubeconfig-newapi", "oc create secret tls <secret> \\ 1 --cert=</path/to/cert.crt> \\ 2 --key=</path/to/cert.key> \\ 3 -n openshift-config", "oc patch apiserver cluster --type=merge -p '{\"spec\":{\"servingCerts\": {\"namedCertificates\": [{\"names\": [\"<FQDN>\"], 1 \"servingCertificate\": {\"name\": \"<secret>\"}}]}}}' 2", "oc get apiserver cluster -o yaml", "spec: servingCerts: namedCertificates: - names: - <FQDN> servingCertificate: name: <secret>", "oc get clusteroperators kube-apiserver", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE kube-apiserver 4.15.0 True False False 145m", "for I in $(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{\"\\n\"} {end}'); do oc delete pods --all -n $I; sleep 1; done", "oc annotate service <service_name> \\ 1 service.beta.openshift.io/serving-cert-secret-name=<secret_name> 2", "oc annotate service test1 service.beta.openshift.io/serving-cert-secret-name=test1", "oc describe service <service_name>", "Annotations: 
service.beta.openshift.io/serving-cert-secret-name: <service_name> service.beta.openshift.io/serving-cert-signed-by: openshift-service-serving-signer@1556850837", "oc annotate configmap <config_map_name> \\ 1 service.beta.openshift.io/inject-cabundle=true", "oc annotate configmap test1 service.beta.openshift.io/inject-cabundle=true", "oc get configmap <config_map_name> -o yaml", "apiVersion: v1 data: service-ca.crt: | -----BEGIN CERTIFICATE-----", "oc annotate apiservice <api_service_name> \\ 1 service.beta.openshift.io/inject-cabundle=true", "oc annotate apiservice test1 service.beta.openshift.io/inject-cabundle=true", "oc get apiservice <api_service_name> -o yaml", "apiVersion: apiregistration.k8s.io/v1 kind: APIService metadata: annotations: service.beta.openshift.io/inject-cabundle: \"true\" spec: caBundle: <CA_BUNDLE>", "oc annotate crd <crd_name> \\ 1 service.beta.openshift.io/inject-cabundle=true", "oc annotate crd test1 service.beta.openshift.io/inject-cabundle=true", "oc get crd <crd_name> -o yaml", "apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: service.beta.openshift.io/inject-cabundle: \"true\" spec: conversion: strategy: Webhook webhook: clientConfig: caBundle: <CA_BUNDLE>", "oc annotate mutatingwebhookconfigurations <mutating_webhook_name> \\ 1 service.beta.openshift.io/inject-cabundle=true", "oc annotate mutatingwebhookconfigurations test1 service.beta.openshift.io/inject-cabundle=true", "oc get mutatingwebhookconfigurations <mutating_webhook_name> -o yaml", "apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration metadata: annotations: service.beta.openshift.io/inject-cabundle: \"true\" webhooks: - myWebhook: - v1beta1 clientConfig: caBundle: <CA_BUNDLE>", "oc annotate validatingwebhookconfigurations <validating_webhook_name> \\ 1 service.beta.openshift.io/inject-cabundle=true", "oc annotate validatingwebhookconfigurations test1 service.beta.openshift.io/inject-cabundle=true", "oc get validatingwebhookconfigurations <validating_webhook_name> -o yaml", "apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration metadata: annotations: service.beta.openshift.io/inject-cabundle: \"true\" webhooks: - myWebhook: - v1beta1 clientConfig: caBundle: <CA_BUNDLE>", "oc describe service <service_name>", "service.beta.openshift.io/serving-cert-secret-name: <secret>", "oc delete secret <secret> 1", "oc get secret <service_name>", "NAME TYPE DATA AGE <service.name> kubernetes.io/tls 2 1s", "oc get secrets/signing-key -n openshift-service-ca -o template='{{index .data \"tls.crt\"}}' | base64 --decode | openssl x509 -noout -enddate", "oc delete secret/signing-key -n openshift-service-ca", "for I in $(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{\"\\n\"} {end}'); do oc delete pods --all -n $I; sleep 1; done", "apiVersion: v1 kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- Custom CA certificate bundle. -----END CERTIFICATE-----", "oc create configmap custom-ca --from-file=ca-bundle.crt=</path/to/example-ca.crt> \\ 1 -n openshift-config", "oc patch proxy/cluster --type=merge --patch='{\"spec\":{\"trustedCA\":{\"name\":\"custom-ca\"}}}'", "apiVersion: v1 kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- Custom CA certificate bundle. 
-----END CERTIFICATE-----", "cat install-config.yaml", "proxy: httpProxy: http://<username:password@proxy.example.com:123/> httpsProxy: http://<username:password@proxy.example.com:123/> noProxy: <123.example.com,10.88.0.0/16> additionalTrustBundle: | -----BEGIN CERTIFICATE----- <MY_HTTPS_PROXY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 50-examplecorp-ca-cert spec: config: ignition: version: 3.1.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVORENDQXh5Z0F3SUJBZ0lKQU51bkkwRDY2MmNuTUEwR0NTcUdTSWIzRFFFQkN3VUFNSUdsTVFzd0NRWUQKV1FRR0V3SlZVekVYTUJVR0ExVUVDQXdPVG05eWRHZ2dRMkZ5YjJ4cGJtRXhFREFPQmdOVkJBY01CMUpoYkdWcApBMmd4RmpBVUJnTlZCQW9NRFZKbFpDQklZWFFzSUVsdVl5NHhFekFSQmdOVkJBc01DbEpsWkNCSVlYUWdTVlF4Ckh6QVpCZ05WQkFNTUVsSmxaQ0JJWVhRZ1NWUWdVbTl2ZENCRFFURWhNQjhHQ1NxR1NJYjNEUUVKQVJZU2FXNW0KWGpDQnBURUxNQWtHQTFVRUJoTUNWVk14RnpBVkJnTlZCQWdNRGs1dmNuUm9JRU5oY205c2FXNWhNUkF3RGdZRApXUVFIREFkU1lXeGxhV2RvTVJZd0ZBWURWUVFLREExU1pXUWdTR0YwTENCSmJtTXVNUk13RVFZRFZRUUxEQXBTCkFXUWdTR0YwSUVsVU1Sc3dHUVlEVlFRRERCSlNaV1FnU0dGMElFbFVJRkp2YjNRZ1EwRXhJVEFmQmdrcWhraUcKMHcwQkNRRVdFbWx1Wm05elpXTkFjbVZrYUdGMExtTnZiVENDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUApCRENDQVFvQ2dnRUJBTFF0OU9KUWg2R0M1TFQxZzgwcU5oMHU1MEJRNHNaL3laOGFFVHh0KzVsblBWWDZNSEt6CmQvaTdsRHFUZlRjZkxMMm55VUJkMmZRRGsxQjBmeHJza2hHSUlaM2lmUDFQczRsdFRrdjhoUlNvYjNWdE5xU28KSHhrS2Z2RDJQS2pUUHhEUFdZeXJ1eTlpckxaaW9NZmZpM2kvZ0N1dDBaV3RBeU8zTVZINXFXRi9lbkt3Z1BFUwpZOXBvK1RkQ3ZSQi9SVU9iQmFNNzYxRWNyTFNNMUdxSE51ZVNmcW5obzNBakxRNmRCblBXbG82MzhabTFWZWJLCkNFTHloa0xXTVNGa0t3RG1uZTBqUTAyWTRnMDc1dkNLdkNzQ0F3RUFBYU5qTUdFd0hRWURWUjBPQkJZRUZIN1IKNXlDK1VlaElJUGV1TDhacXczUHpiZ2NaTUI4R0ExVWRJd1FZTUJhQUZIN1I0eUMrVWVoSUlQZXVMOFpxdzNQegpjZ2NaTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3RGdZRFZSMFBBUUgvQkFRREFnR0dNQTBHQ1NxR1NJYjNEUUVCCkR3VUFBNElCQVFCRE52RDJWbTlzQTVBOUFsT0pSOCtlbjVYejloWGN4SkI1cGh4Y1pROGpGb0cwNFZzaHZkMGUKTUVuVXJNY2ZGZ0laNG5qTUtUUUNNNFpGVVBBaWV5THg0ZjUySHVEb3BwM2U1SnlJTWZXK0tGY05JcEt3Q3NhawpwU29LdElVT3NVSks3cUJWWnhjckl5ZVFWMnFjWU9lWmh0UzV3QnFJd09BaEZ3bENFVDdaZTU4UUhtUzQ4c2xqCjVlVGtSaml2QWxFeHJGektjbGpDNGF4S1Fsbk92VkF6eitHbTMyVTB4UEJGNEJ5ZVBWeENKVUh3MVRzeVRtZWwKU3hORXA3eUhvWGN3bitmWG5hK3Q1SldoMWd4VVp0eTMKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= mode: 0644 overwrite: true path: /etc/pki/ca-trust/source/anchors/examplecorp-ca.crt", "oc annotate -n openshift-kube-apiserver-operator secret kube-apiserver-to-kubelet-signer auth.openshift.io/certificate-not-after-", "oc delete pods -l compliance.openshift.io/scan-name=ocp4-cis", "oc delete pods -l compliance.openshift.io/scan-name=ocp4-cis", "oc adm must-gather --image=$(oc get csv compliance-operator.v1.6.0 -o=jsonpath='{.spec.relatedImages[?(@.name==\"must-gather\")].image}')", "oc get profile.compliance -n openshift-compliance", "NAME AGE VERSION ocp4-cis 3h49m 1.5.0 ocp4-cis-1-4 3h49m 1.4.0 ocp4-cis-1-5 3h49m 1.5.0 ocp4-cis-node 3h49m 1.5.0 ocp4-cis-node-1-4 3h49m 1.4.0 ocp4-cis-node-1-5 3h49m 1.5.0 ocp4-e8 3h49m ocp4-high 3h49m Revision 4 ocp4-high-node 3h49m Revision 4 ocp4-high-node-rev-4 3h49m Revision 4 ocp4-high-rev-4 3h49m Revision 4 ocp4-moderate 3h49m Revision 4 ocp4-moderate-node 3h49m Revision 4 ocp4-moderate-node-rev-4 3h49m Revision 4 ocp4-moderate-rev-4 3h49m Revision 4 ocp4-nerc-cip 3h49m ocp4-nerc-cip-node 3h49m ocp4-pci-dss 3h49m 3.2.1 ocp4-pci-dss-3-2 3h49m 3.2.1 ocp4-pci-dss-4-0 3h49m 4.0.0 ocp4-pci-dss-node 3h49m 3.2.1 ocp4-pci-dss-node-3-2 
3h49m 3.2.1 ocp4-pci-dss-node-4-0 3h49m 4.0.0 ocp4-stig 3h49m V2R1 ocp4-stig-node 3h49m V2R1 ocp4-stig-node-v1r1 3h49m V1R1 ocp4-stig-node-v2r1 3h49m V2R1 ocp4-stig-v1r1 3h49m V1R1 ocp4-stig-v2r1 3h49m V2R1 rhcos4-e8 3h49m rhcos4-high 3h49m Revision 4 rhcos4-high-rev-4 3h49m Revision 4 rhcos4-moderate 3h49m Revision 4 rhcos4-moderate-rev-4 3h49m Revision 4 rhcos4-nerc-cip 3h49m rhcos4-stig 3h49m V2R1 rhcos4-stig-v1r1 3h49m V1R1 rhcos4-stig-v2r1 3h49m V2R1", "oc get -n openshift-compliance -oyaml profiles.compliance rhcos4-e8", "apiVersion: compliance.openshift.io/v1alpha1 description: 'This profile contains configuration checks for Red Hat Enterprise Linux CoreOS that align to the Australian Cyber Security Centre (ACSC) Essential Eight. A copy of the Essential Eight in Linux Environments guide can be found at the ACSC website: https://www.cyber.gov.au/acsc/view-all-content/publications/hardening-linux-workstations-and-servers' id: xccdf_org.ssgproject.content_profile_e8 kind: Profile metadata: annotations: compliance.openshift.io/image-digest: pb-rhcos4hrdkm compliance.openshift.io/product: redhat_enterprise_linux_coreos_4 compliance.openshift.io/product-type: Node creationTimestamp: \"2022-10-19T12:06:49Z\" generation: 1 labels: compliance.openshift.io/profile-bundle: rhcos4 name: rhcos4-e8 namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ProfileBundle name: rhcos4 uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d resourceVersion: \"43699\" uid: 86353f70-28f7-40b4-bf0e-6289ec33675b rules: - rhcos4-accounts-no-uid-except-zero - rhcos4-audit-rules-dac-modification-chmod - rhcos4-audit-rules-dac-modification-chown - rhcos4-audit-rules-execution-chcon - rhcos4-audit-rules-execution-restorecon - rhcos4-audit-rules-execution-semanage - rhcos4-audit-rules-execution-setfiles - rhcos4-audit-rules-execution-setsebool - rhcos4-audit-rules-execution-seunshare - rhcos4-audit-rules-kernel-module-loading-delete - rhcos4-audit-rules-kernel-module-loading-finit - rhcos4-audit-rules-kernel-module-loading-init - rhcos4-audit-rules-login-events - rhcos4-audit-rules-login-events-faillock - rhcos4-audit-rules-login-events-lastlog - rhcos4-audit-rules-login-events-tallylog - rhcos4-audit-rules-networkconfig-modification - rhcos4-audit-rules-sysadmin-actions - rhcos4-audit-rules-time-adjtimex - rhcos4-audit-rules-time-clock-settime - rhcos4-audit-rules-time-settimeofday - rhcos4-audit-rules-time-stime - rhcos4-audit-rules-time-watch-localtime - rhcos4-audit-rules-usergroup-modification - rhcos4-auditd-data-retention-flush - rhcos4-auditd-freq - rhcos4-auditd-local-events - rhcos4-auditd-log-format - rhcos4-auditd-name-format - rhcos4-auditd-write-logs - rhcos4-configure-crypto-policy - rhcos4-configure-ssh-crypto-policy - rhcos4-no-empty-passwords - rhcos4-selinux-policytype - rhcos4-selinux-state - rhcos4-service-auditd-enabled - rhcos4-sshd-disable-empty-passwords - rhcos4-sshd-disable-gssapi-auth - rhcos4-sshd-disable-rhosts - rhcos4-sshd-disable-root-login - rhcos4-sshd-disable-user-known-hosts - rhcos4-sshd-do-not-permit-user-env - rhcos4-sshd-enable-strictmodes - rhcos4-sshd-print-last-log - rhcos4-sshd-set-loglevel-info - rhcos4-sysctl-kernel-dmesg-restrict - rhcos4-sysctl-kernel-kptr-restrict - rhcos4-sysctl-kernel-randomize-va-space - rhcos4-sysctl-kernel-unprivileged-bpf-disabled - rhcos4-sysctl-kernel-yama-ptrace-scope - rhcos4-sysctl-net-core-bpf-jit-harden title: Australian Cyber Security Centre (ACSC) Essential 
Eight", "oc get -n openshift-compliance -oyaml rules rhcos4-audit-rules-login-events", "apiVersion: compliance.openshift.io/v1alpha1 checkType: Node description: |- The audit system already collects login information for all users and root. If the auditd daemon is configured to use the augenrules program to read audit rules during daemon startup (the default), add the following lines to a file with suffix .rules in the directory /etc/audit/rules.d in order to watch for attempted manual edits of files involved in storing logon events: -w /var/log/tallylog -p wa -k logins -w /var/run/faillock -p wa -k logins -w /var/log/lastlog -p wa -k logins If the auditd daemon is configured to use the auditctl utility to read audit rules during daemon startup, add the following lines to the /etc/audit/audit.rules file in order to watch for attempted manual edits of files involved in storing logon events: -w /var/log/tallylog -p wa -k logins -w /var/run/faillock -p wa -k logins -w /var/log/lastlog -p wa -k logins id: xccdf_org.ssgproject.content_rule_audit_rules_login_events kind: Rule metadata: annotations: compliance.openshift.io/image-digest: pb-rhcos4hrdkm compliance.openshift.io/rule: audit-rules-login-events control.compliance.openshift.io/NIST-800-53: AU-2(d);AU-12(c);AC-6(9);CM-6(a) control.compliance.openshift.io/PCI-DSS: Req-10.2.3 policies.open-cluster-management.io/controls: AU-2(d),AU-12(c),AC-6(9),CM-6(a),Req-10.2.3 policies.open-cluster-management.io/standards: NIST-800-53,PCI-DSS creationTimestamp: \"2022-10-19T12:07:08Z\" generation: 1 labels: compliance.openshift.io/profile-bundle: rhcos4 name: rhcos4-audit-rules-login-events namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ProfileBundle name: rhcos4 uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d resourceVersion: \"44819\" uid: 75872f1f-3c93-40ca-a69d-44e5438824a4 rationale: Manual editing of these files may indicate nefarious activity, such as an attacker attempting to remove evidence of an intrusion. 
severity: medium title: Record Attempts to Alter Logon and Logout Events warning: Manual editing of these files may indicate nefarious activity, such as an attacker attempting to remove evidence of an intrusion.", "apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle name: <profile bundle name> namespace: openshift-compliance status: dataStreamStatus: VALID 1", "apiVersion: compliance.openshift.io/v1alpha1 description: <description of the profile> id: xccdf_org.ssgproject.content_profile_moderate 1 kind: Profile metadata: annotations: compliance.openshift.io/product: <product name> compliance.openshift.io/product-type: Node 2 creationTimestamp: \"YYYY-MM-DDTMM:HH:SSZ\" generation: 1 labels: compliance.openshift.io/profile-bundle: <profile bundle name> name: rhcos4-moderate namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ProfileBundle name: <profile bundle name> uid: <uid string> resourceVersion: \"<version number>\" selfLink: /apis/compliance.openshift.io/v1alpha1/namespaces/openshift-compliance/profiles/rhcos4-moderate uid: <uid string> rules: 3 - rhcos4-account-disable-post-pw-expiration - rhcos4-accounts-no-uid-except-zero - rhcos4-audit-rules-dac-modification-chmod - rhcos4-audit-rules-dac-modification-chown title: <title of the profile>", "apiVersion: compliance.openshift.io/v1alpha1 checkType: Platform 1 description: <description of the rule> id: xccdf_org.ssgproject.content_rule_configure_network_policies_namespaces 2 instructions: <manual instructions for the scan> kind: Rule metadata: annotations: compliance.openshift.io/rule: configure-network-policies-namespaces control.compliance.openshift.io/CIS-OCP: 5.3.2 control.compliance.openshift.io/NERC-CIP: CIP-003-3 R4;CIP-003-3 R4.2;CIP-003-3 R5;CIP-003-3 R6;CIP-004-3 R2.2.4;CIP-004-3 R3;CIP-007-3 R2;CIP-007-3 R2.1;CIP-007-3 R2.2;CIP-007-3 R2.3;CIP-007-3 R5.1;CIP-007-3 R6.1 control.compliance.openshift.io/NIST-800-53: AC-4;AC-4(21);CA-3(5);CM-6;CM-6(1);CM-7;CM-7(1);SC-7;SC-7(3);SC-7(5);SC-7(8);SC-7(12);SC-7(13);SC-7(18) labels: compliance.openshift.io/profile-bundle: ocp4 name: ocp4-configure-network-policies-namespaces namespace: openshift-compliance rationale: <description of why this rule is checked> severity: high 3 title: <summary of the rule>", "apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: rhcos4-with-usb spec: extends: rhcos4-moderate 1 title: <title of the tailored profile> disableRules: - name: <name of a rule object to be disabled> rationale: <description of why this rule is checked> status: id: xccdf_compliance.openshift.io_profile_rhcos4-with-usb 2 outputRef: name: rhcos4-with-usb-tp 3 namespace: openshift-compliance state: READY 4", "compliance.openshift.io/product-type: Platform/Node", "apiVersion: compliance.openshift.io/v1alpha1 autoApplyRemediations: true 1 autoUpdateRemediations: true 2 kind: ScanSetting maxRetryOnTimeout: 3 metadata: creationTimestamp: \"2022-10-18T20:21:00Z\" generation: 1 name: default-auto-apply namespace: openshift-compliance resourceVersion: \"38840\" uid: 8cb0967d-05e0-4d7a-ac1c-08a7f7e89e84 rawResultStorage: nodeSelector: node-role.kubernetes.io/master: \"\" pvAccessModes: - ReadWriteOnce rotation: 3 3 size: 1Gi 4 tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable 
operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists roles: 5 - master - worker scanTolerations: - operator: Exists schedule: 0 1 * * * 6 showNotApplicable: false strictNodeScan: true timeout: 30m", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: <name of the scan> profiles: 1 # Node checks - name: rhcos4-with-usb kind: TailoredProfile apiGroup: compliance.openshift.io/v1alpha1 # Cluster checks - name: ocp4-moderate kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: 2 name: my-companys-constraints kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1", "oc get compliancesuites", "apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: <name of the scan> spec: autoApplyRemediations: false 1 schedule: \"0 1 * * *\" 2 scans: 3 - name: workers-scan scanType: Node profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc rule: \"xccdf_org.ssgproject.content_rule_no_netrc_files\" nodeSelector: node-role.kubernetes.io/worker: \"\" status: Phase: DONE 4 Result: NON-COMPLIANT 5 scanStatuses: - name: workers-scan phase: DONE result: NON-COMPLIANT", "oc get events --field-selector involvedObject.kind=ComplianceSuite,involvedObject.name=<name of the suite>", "apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceScan metadata: name: <name of the scan> spec: scanType: Node 1 profile: xccdf_org.ssgproject.content_profile_moderate 2 content: ssg-ocp4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc... 3 rule: \"xccdf_org.ssgproject.content_rule_no_netrc_files\" 4 nodeSelector: 5 node-role.kubernetes.io/worker: \"\" status: phase: DONE 6 result: NON-COMPLIANT 7", "get events --field-selector involvedObject.kind=ComplianceScan,involvedObject.name=<name of the suite>", "apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceCheckResult metadata: labels: compliance.openshift.io/check-severity: medium compliance.openshift.io/check-status: FAIL compliance.openshift.io/suite: example-compliancesuite compliance.openshift.io/scan-name: workers-scan name: workers-scan-no-direct-root-logins namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceScan name: workers-scan description: <description of scan check> instructions: <manual instructions for the scan> id: xccdf_org.ssgproject.content_rule_no_direct_root_logins severity: medium 1 status: FAIL 2", "get compliancecheckresults -l compliance.openshift.io/suite=workers-compliancesuite", "apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceRemediation metadata: labels: compliance.openshift.io/suite: example-compliancesuite compliance.openshift.io/scan-name: workers-scan machineconfiguration.openshift.io/role: worker name: workers-scan-disable-users-coredumps namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceCheckResult name: workers-scan-disable-users-coredumps uid: <UID> spec: apply: false 1 object: current: 2 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:,%2A%20%20%20%20%20hard%20%20%20core%20%20%20%200 filesystem: root mode: 
420 path: /etc/security/limits.d/75-disable_users_coredumps.conf outdated: {} 3", "oc get complianceremediations -l compliance.openshift.io/suite=workers-compliancesuite", "oc get compliancecheckresults -l 'compliance.openshift.io/check-status in (FAIL),compliance.openshift.io/automated-remediation'", "oc get compliancecheckresults -l 'compliance.openshift.io/check-status in (FAIL),!compliance.openshift.io/automated-remediation'", "apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-compliance", "oc create -f namespace-object.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - openshift-compliance", "oc create -f operator-group-object.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator-sub namespace: openshift-compliance spec: channel: \"stable\" installPlanApproval: Automatic name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace", "oc create -f subscription-object.yaml", "oc get csv -n openshift-compliance", "oc get deploy -n openshift-compliance", "apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-compliance", "oc create -f namespace-object.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - openshift-compliance", "oc create -f operator-group-object.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator-sub namespace: openshift-compliance spec: channel: \"stable\" installPlanApproval: Automatic name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace config: nodeSelector: node-role.kubernetes.io/worker: \"\" 1", "oc create -f subscription-object.yaml", "oc get csv -n openshift-compliance", "oc get deploy -n openshift-compliance", "apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-compliance", "oc create -f namespace-object.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - openshift-compliance", "oc create -f operator-group-object.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator-sub namespace: openshift-compliance spec: channel: \"stable\" installPlanApproval: Automatic name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace config: nodeSelector: node-role.kubernetes.io/worker: \"\" env: - name: PLATFORM value: \"HyperShift\"", "oc create -f subscription-object.yaml", "oc get csv -n openshift-compliance", "oc get deploy -n openshift-compliance", "apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle metadata: creationTimestamp: \"2022-10-19T12:06:30Z\" finalizers: - profilebundle.finalizers.compliance.openshift.io generation: 1 name: rhcos4 namespace: openshift-compliance resourceVersion: \"46741\" uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d spec: contentFile: ssg-rhcos4-ds.xml 1 contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:900e...
2 status: conditions: - lastTransitionTime: \"2022-10-19T12:07:51Z\" message: Profile bundle successfully parsed reason: Valid status: \"True\" type: Ready dataStreamStatus: VALID", "oc -n openshift-compliance get profilebundles rhcos4 -oyaml", "apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle metadata: creationTimestamp: \"2022-10-19T12:06:30Z\" finalizers: - profilebundle.finalizers.compliance.openshift.io generation: 1 name: rhcos4 namespace: openshift-compliance resourceVersion: \"46741\" uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d spec: contentFile: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:900e... 1 status: conditions: - lastTransitionTime: \"2022-10-19T12:07:51Z\" message: Profile bundle successfully parsed reason: Valid status: \"True\" type: Ready dataStreamStatus: VALID", "oc delete ssb --all -n openshift-compliance", "oc delete ss --all -n openshift-compliance", "oc delete suite --all -n openshift-compliance", "oc delete scan --all -n openshift-compliance", "oc delete profilebundle.compliance --all -n openshift-compliance", "oc delete sub --all -n openshift-compliance", "oc delete csv --all -n openshift-compliance", "oc delete project openshift-compliance", "project.project.openshift.io \"openshift-compliance\" deleted", "oc get project/openshift-compliance", "Error from server (NotFound): namespaces \"openshift-compliance\" not found", "oc explain scansettings", "oc explain scansettingbindings", "oc describe scansettings default -n openshift-compliance", "Name: default Namespace: openshift-compliance Labels: <none> Annotations: <none> API Version: compliance.openshift.io/v1alpha1 Kind: ScanSetting Max Retry On Timeout: 3 Metadata: Creation Timestamp: 2024-07-16T14:56:42Z Generation: 2 Resource Version: 91655682 UID: 50358cf1-57a8-4f69-ac50-5c7a5938e402 Raw Result Storage: Node Selector: node-role.kubernetes.io/master: Pv Access Modes: ReadWriteOnce 1 Rotation: 3 2 Size: 1Gi 3 Storage Class Name: standard 4 Tolerations: Effect: NoSchedule Key: node-role.kubernetes.io/master Operator: Exists Effect: NoExecute Key: node.kubernetes.io/not-ready Operator: Exists Toleration Seconds: 300 Effect: NoExecute Key: node.kubernetes.io/unreachable Operator: Exists Toleration Seconds: 300 Effect: NoSchedule Key: node.kubernetes.io/memory-pressure Operator: Exists Roles: master 5 worker 6 Scan Tolerations: 7 Operator: Exists Schedule: 0 1 * * * 8 Show Not Applicable: false Strict Node Scan: true Suspend: false Timeout: 30m Events: <none>", "Name: default-auto-apply Namespace: openshift-compliance Labels: <none> Annotations: <none> API Version: compliance.openshift.io/v1alpha1 Auto Apply Remediations: true 1 Auto Update Remediations: true 2 Kind: ScanSetting Metadata: Creation Timestamp: 2022-10-18T20:21:00Z Generation: 1 Managed Fields: API Version: compliance.openshift.io/v1alpha1 Fields Type: FieldsV1 fieldsV1: f:autoApplyRemediations: f:autoUpdateRemediations: f:rawResultStorage: .: f:nodeSelector: .: f:node-role.kubernetes.io/master: f:pvAccessModes: f:rotation: f:size: f:tolerations: f:roles: f:scanTolerations: f:schedule: f:showNotApplicable: f:strictNodeScan: Manager: compliance-operator Operation: Update Time: 2022-10-18T20:21:00Z Resource Version: 38840 UID: 8cb0967d-05e0-4d7a-ac1c-08a7f7e89e84 Raw Result Storage: Node Selector: node-role.kubernetes.io/master: Pv Access Modes: ReadWriteOnce Rotation: 3 Size: 1Gi Tolerations: Effect: NoSchedule Key: node-role.kubernetes.io/master Operator: Exists Effect: 
NoExecute Key: node.kubernetes.io/not-ready Operator: Exists Toleration Seconds: 300 Effect: NoExecute Key: node.kubernetes.io/unreachable Operator: Exists Toleration Seconds: 300 Effect: NoSchedule Key: node.kubernetes.io/memory-pressure Operator: Exists Roles: master worker Scan Tolerations: Operator: Exists Schedule: 0 1 * * * Show Not Applicable: false Strict Node Scan: true Events: <none>", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: cis-compliance namespace: openshift-compliance profiles: - name: ocp4-cis-node kind: Profile apiGroup: compliance.openshift.io/v1alpha1 - name: ocp4-cis kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: name: default kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1", "oc create -f <file-name>.yaml -n openshift-compliance", "oc get compliancescan -w -n openshift-compliance", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: storageClassName: standard rotation: 10 size: 10Gi roles: - worker - master scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *'", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: rs-on-workers namespace: openshift-compliance rawResultStorage: nodeSelector: node-role.kubernetes.io/worker: \"\" 1 pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - operator: Exists 2 roles: - worker - master scanTolerations: - operator: Exists schedule: 0 1 * * *", "oc create -f rs-workers.yaml", "oc get scansettings rs-on-workers -n openshift-compliance -o yaml", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: creationTimestamp: \"2021-11-19T19:36:36Z\" generation: 1 name: rs-on-workers namespace: openshift-compliance resourceVersion: \"48305\" uid: 43fdfc5f-15a7-445a-8bbc-0e4a160cd46e rawResultStorage: nodeSelector: node-role.kubernetes.io/worker: \"\" pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - operator: Exists roles: - worker - master scanTolerations: - operator: Exists schedule: 0 1 * * * strictNodeScan: true", "oc get hostedcluster -A", "NAMESPACE NAME VERSION KUBECONFIG PROGRESS AVAILABLE PROGRESSING MESSAGE local-cluster 79136a1bdb84b3c13217 4.13.5 79136a1bdb84b3c13217-admin-kubeconfig Completed True False The hosted control plane is available", "apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: hypershift-cisk57aw88gry namespace: openshift-compliance spec: description: This profile test required rules extends: ocp4-cis 1 title: Management namespace profile setValues: - name: ocp4-hypershift-cluster rationale: This value is used for HyperShift version detection value: 79136a1bdb84b3c13217 2 - name: ocp4-hypershift-namespace-prefix rationale: This value is used for HyperShift control plane namespace detection value: local-cluster 3", "oc create -n openshift-compliance -f mgmt-tp.yaml", "spec.containers[].resources.limits.cpu spec.containers[].resources.limits.memory spec.containers[].resources.limits.hugepages-<size> spec.containers[].resources.requests.cpu spec.containers[].resources.requests.memory spec.containers[].resources.requests.hugepages-<size>", "apiVersion: v1 kind: Pod metadata: name: frontend spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: app image: images.my-company.example/app:v4 resources: requests: 1 memory: \"64Mi\" cpu: \"250m\" 
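# (clarifying comment added to this example) The requests above only inform scheduling
# decisions; the limits block that follows is what actually caps runtime CPU and memory
# use, per standard Kubernetes resource semantics.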
limits: 2 memory: \"128Mi\" cpu: \"500m\" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] - name: log-aggregator image: images.my-company.example/log-aggregator:v6 resources: requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: new-profile annotations: compliance.openshift.io/product-type: Node 1 spec: extends: ocp4-cis-node 2 description: My custom profile 3 title: Custom profile 4 enableRules: - name: ocp4-etcd-unique-ca rationale: We really need to enable this disableRules: - name: ocp4-file-groupowner-cni-conf rationale: This does not apply to the cluster", "oc get rules.compliance -n openshift-compliance -l compliance.openshift.io/profile-bundle=rhcos4", "oc get variables.compliance -n openshift-compliance -l compliance.openshift.io/profile-bundle=rhcos4", "apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: nist-moderate-modified spec: extends: rhcos4-moderate description: NIST moderate profile title: My modified NIST moderate profile disableRules: - name: rhcos4-file-permissions-var-log-messages rationale: The file contains logs of error messages in the system - name: rhcos4-account-disable-post-pw-expiration rationale: No need to check this as it comes from the IdP setValues: - name: rhcos4-var-selinux-state rationale: Organizational requirements value: permissive", "apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: ocp4-manual-scc-check spec: extends: ocp4-cis description: This profile extends ocp4-cis by forcing the SCC check to always return MANUAL title: OCP4 CIS profile with manual SCC check manualRules: - name: ocp4-scc-limit-container-allowed-capabilities rationale: We use third party software that installs its own SCC with extra privileges", "oc create -n openshift-compliance -f new-profile-node.yaml 1", "tailoredprofile.compliance.openshift.io/nist-moderate-modified created", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: nist-moderate-modified profiles: - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-moderate - apiGroup: compliance.openshift.io/v1alpha1 kind: TailoredProfile name: nist-moderate-modified settingsRef: apiGroup: compliance.openshift.io/v1alpha1 kind: ScanSetting name: default", "oc create -n openshift-compliance -f new-scansettingbinding.yaml", "scansettingbinding.compliance.openshift.io/nist-moderate-modified created", "oc get compliancesuites nist-moderate-modified -o json -n openshift-compliance | jq '.status.scanStatuses[].resultsStorage'", "{ \"name\": \"ocp4-moderate\", \"namespace\": \"openshift-compliance\" } { \"name\": \"nist-moderate-modified-master\", \"namespace\": \"openshift-compliance\" } { \"name\": \"nist-moderate-modified-worker\", \"namespace\": \"openshift-compliance\" }", "oc get pvc -n openshift-compliance rhcos4-moderate-worker", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE rhcos4-moderate-worker Bound pvc-548f6cfe-164b-42fe-ba13-a07cfbc77f3a 1Gi RWO gp2 92m", "oc create -n openshift-compliance -f pod.yaml", "apiVersion: \"v1\" kind: Pod metadata: name: pv-extract spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: pv-extract-pod image: registry.access.redhat.com/ubi9/ubi command: [\"sleep\", \"3000\"] volumeMounts: - mountPath: 
\"/workers-scan-results\" name: workers-scan-vol securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: workers-scan-vol persistentVolumeClaim: claimName: rhcos4-moderate-worker", "oc cp pv-extract:/workers-scan-results -n openshift-compliance .", "oc delete pod pv-extract -n openshift-compliance", "oc get -n openshift-compliance compliancecheckresults -l compliance.openshift.io/suite=workers-compliancesuite", "oc get -n openshift-compliance compliancecheckresults -l compliance.openshift.io/scan=workers-scan", "oc get -n openshift-compliance compliancecheckresults -l 'compliance.openshift.io/check-status=FAIL,compliance.openshift.io/automated-remediation'", "oc get compliancecheckresults -n openshift-compliance -l 'compliance.openshift.io/check-status=FAIL,compliance.openshift.io/check-severity=high'", "NAME STATUS SEVERITY nist-moderate-modified-master-configure-crypto-policy FAIL high nist-moderate-modified-master-coreos-pti-kernel-argument FAIL high nist-moderate-modified-master-disable-ctrlaltdel-burstaction FAIL high nist-moderate-modified-master-disable-ctrlaltdel-reboot FAIL high nist-moderate-modified-master-enable-fips-mode FAIL high nist-moderate-modified-master-no-empty-passwords FAIL high nist-moderate-modified-master-selinux-state FAIL high nist-moderate-modified-worker-configure-crypto-policy FAIL high nist-moderate-modified-worker-coreos-pti-kernel-argument FAIL high nist-moderate-modified-worker-disable-ctrlaltdel-burstaction FAIL high nist-moderate-modified-worker-disable-ctrlaltdel-reboot FAIL high nist-moderate-modified-worker-enable-fips-mode FAIL high nist-moderate-modified-worker-no-empty-passwords FAIL high nist-moderate-modified-worker-selinux-state FAIL high ocp4-moderate-configure-network-policies-namespaces FAIL high ocp4-moderate-fips-mode-enabled-on-all-nodes FAIL high", "oc get -n openshift-compliance compliancecheckresults -l 'compliance.openshift.io/check-status=FAIL,!compliance.openshift.io/automated-remediation'", "spec: apply: false current: object: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/sysctl.d/75-sysctl_net_ipv4_conf_all_accept_redirects.conf mode: 0644 contents: source: data:,net.ipv4.conf.all.accept_redirects%3D0 outdated: {} status: applicationState: NotApplied", "echo \"net.ipv4.conf.all.accept_redirects%3D0\" | python3 -c \"import sys, urllib.parse; print(urllib.parse.unquote(''.join(sys.stdin.readlines())))\"", "net.ipv4.conf.all.accept_redirects=0", "oc get nodes -n openshift-compliance", "NAME STATUS ROLES AGE VERSION ip-10-0-128-92.us-east-2.compute.internal Ready master 5h21m v1.28.5 ip-10-0-158-32.us-east-2.compute.internal Ready worker 5h17m v1.28.5 ip-10-0-166-81.us-east-2.compute.internal Ready worker 5h17m v1.28.5 ip-10-0-171-170.us-east-2.compute.internal Ready master 5h21m v1.28.5 ip-10-0-197-35.us-east-2.compute.internal Ready master 5h22m v1.28.5", "oc -n openshift-compliance label node ip-10-0-166-81.us-east-2.compute.internal node-role.kubernetes.io/<machine_config_pool_name>=", "node/ip-10-0-166-81.us-east-2.compute.internal labeled", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: <machine_config_pool_name> labels: pools.operator.machineconfiguration.openshift.io/<machine_config_pool_name>: '' 1 spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,<machine_config_pool_name>]} 
nodeSelector: matchLabels: node-role.kubernetes.io/<machine_config_pool_name>: \"\"", "oc get mcp -w", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: rotation: 3 size: 1Gi roles: - worker - master - example scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *'", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: cis namespace: openshift-compliance profiles: - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-cis - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-cis-node settingsRef: apiGroup: compliance.openshift.io/v1alpha1 kind: ScanSetting name: default", "oc get rules -o json | jq '.items[] | select(.checkType == \"Platform\") | select(.metadata.name | contains(\"ocp4-kubelet-\")) | .metadata.name'", "oc label mcp <sub-pool-name> pools.operator.machineconfiguration.openshift.io/<sub-pool-name>=", "oc -n openshift-compliance patch complianceremediations/<scan-name>-sysctl-net-ipv4-conf-all-accept-redirects --patch '{\"spec\":{\"apply\":true}}' --type=merge", "oc edit image.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2020-09-10T10:12:54Z\" generation: 2 name: cluster resourceVersion: \"363096\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: 2dcb614e-2f8a-4a23-ba9a-8e33cd0ff77e spec: allowedRegistriesForImport: - domainName: registry.redhat.io status: externalRegistryHostnames: - default-route-openshift-image-registry.apps.user-cluster-09-10-12-07.devcluster.openshift.com internalRegistryHostname: image-registry.openshift-image-registry.svc:5000", "oc -n openshift-compliance annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=", "oc -n openshift-compliance get complianceremediations -l complianceoperator.openshift.io/outdated-remediation=", "NAME STATE workers-scan-no-empty-passwords Outdated", "oc -n openshift-compliance patch complianceremediations workers-scan-no-empty-passwords --type json -p '[{\"op\":\"remove\", \"path\":\"/spec/outdated\"}]'", "oc get -n openshift-compliance complianceremediations workers-scan-no-empty-passwords", "NAME STATE workers-scan-no-empty-passwords Applied", "oc -n openshift-compliance patch complianceremediations/rhcos4-moderate-worker-sysctl-net-ipv4-conf-all-accept-redirects --patch '{\"spec\":{\"apply\":false}}' --type=merge", "oc -n openshift-compliance get remediation \\ one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available -o yaml", "apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceRemediation metadata: annotations: compliance.openshift.io/xccdf-value-used: var-kubelet-evictionhard-imagefs-available creationTimestamp: \"2022-01-05T19:52:27Z\" generation: 1 labels: compliance.openshift.io/scan-name: one-rule-tp-node-master 1 compliance.openshift.io/suite: one-rule-ssb-node name: one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceCheckResult name: one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available uid: fe8e1577-9060-4c59-95b2-3e2c51709adc resourceVersion: \"84820\" uid: 5339d21a-24d7-40cb-84d2-7a2ebb015355 spec: apply: true current: object:
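# (clarifying comment added to this example) current.object below embeds the exact
# Kubernetes object the Operator creates while apply is true; flipping apply back to
# false removes it again.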
apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig spec: kubeletConfig: evictionHard: imagefs.available: 10% 2 outdated: {} type: Configuration status: applicationState: Applied", "oc -n openshift-compliance patch complianceremediations/one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available -p '{\"spec\":{\"apply\":false}}' --type=merge", "oc -n openshift-compliance get kubeletconfig --selector compliance.openshift.io/scan-name=one-rule-tp-node-master", "NAME AGE compliance-operator-kubelet-master 2m34s", "oc edit -n openshift-compliance KubeletConfig compliance-operator-kubelet-master", "oc -n openshift-compliance annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=", "apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: workers-compliancesuite spec: scans: - name: workers-scan profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc debug: true rule: xccdf_org.ssgproject.content_rule_no_direct_root_logins nodeSelector: node-role.kubernetes.io/worker: \"\"", "apiVersion: compliance.openshift.io/v1alpha1 strictNodeScan: true metadata: name: default namespace: openshift-compliance priorityClass: compliance-high-priority 1 kind: ScanSetting showNotApplicable: false rawResultStorage: nodeSelector: node-role.kubernetes.io/master: '' pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists schedule: 0 1 * * * roles: - master - worker scanTolerations: - operator: Exists", "oc -n openshift-compliance create configmap nist-moderate-modified --from-file=tailoring.xml=/path/to/the/tailoringFile.xml", "apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: workers-compliancesuite spec: debug: true scans: - name: workers-scan profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc debug: true tailoringConfigMap: name: nist-moderate-modified nodeSelector: node-role.kubernetes.io/worker: \"\"", "oc -n openshift-compliance annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=", "oc get mc", "75-worker-scan-chronyd-or-ntpd-specify-remote-server 75-worker-scan-configure-usbguard-auditbackend 75-worker-scan-service-usbguard-enabled 75-worker-scan-usbguard-allow-hid-and-hub", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: storageClassName: standard rotation: 10 size: 10Gi roles: - worker - master scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *'", "oc -n openshift-compliance annotate compliancesuites/workers-compliancesuite compliance.openshift.io/apply-remediations=", "oc -n openshift-compliance annotate compliancesuites/workers-compliancesuite compliance.openshift.io/remove-outdated=", "allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: true 
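# (clarifying comment added to this example; based on standard SCC ordering rules)
# priority: 30 below should make this SCC preferred over the default restricted SCC
# for the api-resource-collector service account listed under users.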
allowPrivilegedContainer: false allowedCapabilities: null apiVersion: security.openshift.io/v1 defaultAddCapabilities: null fsGroup: type: MustRunAs kind: SecurityContextConstraints metadata: name: restricted-adjusted-compliance priority: 30 1 readOnlyRootFilesystem: false requiredDropCapabilities: - KILL - SETUID - SETGID - MKNOD runAsUser: type: MustRunAsRange seLinuxContext: type: MustRunAs supplementalGroups: type: RunAsAny users: - system:serviceaccount:openshift-compliance:api-resource-collector 2 volumes: - configMap - downwardAPI - emptyDir - persistentVolumeClaim - projected - secret", "oc create -n openshift-compliance -f restricted-adjusted-compliance.yaml", "securitycontextconstraints.security.openshift.io/restricted-adjusted-compliance created", "oc get -n openshift-compliance scc restricted-adjusted-compliance", "NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES restricted-adjusted-compliance false <no value> MustRunAs MustRunAsRange MustRunAs RunAsAny 30 false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"]", "oc get events -n openshift-compliance", "oc describe -n openshift-compliance compliancescan/cis-compliance", "oc -n openshift-compliance logs compliance-operator-775d7bddbd-gj58f | jq -c 'select(.logger == \"profilebundlectrl\")'", "date -d @1596184628.955853 --utc", "oc get -n openshift-compliance profilebundle.compliance", "oc get -n openshift-compliance profile.compliance", "oc logs -n openshift-compliance -lprofile-bundle=ocp4 -c profileparser", "oc get -n openshift-compliance deployments,pods -lprofile-bundle=ocp4", "oc logs -n openshift-compliance pods/<pod-name>", "oc describe -n openshift-compliance pod/<pod-name> -c profileparser", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: my-companys-constraints debug: true # For each role, a separate scan will be created pointing to a node-role specified in roles roles: - worker --- apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: my-companys-compliance-requirements profiles: # Node checks - name: rhcos4-e8 kind: Profile apiGroup: compliance.openshift.io/v1alpha1 # Cluster checks - name: ocp4-e8 kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: name: my-companys-constraints kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1", "Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuiteCreated 9m52s scansettingbindingctrl ComplianceSuite openshift-compliance/my-companys-compliance-requirements created", "oc get cronjobs", "NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE <cron_name> 0 1 * * * False 0 <none> 151m", "oc -n openshift-compliance get cm -l compliance.openshift.io/scan-name=rhcos4-e8-worker,complianceoperator.openshift.io/scan-script=", "oc get pvc -n openshift-compliance -lcompliance.openshift.io/scan-name=rhcos4-e8-worker", "oc get pods -lcompliance.openshift.io/scan-name=rhcos4-e8-worker,workload=scanner --show-labels", "NAME READY STATUS RESTARTS AGE LABELS rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod 0/2 Completed 0 39m compliance.openshift.io/scan-name=rhcos4-e8-worker,targetNode=ip-10-0-169-90.eu-north-1.compute.internal,workload=scanner", "oc describe cm/rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod", "Name: rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod Namespace: openshift-compliance Labels: compliance.openshift.io/scan-name-scan=rhcos4-e8-worker
complianceoperator.openshift.io/scan-result= Annotations: compliance-remediations/processed: compliance.openshift.io/scan-error-msg: compliance.openshift.io/scan-result: NON-COMPLIANT OpenSCAP-scan-result/node: ip-10-0-169-90.eu-north-1.compute.internal Data ==== exit-code: ---- 2 results: ---- <?xml version=\"1.0\" encoding=\"UTF-8\"?>", "oc get compliancecheckresults -lcompliance.openshift.io/scan-name=rhcos4-e8-worker", "NAME STATUS SEVERITY rhcos4-e8-worker-accounts-no-uid-except-zero PASS high rhcos4-e8-worker-audit-rules-dac-modification-chmod FAIL medium", "oc get complianceremediations -lcompliance.openshift.io/scan-name=rhcos4-e8-worker", "NAME STATE rhcos4-e8-worker-audit-rules-dac-modification-chmod NotApplied rhcos4-e8-worker-audit-rules-dac-modification-chown NotApplied rhcos4-e8-worker-audit-rules-execution-chcon NotApplied rhcos4-e8-worker-audit-rules-execution-restorecon NotApplied rhcos4-e8-worker-audit-rules-execution-semanage NotApplied rhcos4-e8-worker-audit-rules-execution-setfiles NotApplied", "oc -n openshift-compliance annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=", "oc patch complianceremediations/rhcos4-e8-worker-audit-rules-dac-modification-chmod --patch '{\"spec\":{\"apply\":true}}' --type=merge", "oc get mc | grep 75-", "75-rhcos4-e8-worker-my-companys-compliance-requirements 3.2.0 2m46s", "oc describe mc/75-rhcos4-e8-worker-my-companys-compliance-requirements", "Name: 75-rhcos4-e8-worker-my-companys-compliance-requirements Labels: machineconfiguration.openshift.io/role=worker Annotations: remediation/rhcos4-e8-worker-audit-rules-dac-modification-chmod:", "oc -n openshift-compliance annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=", "oc -n openshift-compliance get compliancecheckresults/rhcos4-e8-worker-audit-rules-dac-modification-chmod", "NAME STATUS SEVERITY rhcos4-e8-worker-audit-rules-dac-modification-chmod PASS medium", "oc logs -l workload=<workload_name> -c <container_name>", "spec: config: resources: limits: memory: 500Mi", "oc patch sub compliance-operator -nopenshift-compliance --patch-file co-memlimit-patch.yaml --type=merge", "kind: Subscription metadata: name: compliance-operator namespace: openshift-compliance spec: package: package-name channel: stable config: resources: requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\"", "oc get pod ocp4-pci-dss-api-checks-pod -w", "NAME READY STATUS RESTARTS AGE ocp4-pci-dss-api-checks-pod 0/2 Init:1/2 8 (5m56s ago) 25m ocp4-pci-dss-api-checks-pod 0/2 Init:OOMKilled 8 (6m19s ago) 26m", "timeout: 30m strictNodeScan: true metadata: name: default namespace: openshift-compliance kind: ScanSetting showNotApplicable: false rawResultStorage: nodeSelector: node-role.kubernetes.io/master: '' pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists schedule: 0 1 * * * roles: - master - worker apiVersion: compliance.openshift.io/v1alpha1 maxRetryOnTimeout: 3 scanTolerations: - operator: Exists scanLimits: memory: 1024Mi 1", "oc apply -f scansetting.yaml", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: 
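# (clarifying comment added to this example) rotation below keeps raw results from the
# last three scans on the PV before older ones are pruned; the timeout and
# maxRetryOnTimeout fields at the end bound how long a single scan may run and how many
# times it is retried.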
rotation: 3 size: 1Gi roles: - worker - master scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *' timeout: '10m0s' 1 maxRetryOnTimeout: 3 2", "podman run --rm -v ~/.local/bin:/mnt/out:Z registry.redhat.io/compliance/oc-compliance-rhel8:stable /bin/cp /usr/bin/oc-compliance /mnt/out/", "W0611 20:35:46.486903 11354 manifest.go:440] Chose linux/amd64 manifest from the manifest list.", "oc compliance fetch-raw <object-type> <object-name> -o <output-path>", "oc compliance fetch-raw scansettingbindings my-binding -o /tmp/", "Fetching results for my-binding scans: ocp4-cis, ocp4-cis-node-worker, ocp4-cis-node-master Fetching raw compliance results for scan 'ocp4-cis'.... The raw compliance results are available in the following directory: /tmp/ocp4-cis Fetching raw compliance results for scan 'ocp4-cis-node-worker'........ The raw compliance results are available in the following directory: /tmp/ocp4-cis-node-worker Fetching raw compliance results for scan 'ocp4-cis-node-master'... The raw compliance results are available in the following directory: /tmp/ocp4-cis-node-master", "ls /tmp/ocp4-cis-node-master/", "ocp4-cis-node-master-ip-10-0-128-89.ec2.internal-pod.xml.bzip2 ocp4-cis-node-master-ip-10-0-150-5.ec2.internal-pod.xml.bzip2 ocp4-cis-node-master-ip-10-0-163-32.ec2.internal-pod.xml.bzip2", "bunzip2 -c resultsdir/worker-scan/worker-scan-stage-459-tqkg7-compute-0-pod.xml.bzip2 > resultsdir/worker-scan/worker-scan-ip-10-0-170-231.us-east-2.compute.internal-pod.xml", "ls resultsdir/worker-scan/", "worker-scan-ip-10-0-170-231.us-east-2.compute.internal-pod.xml worker-scan-stage-459-tqkg7-compute-0-pod.xml.bzip2 worker-scan-stage-459-tqkg7-compute-1-pod.xml.bzip2", "oc compliance rerun-now scansettingbindings my-binding", "Rerunning scans from 'my-binding': ocp4-cis Re-running scan 'openshift-compliance/ocp4-cis'", "oc compliance bind [--dry-run] -N <binding name> [-S <scansetting name>] <objtype/objname> [..<objtype/objname>]", "oc get profile.compliance -n openshift-compliance", "NAME AGE VERSION ocp4-cis 3h49m 1.5.0 ocp4-cis-1-4 3h49m 1.4.0 ocp4-cis-1-5 3h49m 1.5.0 ocp4-cis-node 3h49m 1.5.0 ocp4-cis-node-1-4 3h49m 1.4.0 ocp4-cis-node-1-5 3h49m 1.5.0 ocp4-e8 3h49m ocp4-high 3h49m Revision 4 ocp4-high-node 3h49m Revision 4 ocp4-high-node-rev-4 3h49m Revision 4 ocp4-high-rev-4 3h49m Revision 4 ocp4-moderate 3h49m Revision 4 ocp4-moderate-node 3h49m Revision 4 ocp4-moderate-node-rev-4 3h49m Revision 4 ocp4-moderate-rev-4 3h49m Revision 4 ocp4-nerc-cip 3h49m ocp4-nerc-cip-node 3h49m ocp4-pci-dss 3h49m 3.2.1 ocp4-pci-dss-3-2 3h49m 3.2.1 ocp4-pci-dss-4-0 3h49m 4.0.0 ocp4-pci-dss-node 3h49m 3.2.1 ocp4-pci-dss-node-3-2 3h49m 3.2.1 ocp4-pci-dss-node-4-0 3h49m 4.0.0 ocp4-stig 3h49m V2R1 ocp4-stig-node 3h49m V2R1 ocp4-stig-node-v1r1 3h49m V1R1 ocp4-stig-node-v2r1 3h49m V2R1 ocp4-stig-v1r1 3h49m V1R1 ocp4-stig-v2r1 3h49m V2R1 rhcos4-e8 3h49m rhcos4-high 3h49m Revision 4 rhcos4-high-rev-4 3h49m Revision 4 rhcos4-moderate 3h49m Revision 4 rhcos4-moderate-rev-4 3h49m Revision 4 rhcos4-nerc-cip 3h49m rhcos4-stig 3h49m V2R1 rhcos4-stig-v1r1 3h49m V1R1 rhcos4-stig-v2r1 3h49m V2R1", "oc get scansettings -n openshift-compliance", "NAME AGE default 10m default-auto-apply 10m", "oc compliance bind -N my-binding profile/ocp4-cis profile/ocp4-cis-node", "Creating ScanSettingBinding my-binding", "oc compliance controls profile ocp4-cis-node", "+-----------+----------+ | FRAMEWORK | CONTROLS | +-----------+----------+ | CIS-OCP | 1.1.1 | + +----------+ | | 
1.1.10 | + +----------+ | | 1.1.11 | + +----------+", "oc compliance fetch-fixes profile ocp4-cis -o /tmp", "No fixes to persist for rule 'ocp4-api-server-api-priority-flowschema-catch-all' 1 No fixes to persist for rule 'ocp4-api-server-api-priority-gate-enabled' No fixes to persist for rule 'ocp4-api-server-audit-log-maxbackup' Persisted rule fix to /tmp/ocp4-api-server-audit-log-maxsize.yaml No fixes to persist for rule 'ocp4-api-server-audit-log-path' No fixes to persist for rule 'ocp4-api-server-auth-mode-no-aa' No fixes to persist for rule 'ocp4-api-server-auth-mode-node' No fixes to persist for rule 'ocp4-api-server-auth-mode-rbac' No fixes to persist for rule 'ocp4-api-server-basic-auth' No fixes to persist for rule 'ocp4-api-server-bind-address' No fixes to persist for rule 'ocp4-api-server-client-ca' Persisted rule fix to /tmp/ocp4-api-server-encryption-provider-cipher.yaml Persisted rule fix to /tmp/ocp4-api-server-encryption-provider-config.yaml", "head /tmp/ocp4-api-server-audit-log-maxsize.yaml", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: maximumFileSizeMegabytes: 100", "oc get complianceremediations -n openshift-compliance", "NAME STATE ocp4-cis-api-server-encryption-provider-cipher NotApplied ocp4-cis-api-server-encryption-provider-config NotApplied", "oc compliance fetch-fixes complianceremediations ocp4-cis-api-server-encryption-provider-cipher -o /tmp", "Persisted compliance remediation fix to /tmp/ocp4-cis-api-server-encryption-provider-cipher.yaml", "head /tmp/ocp4-cis-api-server-encryption-provider-cipher.yaml", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: encryption: type: aescbc", "oc compliance view-result ocp4-cis-scheduler-no-bind-address", "oc create -f <file-name>.yaml", "apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-file-integrity", "oc create -f <file-name>.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: file-integrity-operator namespace: openshift-file-integrity spec: targetNamespaces: - openshift-file-integrity", "oc create -f <file-name>.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: file-integrity-operator namespace: openshift-file-integrity spec: channel: \"stable\" installPlanApproval: Automatic name: file-integrity-operator source: redhat-operators sourceNamespace: openshift-marketplace", "oc get csv -n openshift-file-integrity", "oc get deploy -n openshift-file-integrity", "apiVersion: fileintegrity.openshift.io/v1alpha1 kind: FileIntegrity metadata: name: worker-fileintegrity namespace: openshift-file-integrity spec: nodeSelector: 1 node-role.kubernetes.io/worker: \"\" tolerations: 2 - key: \"myNode\" operator: \"Exists\" effect: \"NoSchedule\" config: 3 name: \"myconfig\" namespace: \"openshift-file-integrity\" key: \"config\" gracePeriod: 20 4 maxBackups: 5 5 initialDelay: 60 6 debug: false status: phase: Active 7", "oc apply -f worker-fileintegrity.yaml -n openshift-file-integrity", "oc get fileintegrities -n openshift-file-integrity", "NAME AGE worker-fileintegrity 14s", "oc get fileintegrities/worker-fileintegrity -o jsonpath=\"{ .status.phase }\"", "Active", "oc get fileintegritynodestatuses", "NAME AGE worker-fileintegrity-ip-10-0-130-192.ec2.internal 101s worker-fileintegrity-ip-10-0-147-133.ec2.internal 109s worker-fileintegrity-ip-10-0-165-160.ec2.internal 102s", "oc get 
fileintegritynodestatuses.fileintegrity.openshift.io -ojsonpath='{.items[*].results}' | jq", "oc get fileintegritynodestatuses -w", "NAME NODE STATUS example-fileintegrity-ip-10-0-134-186.us-east-2.compute.internal ip-10-0-134-186.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-150-230.us-east-2.compute.internal ip-10-0-150-230.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-169-137.us-east-2.compute.internal ip-10-0-169-137.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-180-200.us-east-2.compute.internal ip-10-0-180-200.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-194-66.us-east-2.compute.internal ip-10-0-194-66.us-east-2.compute.internal Failed example-fileintegrity-ip-10-0-222-188.us-east-2.compute.internal ip-10-0-222-188.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-134-186.us-east-2.compute.internal ip-10-0-134-186.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-222-188.us-east-2.compute.internal ip-10-0-222-188.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-194-66.us-east-2.compute.internal ip-10-0-194-66.us-east-2.compute.internal Failed example-fileintegrity-ip-10-0-150-230.us-east-2.compute.internal ip-10-0-150-230.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-180-200.us-east-2.compute.internal ip-10-0-180-200.us-east-2.compute.internal Succeeded", "[ { \"condition\": \"Succeeded\", \"lastProbeTime\": \"2020-09-15T12:45:57Z\" } ] [ { \"condition\": \"Succeeded\", \"lastProbeTime\": \"2020-09-15T12:46:03Z\" } ] [ { \"condition\": \"Succeeded\", \"lastProbeTime\": \"2020-09-15T12:45:48Z\" } ]", "oc debug node/ip-10-0-130-192.ec2.internal", "Creating debug namespace/openshift-debug-node-ldfbj Starting pod/ip-10-0-130-192ec2internal-debug To use host binaries, run `chroot /host` Pod IP: 10.0.130.192 If you don't see a command prompt, try pressing enter. sh-4.2# echo \"# integrity test\" >> /host/etc/resolv.conf sh-4.2# exit Removing debug pod Removing debug namespace/openshift-debug-node-ldfbj", "oc get fileintegritynodestatuses.fileintegrity.openshift.io/worker-fileintegrity-ip-10-0-130-192.ec2.internal -ojsonpath='{.results}' | jq -r", "oc get fileintegritynodestatuses.fileintegrity.openshift.io -ojsonpath='{.items[*].results}' | jq", "[ { \"condition\": \"Succeeded\", \"lastProbeTime\": \"2020-09-15T12:54:14Z\" }, { \"condition\": \"Failed\", \"filesChanged\": 1, \"lastProbeTime\": \"2020-09-15T12:57:20Z\", \"resultConfigMapName\": \"aide-ds-worker-fileintegrity-ip-10-0-130-192.ec2.internal-failed\", \"resultConfigMapNamespace\": \"openshift-file-integrity\" } ]", "oc describe cm aide-ds-worker-fileintegrity-ip-10-0-130-192.ec2.internal-failed", "Name: aide-ds-worker-fileintegrity-ip-10-0-130-192.ec2.internal-failed Namespace: openshift-file-integrity Labels: file-integrity.openshift.io/node=ip-10-0-130-192.ec2.internal file-integrity.openshift.io/owner=worker-fileintegrity file-integrity.openshift.io/result-log= Annotations: file-integrity.openshift.io/files-added: 0 file-integrity.openshift.io/files-changed: 1 file-integrity.openshift.io/files-removed: 0 Data integritylog: ------ AIDE 0.15.1 found differences between database and filesystem!! 
Start timestamp: 2020-09-15 12:58:15 Summary: Total number of files: 31553 Added files: 0 Removed files: 0 Changed files: 1 --------------------------------------------------- Changed files: --------------------------------------------------- changed: /hostroot/etc/resolv.conf --------------------------------------------------- Detailed information about changes: --------------------------------------------------- File: /hostroot/etc/resolv.conf SHA512 : sTQYpB/AL7FeoGtu/1g7opv6C+KT1CBJ , qAeM+a8yTgHPnIHMaRlS+so61EN8VOpg Events: <none>", "oc get cm <failure-cm-name> -o json | jq -r '.data.integritylog' | base64 -d | gunzip", "oc get events --field-selector reason=FileIntegrityStatus", "LAST SEEN TYPE REASON OBJECT MESSAGE 97s Normal FileIntegrityStatus fileintegrity/example-fileintegrity Pending 67s Normal FileIntegrityStatus fileintegrity/example-fileintegrity Initializing 37s Normal FileIntegrityStatus fileintegrity/example-fileintegrity Active", "oc get events --field-selector reason=NodeIntegrityStatus", "LAST SEEN TYPE REASON OBJECT MESSAGE 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-134-173.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-168-238.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-169-175.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-152-92.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-158-144.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-131-30.ec2.internal 87m Warning NodeIntegrityStatus fileintegrity/example-fileintegrity node ip-10-0-152-92.ec2.internal has changed! a:1,c:1,r:0 \\ log:openshift-file-integrity/aide-ds-example-fileintegrity-ip-10-0-152-92.ec2.internal-failed", "oc get events --field-selector reason=NodeIntegrityStatus", "LAST SEEN TYPE REASON OBJECT MESSAGE 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-134-173.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-168-238.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-169-175.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-152-92.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-158-144.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-131-30.ec2.internal 87m Warning NodeIntegrityStatus fileintegrity/example-fileintegrity node ip-10-0-152-92.ec2.internal has changed! a:1,c:1,r:0 \\ log:openshift-file-integrity/aide-ds-example-fileintegrity-ip-10-0-152-92.ec2.internal-failed 40m Warning NodeIntegrityStatus fileintegrity/example-fileintegrity node ip-10-0-152-92.ec2.internal has changed! 
a:3,c:1,r:0 \\ log:openshift-file-integrity/aide-ds-example-fileintegrity-ip-10-0-152-92.ec2.internal-failed", "oc explain fileintegrity.spec", "oc explain fileintegrity.spec.config", "oc describe cm/worker-fileintegrity", "@@define DBDIR /hostroot/etc/kubernetes @@define LOGDIR /hostroot/etc/kubernetes database=file:@@{DBDIR}/aide.db.gz database_out=file:@@{DBDIR}/aide.db.gz gzip_dbout=yes verbose=5 report_url=file:@@{LOGDIR}/aide.log report_url=stdout PERMS = p+u+g+acl+selinux+xattrs CONTENT_EX = sha512+ftype+p+u+g+n+acl+selinux+xattrs /hostroot/boot/ CONTENT_EX /hostroot/root/\\..* PERMS /hostroot/root/ CONTENT_EX", "oc extract cm/worker-fileintegrity --keys=aide.conf", "vim aide.conf", "/hostroot/etc/kubernetes/static-pod-resources !/hostroot/etc/kubernetes/aide.* !/hostroot/etc/kubernetes/manifests !/hostroot/etc/docker/certs.d !/hostroot/etc/selinux/targeted !/hostroot/etc/openvswitch/conf.db", "!/opt/mydaemon/", "/hostroot/etc/ CONTENT_EX", "oc create cm master-aide-conf --from-file=aide.conf", "apiVersion: fileintegrity.openshift.io/v1alpha1 kind: FileIntegrity metadata: name: master-fileintegrity namespace: openshift-file-integrity spec: nodeSelector: node-role.kubernetes.io/master: \"\" config: name: master-aide-conf namespace: openshift-file-integrity", "oc describe cm/master-fileintegrity | grep /opt/mydaemon", "!/hostroot/opt/mydaemon", "oc annotate fileintegrities/worker-fileintegrity file-integrity.openshift.io/re-init=", "ls -lR /host/etc/kubernetes/aide.* -rw-------. 1 root root 1839782 Sep 17 15:08 /host/etc/kubernetes/aide.db.gz -rw-------. 1 root root 1839783 Sep 17 14:30 /host/etc/kubernetes/aide.db.gz.backup-20200917T15_07_38 -rw-------. 1 root root 73728 Sep 17 15:07 /host/etc/kubernetes/aide.db.gz.backup-20200917T15_07_55 -rw-r--r--. 1 root root 0 Sep 17 15:08 /host/etc/kubernetes/aide.log -rw-------. 1 root root 613 Sep 17 15:07 /host/etc/kubernetes/aide.log.backup-20200917T15_07_38 -rw-r--r--. 
1 root root 0 Sep 17 15:07 /host/etc/kubernetes/aide.log.backup-20200917T15_07_55", "oc -n openshift-file-integrity get ds/aide-worker-fileintegrity", "oc -n openshift-file-integrity get pods -lapp=aide-worker-fileintegrity", "oc -n openshift-file-integrity logs pod/aide-worker-fileintegrity-mr8x6", "Starting the AIDE runner daemon initializing AIDE db initialization finished running aide check", "oc get fileintegrities/worker-fileintegrity -o jsonpath=\"{ .status }\"", "oc -n openshift-file-integrity get pods -lapp=aide-worker-fileintegrity", "apiVersion: v1 kind: Namespace metadata: name: openshift-security-profiles labels: openshift.io/cluster-monitoring: \"true\"", "oc create -f namespace-object.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: security-profiles-operator namespace: openshift-security-profiles", "oc create -f operator-group-object.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: security-profiles-operator-sub namespace: openshift-security-profiles spec: channel: release-alpha-rhel-8 installPlanApproval: Automatic name: security-profiles-operator source: redhat-operators sourceNamespace: openshift-marketplace", "oc create -f subscription-object.yaml", "oc get csv -n openshift-security-profiles", "oc get deploy -n openshift-security-profiles", "oc -n openshift-security-profiles patch spod spod --type=merge -p '{\"spec\":{\"verbosity\":1}}'", "securityprofilesoperatordaemon.security-profiles-operator.x-k8s.io/spod patched", "oc new-project my-namespace", "apiVersion: security-profiles-operator.x-k8s.io/v1beta1 kind: SeccompProfile metadata: namespace: my-namespace name: profile1 spec: defaultAction: SCMP_ACT_LOG", "apiVersion: v1 kind: Pod metadata: name: test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: Localhost localhostProfile: operator/my-namespace/profile1.json containers: - name: test-container image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "oc -n my-namespace get seccompprofile profile1 --output wide", "NAME STATUS AGE SECCOMPPROFILE.LOCALHOSTPROFILE profile1 Installed 14s operator/my-namespace/profile1.json", "oc get sp profile1 --output=jsonpath='{.status.localhostProfile}'", "operator/my-namespace/profile1.json", "spec: template: spec: securityContext: seccompProfile: type: Localhost localhostProfile: operator/my-namespace/profile1.json", "oc -n my-namespace patch deployment myapp --patch-file patch.yaml --type=merge", "deployment.apps/myapp patched", "oc -n my-namespace get deployment myapp --output=jsonpath='{.spec.template.spec.securityContext}' | jq .", "{ \"seccompProfile\": { \"localhostProfile\": \"operator/my-namespace/profile1.json\", \"type\": \"localhost\" } }", "apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileBinding metadata: namespace: my-namespace name: nginx-binding spec: profileRef: kind: SeccompProfile 1 name: profile 2 image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 3", "oc label ns my-namespace spo.x-k8s.io/enable-binding=true", "apiVersion: v1 kind: Pod metadata: name: test-pod spec: containers: - name: test-container image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21", "oc create -f test-pod.yaml", "oc get pod test-pod -o jsonpath='{.spec.containers[*].securityContext.seccompProfile}'", "{\"localhostProfile\":\"operator/my-namespace/profile.json\",\"type\":\"Localhost\"}", "oc new-project 
my-namespace", "oc label ns my-namespace spo.x-k8s.io/enable-recording=true", "apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileRecording metadata: namespace: my-namespace name: test-recording spec: kind: SeccompProfile recorder: logs podSelector: matchLabels: app: my-app", "apiVersion: v1 kind: Pod metadata: namespace: my-namespace name: my-pod labels: app: my-app spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: nginx image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 ports: - containerPort: 8080 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] - name: redis image: quay.io/security-profiles-operator/redis:6.2.1 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "oc -n my-namespace get pods", "NAME READY STATUS RESTARTS AGE my-pod 2/2 Running 0 18s", "oc -n openshift-security-profiles logs --since=1m --selector name=spod -c log-enricher", "I0523 14:19:08.747313 430694 enricher.go:445] log-enricher \"msg\"=\"audit\" \"container\"=\"redis\" \"executable\"=\"/usr/local/bin/redis-server\" \"namespace\"=\"my-namespace\" \"node\"=\"xiyuan-23-5g2q9-worker-eastus2-6rpgf\" \"pid\"=656802 \"pod\"=\"my-pod\" \"syscallID\"=0 \"syscallName\"=\"read\" \"timestamp\"=\"1684851548.745:207179\" \"type\"=\"seccomp\"", "oc -n my-namepace delete pod my-pod", "oc get seccompprofiles -lspo.x-k8s.io/recording-id=test-recording -n my-namespace", "NAME STATUS AGE test-recording-nginx Installed 2m48s test-recording-redis Installed 2m48s", "apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileRecording metadata: # The name of the Recording is the same as the resulting SeccompProfile CRD # after reconciliation. name: test-recording namespace: my-namespace spec: kind: SeccompProfile recorder: logs mergeStrategy: containers podSelector: matchLabels: app: sp-record", "oc label ns my-namespace security.openshift.io/scc.podSecurityLabelSync=false pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/audit=privileged pod-security.kubernetes.io/warn=privileged --overwrite=true", "apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deploy namespace: my-namespace spec: replicas: 3 selector: matchLabels: app: sp-record template: metadata: labels: app: sp-record spec: serviceAccountName: spo-record-sa containers: - name: nginx-record image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 ports: - containerPort: 8080", "oc delete deployment nginx-deploy -n my-namespace", "oc delete profilerecording test-recording -n my-namespace", "oc get seccompprofiles -lspo.x-k8s.io/recording-id=test-recording -n my-namespace", "NAME STATUS AGE test-recording-nginx-record Installed 55s", "oc get seccompprofiles test-recording-nginx-record -o yaml", "oc new-project nginx-deploy", "apiVersion: security-profiles-operator.x-k8s.io/v1alpha2 kind: SelinuxProfile metadata: name: nginx-secure namespace: nginx-deploy spec: allow: '@self': tcp_socket: - listen http_cache_port_t: tcp_socket: - name_bind node_t: tcp_socket: - node_bind inherit: - kind: System name: container", "oc wait --for=condition=ready -n nginx-deploy selinuxprofile nginx-secure", "selinuxprofile.security-profiles-operator.x-k8s.io/nginx-secure condition met", "oc -n openshift-security-profiles rsh -c selinuxd ds/spod", "cat /etc/selinux.d/nginx-secure_nginx-deploy.cil", "(block nginx-secure_nginx-deploy (blockinherit container) (allow process nginx-secure_nginx-deploy.process ( 
tcp_socket ( listen ))) (allow process http_cache_port_t ( tcp_socket ( name_bind ))) (allow process node_t ( tcp_socket ( node_bind ))) )", "semodule -l | grep nginx-secure", "nginx-secure_nginx-deploy", "oc label ns nginx-deploy security.openshift.io/scc.podSecurityLabelSync=false", "oc label ns nginx-deploy --overwrite=true pod-security.kubernetes.io/enforce=privileged", "oc get selinuxprofile.security-profiles-operator.x-k8s.io/nginx-secure -n nginx-deploy -ojsonpath='{.status.usage}'", "nginx-secure_nginx-deploy.process", "apiVersion: v1 kind: Pod metadata: name: nginx-secure namespace: nginx-deploy spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - image: nginxinc/nginx-unprivileged:1.21 name: nginx securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] seLinuxOptions: # NOTE: This uses an appropriate SELinux type type: nginx-secure_nginx-deploy.process", "apiVersion: security-profiles-operator.x-k8s.io/v1alpha2 kind: SelinuxProfile metadata: name: nginx-secure namespace: nginx-deploy spec: permissive: true", "apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileBinding metadata: namespace: my-namespace name: nginx-binding spec: profileRef: kind: SelinuxProfile 1 name: profile 2 image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 3", "oc label ns my-namespace spo.x-k8s.io/enable-binding=true", "apiVersion: v1 kind: Pod metadata: name: test-pod spec: containers: - name: test-container image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21", "oc create -f test-pod.yaml", "oc get pod test-pod -o jsonpath='{.spec.containers[*].securityContext.seLinuxOptions.type}'", "profile_nginx-binding.process", "oc new-project nginx-secure", "kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: spo-nginx namespace: nginx-secure subjects: - kind: ServiceAccount name: spo-deploy-test roleRef: kind: Role name: spo-nginx apiGroup: rbac.authorization.k8s.io", "apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: creationTimestamp: null name: spo-nginx namespace: nginx-secure rules: - apiGroups: - security.openshift.io resources: - securitycontextconstraints resourceNames: - privileged verbs: - use", "apiVersion: v1 kind: ServiceAccount metadata: creationTimestamp: null name: spo-deploy-test namespace: nginx-secure", "apiVersion: apps/v1 kind: Deployment metadata: name: selinux-test namespace: nginx-secure labels: app: selinux-test spec: replicas: 3 selector: matchLabels: app: selinux-test template: metadata: labels: app: selinux-test spec: serviceAccountName: spo-deploy-test securityContext: seLinuxOptions: type: nginx-secure_nginx-secure.process 1 containers: - name: nginx-unpriv image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 ports: - containerPort: 8080", "oc new-project my-namespace", "oc label ns my-namespace spo.x-k8s.io/enable-recording=true", "apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileRecording metadata: namespace: my-namespace name: test-recording spec: kind: SelinuxProfile recorder: logs podSelector: matchLabels: app: my-app", "apiVersion: v1 kind: Pod metadata: namespace: my-namespace name: my-pod labels: app: my-app spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: nginx image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 ports: - containerPort: 8080 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]
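# (clarifying comment added to this example) Each container in the recorded pod yields
# its own SelinuxProfile once the pod is deleted, as the test-recording-nginx and
# test-recording-redis results further on show.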
- name: redis image: quay.io/security-profiles-operator/redis:6.2.1 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "oc -n my-namespace get pods", "NAME READY STATUS RESTARTS AGE my-pod 2/2 Running 0 18s", "oc -n openshift-security-profiles logs --since=1m --selector name=spod -c log-enricher", "I0517 13:55:36.383187 348295 enricher.go:376] log-enricher \"msg\"=\"audit\" \"container\"=\"redis\" \"namespace\"=\"my-namespace\" \"node\"=\"ip-10-0-189-53.us-east-2.compute.internal\" \"perm\"=\"name_bind\" \"pod\"=\"my-pod\" \"profile\"=\"test-recording_redis_6kmrb_1684331729\" \"scontext\"=\"system_u:system_r:selinuxrecording.process:s0:c4,c27\" \"tclass\"=\"tcp_socket\" \"tcontext\"=\"system_u:object_r:redis_port_t:s0\" \"timestamp\"=\"1684331735.105:273965\" \"type\"=\"selinux\"", "oc -n my-namespace delete pod my-pod", "oc get selinuxprofiles -lspo.x-k8s.io/recording-id=test-recording -n my-namespace", "NAME USAGE STATE test-recording-nginx test-recording-nginx_my-namespace.process Installed test-recording-redis test-recording-redis_my-namespace.process Installed", "apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileRecording metadata: # The name of the Recording is the same as the resulting SelinuxProfile CRD # after reconciliation. name: test-recording namespace: my-namespace spec: kind: SelinuxProfile recorder: logs mergeStrategy: containers podSelector: matchLabels: app: sp-record", "oc label ns my-namespace security.openshift.io/scc.podSecurityLabelSync=false pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/audit=privileged pod-security.kubernetes.io/warn=privileged --overwrite=true", "apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deploy namespace: my-namespace spec: replicas: 3 selector: matchLabels: app: sp-record template: metadata: labels: app: sp-record spec: serviceAccountName: spo-record-sa containers: - name: nginx-record image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 ports: - containerPort: 8080", "oc delete deployment nginx-deploy -n my-namespace", "oc delete profilerecording test-recording -n my-namespace", "oc get selinuxprofiles -lspo.x-k8s.io/recording-id=test-recording -n my-namespace", "NAME USAGE STATE test-recording-nginx-record test-recording-nginx-record_my-namespace.process Installed", "oc get selinuxprofiles test-recording-nginx-record -o yaml", "oc -n openshift-security-profiles patch spod spod --type merge -p '{\"spec\":{\"allowedSyscalls\": [\"exit\", \"exit_group\", \"futex\", \"nanosleep\"]}}'", "apiVersion: security-profiles-operator.x-k8s.io/v1beta1 kind: SeccompProfile metadata: namespace: my-namespace name: example-name spec: defaultAction: SCMP_ACT_ERRNO baseProfileName: runc-v1.0.0 syscalls: - action: SCMP_ACT_ALLOW names: - exit_group", "oc -n openshift-security-profiles patch spod spod --type=merge -p '{\"spec\":{\"enableMemoryOptimization\":true}}'", "apiVersion: v1 kind: Pod metadata: name: my-recording-pod labels: spo.x-k8s.io/enable-recording: \"true\"", "oc -n openshift-security-profiles patch spod spod --type merge -p '{\"spec\":{\"daemonResourceRequirements\": { \"requests\": {\"memory\": \"256Mi\", \"cpu\": \"250m\"}, \"limits\": {\"memory\": \"512Mi\", \"cpu\": \"500m\"}}}}'", "oc -n openshift-security-profiles patch spod spod --type=merge -p '{\"spec\":{\"priorityClassName\":\"my-priority-class\"}}'", "securityprofilesoperatordaemon.openshift-security-profiles.x-k8s.io/spod patched", "oc get svc/metrics -n openshift-security-profiles", 
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE metrics ClusterIP 10.0.0.228 <none> 443/TCP 43s", "oc run --rm -i --restart=Never --image=registry.fedoraproject.org/fedora-minimal:latest -n openshift-security-profiles metrics-test -- bash -c 'curl -ks -H \"Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)\" https://metrics.openshift-security-profiles/metrics-spod'", "# HELP security_profiles_operator_seccomp_profile_total Counter about seccomp profile operations. # TYPE security_profiles_operator_seccomp_profile_total counter security_profiles_operator_seccomp_profile_total{operation=\"delete\"} 1 security_profiles_operator_seccomp_profile_total{operation=\"update\"} 2", "oc get clusterrolebinding spo-metrics-client -o wide", "NAME ROLE AGE USERS GROUPS SERVICEACCOUNTS spo-metrics-client ClusterRole/spo-metrics-client 35m openshift-security-profiles/default", "oc -n openshift-security-profiles patch spod spod --type=merge -p '{\"spec\":{\"enableLogEnricher\":true}}'", "securityprofilesoperatordaemon.security-profiles-operator.x-k8s.io/spod patched", "oc -n openshift-security-profiles logs -f ds/spod log-enricher", "I0623 12:51:04.257814 1854764 deleg.go:130] setup \"msg\"=\"starting component: log-enricher\" \"buildDate\"=\"1980-01-01T00:00:00Z\" \"compiler\"=\"gc\" \"gitCommit\"=\"unknown\" \"gitTreeState\"=\"clean\" \"goVersion\"=\"go1.16.2\" \"platform\"=\"linux/amd64\" \"version\"=\"0.4.0-dev\" I0623 12:51:04.257890 1854764 enricher.go:44] log-enricher \"msg\"=\"Starting log-enricher on node: 127.0.0.1\" I0623 12:51:04.257898 1854764 enricher.go:46] log-enricher \"msg\"=\"Connecting to local GRPC server\" I0623 12:51:04.258061 1854764 enricher.go:69] log-enricher \"msg\"=\"Reading from file /var/log/audit/audit.log\" 2021/06/23 12:51:04 Seeked /var/log/audit/audit.log - &{Offset:0 Whence:2}", "apiVersion: security-profiles-operator.x-k8s.io/v1beta1 kind: SeccompProfile metadata: name: log namespace: default spec: defaultAction: SCMP_ACT_LOG", "apiVersion: v1 kind: Pod metadata: name: log-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: Localhost localhostProfile: operator/default/log.json containers: - name: log-container image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "oc -n openshift-security-profiles logs -f ds/spod log-enricher", "... I0623 12:59:11.479869 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=3 \"syscallName\"=\"close\" \"timestamp\"=\"1624453150.205:1061\" \"type\"=\"seccomp\" I0623 12:59:11.487323 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=157 \"syscallName\"=\"prctl\" \"timestamp\"=\"1624453150.205:1062\" \"type\"=\"seccomp\" I0623 12:59:11.492157 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=157 \"syscallName\"=\"prctl\" \"timestamp\"=\"1624453150.205:1063\" \"type\"=\"seccomp\" ... 
I0623 12:59:20.258523 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/usr/sbin/nginx\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=12 \"syscallName\"=\"brk\" \"timestamp\"=\"1624453150.235:2873\" \"type\"=\"seccomp\" I0623 12:59:20.263349 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/usr/sbin/nginx\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=21 \"syscallName\"=\"access\" \"timestamp\"=\"1624453150.235:2874\" \"type\"=\"seccomp\" I0623 12:59:20.354091 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/usr/sbin/nginx\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=257 \"syscallName\"=\"openat\" \"timestamp\"=\"1624453150.235:2875\" \"type\"=\"seccomp\" I0623 12:59:20.358844 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/usr/sbin/nginx\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=5 \"syscallName\"=\"fstat\" \"timestamp\"=\"1624453150.235:2876\" \"type\"=\"seccomp\" I0623 12:59:20.363510 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/usr/sbin/nginx\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=9 \"syscallName\"=\"mmap\" \"timestamp\"=\"1624453150.235:2877\" \"type\"=\"seccomp\" I0623 12:59:20.454127 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/usr/sbin/nginx\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=3 \"syscallName\"=\"close\" \"timestamp\"=\"1624453150.235:2878\" \"type\"=\"seccomp\" I0623 12:59:20.458654 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/usr/sbin/nginx\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=257 \"syscallName\"=\"openat\" \"timestamp\"=\"1624453150.235:2879\" \"type\"=\"seccomp\" ...", "spec: webhookOptions: - name: recording.spo.io objectSelector: matchExpressions: - key: spo-record operator: In values: - \"true\"", "oc -n openshift-security-profiles patch spod spod -p $(cat /tmp/spod-wh.patch) --type=merge", "oc get MutatingWebhookConfiguration spo-mutating-webhook-configuration -oyaml", "oc -n openshift-security-profiles logs openshift-security-profiles-<id>", "I1019 19:34:14.942464 1 main.go:90] setup \"msg\"=\"starting openshift-security-profiles\" \"buildDate\"=\"2020-10-19T19:31:24Z\" \"compiler\"=\"gc\" \"gitCommit\"=\"a3ef0e1ea6405092268c18f240b62015c247dd9d\" \"gitTreeState\"=\"dirty\" \"goVersion\"=\"go1.15.1\" \"platform\"=\"linux/amd64\" \"version\"=\"0.2.0-dev\" I1019 19:34:15.348389 1 listener.go:44] controller-runtime/metrics \"msg\"=\"metrics server is starting to listen\" \"addr\"=\":8080\" I1019 19:34:15.349076 1 main.go:126] setup \"msg\"=\"starting manager\" I1019 19:34:15.349449 1 internal.go:391] controller-runtime/manager \"msg\"=\"starting metrics server\" \"path\"=\"/metrics\" I1019 19:34:15.350201 1 controller.go:142] controller \"msg\"=\"Starting EventSource\" \"controller\"=\"profile\" \"reconcilerGroup\"=\"security-profiles-operator.x-k8s.io\" 
\"reconcilerKind\"=\"SeccompProfile\" \"source\"={\"Type\":{\"metadata\":{\"creationTimestamp\":null},\"spec\":{\"defaultAction\":\"\"}}} I1019 19:34:15.450674 1 controller.go:149] controller \"msg\"=\"Starting Controller\" \"controller\"=\"profile\" \"reconcilerGroup\"=\"security-profiles-operator.x-k8s.io\" \"reconcilerKind\"=\"SeccompProfile\" I1019 19:34:15.450757 1 controller.go:176] controller \"msg\"=\"Starting workers\" \"controller\"=\"profile\" \"reconcilerGroup\"=\"security-profiles-operator.x-k8s.io\" \"reconcilerKind\"=\"SeccompProfile\" \"worker count\"=1 I1019 19:34:15.453102 1 profile.go:148] profile \"msg\"=\"Reconciled profile from SeccompProfile\" \"namespace\"=\"openshift-security-profiles\" \"profile\"=\"nginx-1.19.1\" \"name\"=\"nginx-1.19.1\" \"resource version\"=\"728\" I1019 19:34:15.453618 1 profile.go:148] profile \"msg\"=\"Reconciled profile from SeccompProfile\" \"namespace\"=\"openshift-security-profiles\" \"profile\"=\"openshift-security-profiles\" \"name\"=\"openshift-security-profiles\" \"resource version\"=\"729\"", "oc exec -t -n openshift-security-profiles openshift-security-profiles-<id> -- ls /var/lib/kubelet/seccomp/operator/my-namespace/my-workload", "profile-block.json profile-complain.json", "oc delete MutatingWebhookConfiguration spo-mutating-webhook-configuration", "oc get packagemanifests -n openshift-marketplace | grep tang", "tang-operator Red Hat", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: tang-operator namespace: openshift-operators spec: channel: stable 1 installPlanApproval: Automatic name: tang-operator 2 source: redhat-operators 3 sourceNamespace: openshift-marketplace 4", "oc apply -f tang-operator.yaml", "oc -n openshift-operators get pods", "NAME READY STATUS RESTARTS AGE tang-operator-controller-manager-694b754bd6-4zk7x 2/2 Running 0 12s", "oc -n nbde describe tangserver", "... Status: Active Keys: File Name: QS82aXnPKA4XpfHr3umbA0r2iTbRcpWQ0VI2Qdhi6xg Generated: 2022-02-08 15:44:17.030090484 +0000 sha1: PvYQKtrTuYsMV2AomUeHrUWkCGg sha256: QS82aXnPKA4XpfHr3umbA0r2iTbRcpWQ0VI2Qdhi6xg ...", "apiVersion: daemons.redhat.com/v1alpha1 kind: TangServer metadata: name: tangserver namespace: nbde finalizers: - finalizer.daemons.tangserver.redhat.com spec: replicas: 1 hiddenKeys: - sha1: \"PvYQKtrTuYsMV2AomUeHrUWkCGg\" 1", "oc apply -f minimal-keyretrieve-rotate-tangserver.yaml", "oc -n nbde describe tangserver", "... Spec: Hidden Keys: sha1: PvYQKtrTuYsMV2AomUeHrUWkCGg Replicas: 1 Status: Active Keys: File Name: T-0wx1HusMeWx4WMOk4eK97Q5u4dY5tamdDs7_ughnY.jwk Generated: 2023-10-25 15:38:18.134939752 +0000 sha1: vVxkNCNq7gygeeA9zrHrbc3_NZ4 sha256: T-0wx1HusMeWx4WMOk4eK97Q5u4dY5tamdDs7_ughnY Hidden Keys: File Name: .QS82aXnPKA4XpfHr3umbA0r2iTbRcpWQ0VI2Qdhi6xg.jwk Generated: 2023-10-25 15:37:29.126928965 +0000 Hidden: 2023-10-25 15:38:13.515467436 +0000 sha1: PvYQKtrTuYsMV2AomUeHrUWkCGg sha256: QS82aXnPKA4XpfHr3umbA0r2iTbRcpWQ0VI2Qdhi6xg ...", "oc -n nbde describe tangserver", "... Status: Active Keys: File Name: PvYQKtrTuYsMV2AomUeHrUWkCGg.jwk Generated: 2022-02-08 15:44:17.030090484 +0000 sha1: PvYQKtrTuYsMV2AomUeHrUWkCGg sha256: QS82aXnPKA4XpfHr3umbA0r2iTbRcpWQ0VI2Qdhi6xg ...", "apiVersion: daemons.redhat.com/v1alpha1 kind: TangServer metadata: name: tangserver namespace: nbde finalizers: - finalizer.daemons.tangserver.redhat.com spec: replicas: 1 hiddenKeys: [] 1", "oc apply -f hidden-keys-deletion-tangserver.yaml", "oc -n nbde describe tangserver", "... 
Spec: Hidden Keys: sha1: PvYQKtrTuYsMV2AomUeHrUWkCGg Replicas: 1 Status: Active Keys: File Name: T-0wx1HusMeWx4WMOk4eK97Q5u4dY5tamdDs7_ughnY.jwk Generated: 2023-10-25 15:38:18.134939752 +0000 sha1: vVxkNCNq7gygeeA9zrHrbc3_NZ4 sha256: T-0wx1HusMeWx4WMOk4eK97Q5u4dY5tamdDs7_ughnY Status: Ready: 1 Running: 1 Service External URL: http://35.222.247.84:7500/adv Tang Server Error: No Events: ...", "curl 2> /dev/null http://34.28.173.205:7500/adv | jq", "{ \"payload\": \"eyJrZXlzIj...eSJdfV19\", \"protected\": \"eyJhbGciOiJFUzUxMiIsImN0eSI6Imp3ay1zZXQranNvbiJ9\", \"signature\": \"AUB0qSFx0FJLeTU...aV_GYWlDx50vCXKNyMMCRx\" }", "oc -n nbde describe tangserver", "... Spec: ... Status: Ready: 1 Running: 1 Service External URL: http://34.28.173.205:7500/adv Tang Server Error: No Events: ...", "curl 2> /dev/null http://34.28.173.205:7500/adv | jq", "{ \"payload\": \"eyJrZXlzIj...eSJdfV19\", \"protected\": \"eyJhbGciOiJFUzUxMiIsImN0eSI6Imp3ay1zZXQranNvbiJ9\", \"signature\": \"AUB0qSFx0FJLeTU...aV_GYWlDx50vCXKNyMMCRx\" }", "oc get pods -n cert-manager", "NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 3m39s cert-manager-cainjector-56cc5f9868-7g9z7 1/1 Running 0 4m5s cert-manager-webhook-d4f79d7f7-9dg9w 1/1 Running 0 4m9s", "oc new-project cert-manager-operator", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-cert-manager-operator namespace: cert-manager-operator spec: targetNamespaces: - \"cert-manager-operator\"", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-cert-manager-operator namespace: cert-manager-operator spec: targetNamespaces: []", "oc create -f operatorGroup.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-cert-manager-operator namespace: cert-manager-operator spec: channel: stable-v1 name: openshift-cert-manager-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Automatic", "oc create -f subscription.yaml", "oc get subscription -n cert-manager-operator", "NAME PACKAGE SOURCE CHANNEL openshift-cert-manager-operator openshift-cert-manager-operator redhat-operators stable-v1", "oc get csv -n cert-manager-operator", "NAME DISPLAY VERSION REPLACES PHASE cert-manager-operator.v1.13.0 cert-manager Operator for Red Hat OpenShift 1.13.0 cert-manager-operator.v1.12.1 Succeeded", "oc get pods -n cert-manager-operator", "NAME READY STATUS RESTARTS AGE cert-manager-operator-controller-manager-695b4d46cb-r4hld 2/2 Running 0 7m4s", "oc get pods -n cert-manager", "NAME READY STATUS RESTARTS AGE cert-manager-58b7f649c4-dp6l4 1/1 Running 0 7m1s cert-manager-cainjector-5565b8f897-gx25h 1/1 Running 0 7m37s cert-manager-webhook-9bc98cbdd-f972x 1/1 Running 0 7m40s", "oc create configmap trusted-ca -n cert-manager", "oc label cm trusted-ca config.openshift.io/inject-trusted-cabundle=true -n cert-manager", "oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type='merge' -p '{\"spec\":{\"config\":{\"env\":[{\"name\":\"TRUSTED_CA_CONFIGMAP_NAME\",\"value\":\"trusted-ca\"}]}}}'", "oc rollout status deployment/cert-manager-operator-controller-manager -n cert-manager-operator && oc rollout status deployment/cert-manager -n cert-manager && oc rollout status deployment/cert-manager-webhook -n cert-manager && oc rollout status deployment/cert-manager-cainjector -n cert-manager", "deployment \"cert-manager-operator-controller-manager\" successfully rolled out deployment \"cert-manager\" successfully rolled out 
deployment \"cert-manager-webhook\" successfully rolled out deployment \"cert-manager-cainjector\" successfully rolled out", "oc get deployment cert-manager -n cert-manager -o=jsonpath={.spec.template.spec.'containers[0].volumeMounts'}", "[{\"mountPath\":\"/etc/pki/tls/certs/cert-manager-tls-ca-bundle.crt\",\"name\":\"trusted-ca\",\"subPath\":\"ca-bundle.crt\"}]", "oc get deployment cert-manager -n cert-manager -o=jsonpath={.spec.template.spec.volumes}", "[{\"configMap\":{\"defaultMode\":420,\"name\":\"trusted-ca\"},\"name\":\"trusted-ca\"}]", "oc edit certmanager cluster", "apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: overrideEnv: - name: HTTP_PROXY value: http://<proxy_url> 1 - name: HTTPS_PROXY value: https://<proxy_url> 2 - name: NO_PROXY value: <ignore_proxy_domains> 3", "oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager", "NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 39s", "oc get pod <redeployed_cert-manager_controller_pod> -n cert-manager -o yaml", "env: - name: HTTP_PROXY value: http://<PROXY_URL> - name: HTTPS_PROXY value: https://<PROXY_URL> - name: NO_PROXY value: <IGNORE_PROXY_DOMAINS>", "oc edit certmanager cluster", "apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: overrideArgs: - '--dns01-recursive-nameservers=<server_address>' 1 - '--dns01-recursive-nameservers-only' 2 - '--acme-http01-solver-nameservers=<host>:<port>' 3 - '--v=<verbosity_level>' 4 - '--metrics-listen-address=<host>:<port>' 5 - '--issuer-ambient-credentials' 6 webhookConfig: overrideArgs: - '--v=4' 7 cainjectorConfig: overrideArgs: - '--v=2' 8", "oc get pods -n cert-manager -o yaml", "metadata: name: cert-manager-6d4b5d4c97-kldwl namespace: cert-manager spec: containers: - args: - --acme-http01-solver-nameservers=1.1.1.1:53 - --cluster-resource-namespace=$(POD_NAMESPACE) - --dns01-recursive-nameservers=1.1.1.1:53 - --dns01-recursive-nameservers-only - --leader-election-namespace=kube-system - --max-concurrent-challenges=60 - --metrics-listen-address=0.0.0.0:9042 - --v=6 metadata: name: cert-manager-cainjector-866c4fd758-ltxxj namespace: cert-manager spec: containers: - args: - --leader-election-namespace=kube-system - --v=2 metadata: name: cert-manager-webhook-6d48f88495-c88gd namespace: cert-manager spec: containers: - args: - --v=4", "oc get certificate", "NAME READY SECRET AGE certificate-from-clusterissuer-route53-ambient True certificate-from-clusterissuer-route53-ambient 8h", "oc edit certmanager cluster", "apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: overrideArgs: - '--enable-certificate-owner-ref'", "oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager -o yaml", "metadata: name: cert-manager-6e4b4d7d97-zmdnb namespace: cert-manager spec: containers: - args: - --enable-certificate-owner-ref", "oc get deployment -n cert-manager", "NAME READY UP-TO-DATE AVAILABLE AGE cert-manager 1/1 1 1 53m cert-manager-cainjector 1/1 1 1 53m cert-manager-webhook 1/1 1 1 53m", "oc get deployment -n cert-manager -o yaml", "metadata: name: cert-manager namespace: cert-manager spec: template: spec: containers: - name: cert-manager-controller resources: {} 1 metadata: name: cert-manager-cainjector namespace: cert-manager spec: template: spec: containers: - name: cert-manager-cainjector resources: {} 2 metadata: name: cert-manager-webhook namespace: cert-manager spec: 
template: spec: containers: - name: cert-manager-webhook resources: {} 3", "oc patch certmanager.operator cluster --type=merge -p=\" spec: controllerConfig: overrideResources: limits: 1 cpu: 200m 2 memory: 64Mi 3 requests: 4 cpu: 10m 5 memory: 16Mi 6 webhookConfig: overrideResources: limits: 7 cpu: 200m 8 memory: 64Mi 9 requests: 10 cpu: 10m 11 memory: 16Mi 12 cainjectorConfig: overrideResources: limits: 13 cpu: 200m 14 memory: 64Mi 15 requests: 16 cpu: 10m 17 memory: 16Mi 18 \"", "certmanager.operator.openshift.io/cluster patched", "oc get deployment -n cert-manager -o yaml", "metadata: name: cert-manager namespace: cert-manager spec: template: spec: containers: - name: cert-manager-controller resources: limits: cpu: 200m memory: 64Mi requests: cpu: 10m memory: 16Mi metadata: name: cert-manager-cainjector namespace: cert-manager spec: template: spec: containers: - name: cert-manager-cainjector resources: limits: cpu: 200m memory: 64Mi requests: cpu: 10m memory: 16Mi metadata: name: cert-manager-webhook namespace: cert-manager spec: template: spec: containers: - name: cert-manager-webhook resources: limits: cpu: 200m memory: 64Mi requests: cpu: 10m memory: 16Mi", "oc patch certmanager.operator cluster --type=merge -p=\" spec: controllerConfig: overrideScheduling: nodeSelector: node-role.kubernetes.io/control-plane: '' 1 tolerations: - key: node-role.kubernetes.io/master operator: Exists effect: NoSchedule 2 webhookConfig: overrideScheduling: nodeSelector: node-role.kubernetes.io/control-plane: '' 3 tolerations: - key: node-role.kubernetes.io/master operator: Exists effect: NoSchedule 4 cainjectorConfig: overrideScheduling: nodeSelector: node-role.kubernetes.io/control-plane: '' 5 tolerations: - key: node-role.kubernetes.io/master operator: Exists effect: NoSchedule\" 6", "oc get pods -n cert-manager -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES cert-manager-58d9c69db4-78mzp 1/1 Running 0 10m 10.129.0.36 ip-10-0-1-106.ec2.internal <none> <none> cert-manager-cainjector-85b6987c66-rhzf7 1/1 Running 0 11m 10.128.0.39 ip-10-0-1-136.ec2.internal <none> <none> cert-manager-webhook-7f54b4b858-29bsp 1/1 Running 0 11m 10.129.0.35 ip-10-0-1-106.ec2.internal <none> <none>", "oc get deployments -n cert-manager -o jsonpath='{range .items[*]}{.metadata.name}{\"\\n\"}{.spec.template.spec.nodeSelector}{\"\\n\"}{.spec.template.spec.tolerations}{\"\\n\\n\"}{end}'", "cert-manager {\"kubernetes.io/os\":\"linux\",\"node-role.kubernetes.io/control-plane\":\"\"} [{\"effect\":\"NoSchedule\",\"key\":\"node-role.kubernetes.io/master\",\"operator\":\"Exists\"}] cert-manager-cainjector {\"kubernetes.io/os\":\"linux\",\"node-role.kubernetes.io/control-plane\":\"\"} [{\"effect\":\"NoSchedule\",\"key\":\"node-role.kubernetes.io/master\",\"operator\":\"Exists\"}] cert-manager-webhook {\"kubernetes.io/os\":\"linux\",\"node-role.kubernetes.io/control-plane\":\"\"} [{\"effect\":\"NoSchedule\",\"key\":\"node-role.kubernetes.io/master\",\"operator\":\"Exists\"}]", "oc get events -n cert-manager --field-selector reason=Scheduled", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cert-manager namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - \"route53:GetChange\" effect: Allow resource: \"arn:aws:route53:::change/*\" - action: - \"route53:ChangeResourceRecordSets\" - \"route53:ListResourceRecordSets\" effect: Allow resource: 
\"arn:aws:route53:::hostedzone/*\" - action: - \"route53:ListHostedZonesByName\" effect: Allow resource: \"*\" secretRef: name: aws-creds namespace: cert-manager serviceAccountNames: - cert-manager", "oc create -f sample-credential-request.yaml", "oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type=merge -p '{\"spec\":{\"config\":{\"env\":[{\"name\":\"CLOUD_CREDENTIALS_SECRET_NAME\",\"value\":\"aws-creds\"}]}}}'", "oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager", "NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 15m39s", "oc get -n cert-manager pod/<cert-manager_controller_pod_name> -o yaml", "spec: containers: - args: volumeMounts: - mountPath: /.aws name: cloud-credentials volumes: - name: cloud-credentials secret: secretName: aws-creds", "mkdir credentials-request", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cert-manager namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - \"route53:GetChange\" effect: Allow resource: \"arn:aws:route53:::change/*\" - action: - \"route53:ChangeResourceRecordSets\" - \"route53:ListResourceRecordSets\" effect: Allow resource: \"arn:aws:route53:::hostedzone/*\" - action: - \"route53:ListHostedZonesByName\" effect: Allow resource: \"*\" secretRef: name: aws-creds namespace: cert-manager serviceAccountNames: - cert-manager", "ccoctl aws create-iam-roles --name <user_defined_name> --region=<aws_region> --credentials-requests-dir=<path_to_credrequests_dir> --identity-provider-arn <oidc_provider_arn> --output-dir=<path_to_output_dir>", "2023/05/15 18:10:34 Role arn:aws:iam::XXXXXXXXXXXX:role/<user_defined_name>-cert-manager-aws-creds created 2023/05/15 18:10:34 Saved credentials configuration to: <path_to_output_dir>/manifests/cert-manager-aws-creds-credentials.yaml 2023/05/15 18:10:35 Updated Role policy for Role <user_defined_name>-cert-manager-aws-creds", "oc -n cert-manager annotate serviceaccount cert-manager eks.amazonaws.com/role-arn=\"<aws_role_arn>\"", "oc delete pods -l app.kubernetes.io/name=cert-manager -n cert-manager", "oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager", "NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 39s", "oc set env -n cert-manager po/<cert_manager_controller_pod_name> --list", "pods/cert-manager-57f9555c54-vbcpg, container cert-manager-controller POD_NAMESPACE from field path metadata.namespace AWS_ROLE_ARN=XXXXXXXXXXXX AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/eks.amazonaws.com/serviceaccount/token", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cert-manager namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/dns.admin secretRef: name: gcp-credentials namespace: cert-manager serviceAccountNames: - cert-manager", "oc create -f sample-credential-request.yaml", "oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type=merge -p '{\"spec\":{\"config\":{\"env\":[{\"name\":\"CLOUD_CREDENTIALS_SECRET_NAME\",\"value\":\"gcp-credentials\"}]}}}'", "oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager", "NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 15m39s", "oc get -n cert-manager pod/<cert-manager_controller_pod_name> -o yaml", "spec: containers: - args: 
volumeMounts: - mountPath: /.config/gcloud name: cloud-credentials . volumes: - name: cloud-credentials secret: items: - key: service_account.json path: application_default_credentials.json secretName: gcp-credentials", "mkdir credentials-request", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cert-manager namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/dns.admin secretRef: name: gcp-credentials namespace: cert-manager serviceAccountNames: - cert-manager", "ccoctl gcp create-service-accounts --name <user_defined_name> --output-dir=<path_to_output_dir> --credentials-requests-dir=<path_to_credrequests_dir> --workload-identity-pool <workload_identity_pool> --workload-identity-provider <workload_identity_provider> --project <gcp_project_id>", "ccoctl gcp create-service-accounts --name abcde-20230525-4bac2781 --output-dir=/home/outputdir --credentials-requests-dir=/home/credentials-requests --workload-identity-pool abcde-20230525-4bac2781 --workload-identity-provider abcde-20230525-4bac2781 --project openshift-gcp-devel", "ls <path_to_output_dir>/manifests/*-credentials.yaml | xargs -I{} oc apply -f {}", "oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type=merge -p '{\"spec\":{\"config\":{\"env\":[{\"name\":\"CLOUD_CREDENTIALS_SECRET_NAME\",\"value\":\"gcp-credentials\"}]}}}'", "oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager", "NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 15m39s", "oc get -n cert-manager pod/<cert-manager_controller_pod_name> -o yaml", "spec: containers: - args: volumeMounts: - mountPath: /var/run/secrets/openshift/serviceaccount name: bound-sa-token - mountPath: /.config/gcloud name: cloud-credentials volumes: - name: bound-sa-token projected: sources: - serviceAccountToken: audience: openshift path: token - name: cloud-credentials secret: items: - key: service_account.json path: application_default_credentials.json secretName: gcp-credentials", "apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: acme-cluster-issuer spec: acme:", "apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: letsencrypt-staging 1 spec: acme: preferredChain: \"\" privateKeySecretRef: name: <secret_for_private_key> 2 server: https://acme-staging-v02.api.letsencrypt.org/directory 3 solvers: - http01: ingress: ingressClassName: openshift-default 4", "oc patch ingress/<ingress-name> --type=merge --patch '{\"spec\":{\"ingressClassName\":\"openshift-default\"}}' -n <namespace>", "oc create -f acme-cluster-issuer.yaml", "apiVersion: v1 kind: Namespace metadata: name: my-ingress-namespace 1", "oc create -f namespace.yaml", "apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: sample-ingress 1 namespace: my-ingress-namespace 2 annotations: cert-manager.io/cluster-issuer: letsencrypt-staging 3 spec: ingressClassName: openshift-default 4 tls: - hosts: - <hostname> 5 secretName: sample-tls 6 rules: - host: <hostname> 7 http: paths: - path: / pathType: Prefix backend: service: name: sample-workload 8 port: number: 80", "oc create -f ingress.yaml", "oc edit certmanager cluster", "apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3", "oc new-project <issuer_namespace>", "oc create secret 
generic aws-secret --from-literal=awsSecretAccessKey=<aws_secret_access_key> \\ 1 -n my-issuer-namespace", "apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <letsencrypt_staging> 1 namespace: <issuer_namespace> 2 spec: acme: server: https://acme-staging-v02.api.letsencrypt.org/directory 3 email: \"<email_address>\" 4 privateKeySecretRef: name: <secret_private_key> 5 solvers: - dns01: route53: accessKeyID: <aws_key_id> 6 hostedZoneID: <hosted_zone_id> 7 region: <region_name> 8 secretAccessKeySecretRef: name: \"aws-secret\" 9 key: \"awsSecretAccessKey\" 10", "oc create -f issuer.yaml", "oc edit certmanager cluster", "apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3", "oc new-project <issuer_namespace>", "oc patch certmanager/cluster --type=merge -p='{\"spec\":{\"controllerConfig\":{\"overrideArgs\":[\"--issuer-ambient-credentials\"]}}}'", "apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <letsencrypt_staging> 1 namespace: <issuer_namespace> 2 spec: acme: server: https://acme-staging-v02.api.letsencrypt.org/directory 3 email: \"<email_address>\" 4 privateKeySecretRef: name: <secret_private_key> 5 solvers: - dns01: route53: hostedZoneID: <hosted_zone_id> 6 region: us-east-1", "oc create -f issuer.yaml", "oc edit certmanager cluster", "apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3", "oc new-project my-issuer-namespace", "oc create secret generic clouddns-dns01-solver-svc-acct --from-file=service_account.json=<path/to/gcp_service_account.json> -n my-issuer-namespace", "apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <acme_dns01_clouddns_issuer> 1 namespace: <issuer_namespace> 2 spec: acme: preferredChain: \"\" privateKeySecretRef: name: <secret_private_key> 3 server: https://acme-staging-v02.api.letsencrypt.org/directory 4 solvers: - dns01: cloudDNS: project: <project_id> 5 serviceAccountSecretRef: name: clouddns-dns01-solver-svc-acct 6 key: service_account.json 7", "oc create -f issuer.yaml", "oc edit certmanager cluster", "apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3", "oc new-project <issuer_namespace>", "oc patch certmanager/cluster --type=merge -p='{\"spec\":{\"controllerConfig\":{\"overrideArgs\":[\"--issuer-ambient-credentials\"]}}}'", "apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <acme_dns01_clouddns_issuer> 1 namespace: <issuer_namespace> spec: acme: preferredChain: \"\" privateKeySecretRef: name: <secret_private_key> 2 server: https://acme-staging-v02.api.letsencrypt.org/directory 3 solvers: - dns01: cloudDNS: project: <gcp_project_id> 4", "oc create -f issuer.yaml", "oc edit certmanager cluster", "apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3", "oc new-project my-issuer-namespace", "oc create secret generic <secret_name> --from-literal=<azure_secret_access_key_name>=<azure_secret_access_key_value> \\ 1 2 3 -n my-issuer-namespace", "apiVersion: cert-manager.io/v1 kind: Issuer 
metadata: name: <acme-dns01-azuredns-issuer> 1 namespace: <issuer_namespace> 2 spec: acme: preferredChain: \"\" privateKeySecretRef: name: <secret_private_key> 3 server: https://acme-staging-v02.api.letsencrypt.org/directory 4 solvers: - dns01: azureDNS: clientID: <azure_client_id> 5 clientSecretSecretRef: name: <secret_name> 6 key: <azure_secret_access_key_name> 7 subscriptionID: <azure_subscription_id> 8 tenantID: <azure_tenant_id> 9 resourceGroupName: <azure_dns_zone_resource_group> 10 hostedZoneName: <azure_dns_zone> 11 environment: AzurePublicCloud", "oc create -f issuer.yaml", "apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: <tls_cert> 1 namespace: <issuer_namespace> 2 spec: isCA: false commonName: '<common_name>' 3 secretName: <secret_name> 4 dnsNames: - \"<domain_name>\" 5 issuerRef: name: <issuer_name> 6 kind: Issuer", "oc create -f certificate.yaml", "oc get certificate -w -n <issuer_namespace>", "apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: <tls_cert> 1 namespace: openshift-config spec: isCA: false commonName: \"api.<cluster_base_domain>\" 2 secretName: <secret_name> 3 dnsNames: - \"api.<cluster_base_domain>\" 4 issuerRef: name: <issuer_name> 5 kind: Issuer", "oc create -f certificate.yaml", "oc get certificate -w -n openshift-config", "apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: <tls_cert> 1 namespace: openshift-ingress spec: isCA: false commonName: \"apps.<cluster_base_domain>\" 2 secretName: <secret_name> 3 dnsNames: - \"apps.<cluster_base_domain>\" 4 - \"*.apps.<cluster_base_domain>\" 5 issuerRef: name: <issuer_name> 6 kind: Issuer", "oc create -f certificate.yaml", "oc get certificate -w -n openshift-ingress", "oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type='merge' -p '{\"spec\":{\"config\":{\"env\":[{\"name\":\"UNSUPPORTED_ADDON_FEATURES\",\"value\":\"IstioCSR=true\"}]}}}'", "oc rollout status deployment/cert-manager-operator-controller-manager -n cert-manager-operator", "deployment \"cert-manager-operator-controller-manager\" successfully rolled out", "apiVersion: cert-manager.io/v1 kind: Issuer 1 metadata: name: selfsigned namespace: <istio_project_name> 2 spec: selfSigned: {} --- apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: istio-ca namespace: <istio_project_name> spec: isCA: true duration: 87600h # 10 years secretName: istio-ca commonName: istio-ca privateKey: algorithm: ECDSA size: 256 subject: organizations: - cluster.local - cert-manager issuerRef: name: selfsigned kind: Issuer 3 group: cert-manager.io --- apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: istio-ca namespace: <istio_project_name> 4 spec: ca: secretName: istio-ca", "oc get issuer istio-ca -n <istio_project_name>", "NAME READY AGE istio-ca True 3m", "oc new-project <istio_csr_project_name>", "apiVersion: operator.openshift.io/v1alpha1 kind: IstioCSR metadata: name: default namespace: <istio_csr_project_name> spec: IstioCSRConfig: certManager: issuerRef: name: istio-ca 1 kind: Issuer 2 group: cert-manager.io istiodTLSConfig: trustDomain: cluster.local istio: namespace: istio-system", "oc create -f IstioCSR.yaml", "oc get deployment -n <istio_csr_project_name>", "NAME READY UP-TO-DATE AVAILABLE AGE cert-manager-istio-csr 1/1 1 1 24s", "oc get pod -n <istio_csr_project_name>", "NAME READY STATUS RESTARTS AGE cert-manager-istio-csr-5c979f9b7c-bv57w 1/1 Running 0 45s", "oc -n <istio_csr_project_name> logs <istio_csr_pod_name>", "oc -n cert-manager-operator logs <cert_manager_operator_pod_name>", "oc -n 
<istio_csr_project_name> delete istiocsrs.operator.openshift.io default", "oc get clusterrolebindings,clusterroles -l \"app=cert-manager-istio-csr,app.kubernetes.io/name=cert-manager-istio-csr\"", "oc get certificate,deployments,services,serviceaccounts -l \"app=cert-manager-istio-csr,app.kubernetes.io/name=cert-manager-istio-csr\" -n <istio_csr_project_name>", "oc get roles,rolebindings -l \"app=cert-manager-istio-csr,app.kubernetes.io/name=cert-manager-istio-csr\" -n <istio_csr_project_name>", "oc -n <istio_csr_project_name> delete <resource_type>/<resource_name>", "oc label namespace cert-manager openshift.io/cluster-monitoring=true", "apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: prometheus-k8s namespace: cert-manager rules: - apiGroups: - \"\" resources: - services - endpoints - pods verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: prometheus-k8s namespace: cert-manager roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: prometheus-k8s subjects: - kind: ServiceAccount name: prometheus-k8s namespace: openshift-monitoring --- apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: app: cert-manager app.kubernetes.io/component: controller app.kubernetes.io/instance: cert-manager app.kubernetes.io/name: cert-manager name: cert-manager namespace: cert-manager spec: endpoints: - interval: 30s port: tcp-prometheus-servicemonitor scheme: http selector: matchLabels: app.kubernetes.io/component: controller app.kubernetes.io/instance: cert-manager app.kubernetes.io/name: cert-manager", "oc create -f monitoring.yaml", "{instance=\"<endpoint>\"} 1", "{endpoint=\"tcp-prometheus-servicemonitor\"}", "oc edit certmanager.operator cluster", "apiVersion: operator.openshift.io/v1alpha1 kind: CertManager spec: logLevel: <log_level> 1", "oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type='merge' -p '{\"spec\":{\"config\":{\"env\":[{\"name\":\"OPERATOR_LOG_LEVEL\",\"value\":\"v\"}]}}}' 1", "oc set env deploy/cert-manager-operator-controller-manager -n cert-manager-operator --list | grep -e OPERATOR_LOG_LEVEL -e container", "deployments/cert-manager-operator-controller-manager, container kube-rbac-proxy OPERATOR_LOG_LEVEL=9 deployments/cert-manager-operator-controller-manager, container cert-manager-operator OPERATOR_LOG_LEVEL=9", "oc logs deploy/cert-manager-operator-controller-manager -n cert-manager-operator", "oc delete deployment -n cert-manager -l app.kubernetes.io/instance=cert-manager", "{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"ad209ce1-fec7-4130-8192-c4cc63f1d8cd\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cert-recovery-controller-lock?timeout=35s\",\"verb\":\"update\",\"user\":{\"username\":\"system:serviceaccount:openshift-kube-controller-manager:localhost-recovery-client\",\"uid\":\"dd4997e3-d565-4e37-80f8-7fc122ccd785\",\"groups\":[\"system:serviceaccounts\",\"system:serviceaccounts:openshift-kube-controller-manager\",\"system:authenticated\"]},\"sourceIPs\":[\"::1\"],\"userAgent\":\"cluster-kube-controller-manager-operator/v0.0.0 (linux/amd64) 
kubernetes/$Format\",\"objectRef\":{\"resource\":\"configmaps\",\"namespace\":\"openshift-kube-controller-manager\",\"name\":\"cert-recovery-controller-lock\",\"uid\":\"5c57190b-6993-425d-8101-8337e48c7548\",\"apiVersion\":\"v1\",\"resourceVersion\":\"574307\"},\"responseStatus\":{\"metadata\":{},\"code\":200},\"requestReceivedTimestamp\":\"2020-04-02T08:27:20.200962Z\",\"stageTimestamp\":\"2020-04-02T08:27:20.206710Z\",\"annotations\":{\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"RBAC: allowed by ClusterRoleBinding \\\"system:openshift:operator:kube-controller-manager-recovery\\\" of ClusterRole \\\"cluster-admin\\\" to ServiceAccount \\\"localhost-recovery-client/openshift-kube-controller-manager\\\"\"}}", "oc adm node-logs --role=master --path=openshift-apiserver/", "ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit-2021-03-09T00-12-19.834.log ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit-2021-03-09T00-11-49.835.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit-2021-03-09T00-13-00.128.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit.log", "oc adm node-logs <node_name> --path=openshift-apiserver/<log_name>", "oc adm node-logs ci-ln-m0wpfjb-f76d1-vnb5x-master-0 --path=openshift-apiserver/audit-2021-03-09T00-12-19.834.log", "{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"381acf6d-5f30-4c7d-8175-c9c317ae5893\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/metrics\",\"verb\":\"get\",\"user\":{\"username\":\"system:serviceaccount:openshift-monitoring:prometheus-k8s\",\"uid\":\"825b60a0-3976-4861-a342-3b2b561e8f82\",\"groups\":[\"system:serviceaccounts\",\"system:serviceaccounts:openshift-monitoring\",\"system:authenticated\"]},\"sourceIPs\":[\"10.129.2.6\"],\"userAgent\":\"Prometheus/2.23.0\",\"responseStatus\":{\"metadata\":{},\"code\":200},\"requestReceivedTimestamp\":\"2021-03-08T18:02:04.086545Z\",\"stageTimestamp\":\"2021-03-08T18:02:04.107102Z\",\"annotations\":{\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"RBAC: allowed by ClusterRoleBinding \\\"prometheus-k8s\\\" of ClusterRole \\\"prometheus-k8s\\\" to ServiceAccount \\\"prometheus-k8s/openshift-monitoring\\\"\"}}", "oc adm node-logs --role=master --path=kube-apiserver/", "ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit-2021-03-09T14-07-27.129.log ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit-2021-03-09T19-24-22.620.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit-2021-03-09T18-37-07.511.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit.log", "oc adm node-logs <node_name> --path=kube-apiserver/<log_name>", "oc adm node-logs ci-ln-m0wpfjb-f76d1-vnb5x-master-0 --path=kube-apiserver/audit-2021-03-09T14-07-27.129.log", "{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"cfce8a0b-b5f5-4365-8c9f-79c1227d10f9\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa\",\"verb\":\"get\",\"user\":{\"username\":\"system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator\",\"uid\":\"2574b041-f3c8-44e6-a057-baef7aa81516\",\"groups\":[\"system:serviceaccounts\",\"system:serviceaccounts:openshift-kube-scheduler-operator\",\"system:authenticated\"]},\"sourceIPs\":[\"10.128.0.8\"],\"userAgent\":\"cluster-kube-scheduler-operator/v0.0.0 (linux/amd64) 
kubernetes/$Format\",\"objectRef\":{\"resource\":\"serviceaccounts\",\"namespace\":\"openshift-kube-scheduler\",\"name\":\"openshift-kube-scheduler-sa\",\"apiVersion\":\"v1\"},\"responseStatus\":{\"metadata\":{},\"code\":200},\"requestReceivedTimestamp\":\"2021-03-08T18:06:42.512619Z\",\"stageTimestamp\":\"2021-03-08T18:06:42.516145Z\",\"annotations\":{\"authentication.k8s.io/legacy-token\":\"system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator\",\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"RBAC: allowed by ClusterRoleBinding \\\"system:openshift:operator:cluster-kube-scheduler-operator\\\" of ClusterRole \\\"cluster-admin\\\" to ServiceAccount \\\"openshift-kube-scheduler-operator/openshift-kube-scheduler-operator\\\"\"}}", "oc adm node-logs --role=master --path=oauth-apiserver/", "ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit-2021-03-09T13-06-26.128.log ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit-2021-03-09T18-23-21.619.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit-2021-03-09T17-36-06.510.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit.log", "oc adm node-logs <node_name> --path=oauth-apiserver/<log_name>", "oc adm node-logs ci-ln-m0wpfjb-f76d1-vnb5x-master-0 --path=oauth-apiserver/audit-2021-03-09T13-06-26.128.log", "{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"dd4c44e2-3ea1-4830-9ab7-c91a5f1388d6\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/apis/user.openshift.io/v1/users/~\",\"verb\":\"get\",\"user\":{\"username\":\"system:serviceaccount:openshift-monitoring:prometheus-k8s\",\"groups\":[\"system:serviceaccounts\",\"system:serviceaccounts:openshift-monitoring\",\"system:authenticated\"]},\"sourceIPs\":[\"10.0.32.4\",\"10.128.0.1\"],\"userAgent\":\"dockerregistry/v0.0.0 (linux/amd64) kubernetes/$Format\",\"objectRef\":{\"resource\":\"users\",\"name\":\"~\",\"apiGroup\":\"user.openshift.io\",\"apiVersion\":\"v1\"},\"responseStatus\":{\"metadata\":{},\"code\":200},\"requestReceivedTimestamp\":\"2021-03-08T17:47:43.653187Z\",\"stageTimestamp\":\"2021-03-08T17:47:43.660187Z\",\"annotations\":{\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"RBAC: allowed by ClusterRoleBinding \\\"basic-users\\\" of ClusterRole \\\"basic-user\\\" to Group \\\"system:authenticated\\\"\"}}", "oc adm node-logs --role=master --path=oauth-server/", "ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit-2022-05-11T18-57-32.395.log ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit-2022-05-11T19-07-07.021.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit-2022-05-11T19-06-51.844.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit.log", "oc adm node-logs <node_name> --path=oauth-server/<log_name>", "oc adm node-logs ci-ln-m0wpfjb-f76d1-vnb5x-master-0 --path=oauth-server/audit-2022-05-11T18-57-32.395.log", "{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"13c20345-f33b-4b7d-b3b6-e7793f805621\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/login\",\"verb\":\"post\",\"user\":{\"username\":\"system:anonymous\",\"groups\":[\"system:unauthenticated\"]},\"sourceIPs\":[\"10.128.2.6\"],\"userAgent\":\"Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 
Firefox/91.0\",\"responseStatus\":{\"metadata\":{},\"code\":302},\"requestReceivedTimestamp\":\"2022-05-11T17:31:16.280155Z\",\"stageTimestamp\":\"2022-05-11T17:31:16.297083Z\",\"annotations\":{\"authentication.openshift.io/decision\":\"error\",\"authentication.openshift.io/username\":\"kubeadmin\",\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"\"}}", "oc adm node-logs node-1.example.com --path=openshift-apiserver/audit.log | jq 'select(.user.username == \"myusername\")'", "oc adm node-logs node-1.example.com --path=openshift-apiserver/audit.log | jq 'select(.userAgent == \"cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/$Format\")'", "oc adm node-logs node-1.example.com --path=kube-apiserver/audit.log | jq 'select(.requestURI | startswith(\"/apis/apiextensions.k8s.io/v1beta1\")) | .userAgent'", "oc adm node-logs node-1.example.com --path=oauth-apiserver/audit.log | jq 'select(.verb != \"get\")'", "oc adm node-logs node-1.example.com --path=oauth-server/audit.log | jq 'select(.annotations[\"authentication.openshift.io/username\"] != null and .annotations[\"authentication.openshift.io/decision\"] == \"error\")'", "oc adm must-gather -- /usr/bin/gather_audit_logs", "tar cvaf must-gather.tar.gz must-gather.local.472290403699006248 1", "oc edit apiserver cluster", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: spec: audit: profile: WriteRequestBodies 1", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 12 1", "oc edit apiserver cluster", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: spec: audit: customRules: 1 - group: system:authenticated:oauth profile: WriteRequestBodies - group: system:authenticated profile: AllRequestBodies profile: Default 2", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 12 1", "oc edit apiserver cluster", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: spec: audit: profile: None", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 12 1", "oc explain <component>.spec.tlsSecurityProfile.<profile> 1", "oc explain apiserver.spec.tlsSecurityProfile.intermediate", "KIND: APIServer VERSION: config.openshift.io/v1 DESCRIPTION: intermediate is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29 and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES256-GCM-SHA384 - ECDHE-RSA-AES256-GCM-SHA384 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - DHE-RSA-AES128-GCM-SHA256 - DHE-RSA-AES256-GCM-SHA384 minTLSVersion: TLSv1.2", "oc explain <component>.spec.tlsSecurityProfile 1", "oc explain ingresscontroller.spec.tlsSecurityProfile", "KIND: IngressController VERSION: operator.openshift.io/v1 RESOURCE: tlsSecurityProfile <Object> DESCRIPTION: FIELDS: custom <> custom is a user-defined TLS security profile. Be extremely careful using a custom profile as invalid configurations can be catastrophic. 
An example custom profile looks like this: ciphers: - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: TLSv1.1 intermediate <> intermediate is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29 and looks like this (yaml): ... 1 modern <> modern is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Modern_compatibility and looks like this (yaml): ... 2 NOTE: Currently unsupported. old <> old is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Old_backward_compatibility and looks like this (yaml): ... 3 type <string>", "apiVersion: operator.openshift.io/v1 kind: IngressController spec: tlsSecurityProfile: old: {} type: Old", "oc edit IngressController default -n openshift-ingress-operator", "apiVersion: operator.openshift.io/v1 kind: IngressController spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11", "oc describe IngressController default -n openshift-ingress-operator", "Name: default Namespace: openshift-ingress-operator Labels: <none> Annotations: <none> API Version: operator.openshift.io/v1 Kind: IngressController Spec: Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom", "apiVersion: config.openshift.io/v1 kind: APIServer spec: tlsSecurityProfile: old: {} type: Old", "oc edit APIServer cluster", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11", "oc describe apiserver cluster", "Name: cluster Namespace: API Version: config.openshift.io/v1 Kind: APIServer Spec: Audit: Profile: Default Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom", "oc describe etcd cluster", "Name: cluster Namespace: API Version: operator.openshift.io/v1 Kind: Etcd Spec: Log Level: Normal Management State: Managed Observed Config: Serving Info: Cipher Suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 Min TLS Version: VersionTLS12", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig spec: tlsSecurityProfile: old: {} type: Old machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\"", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-tls-security-profile spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 4 #", "oc create -f 
<filename>", "oc debug node/<node_name>", "sh-4.4# chroot /host", "sh-4.4# cat /etc/kubernetes/kubelet.conf", "\"kind\": \"KubeletConfiguration\", \"apiVersion\": \"kubelet.config.k8s.io/v1beta1\", # \"tlsCipherSuites\": [ \"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256\", \"TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256\" ], \"tlsMinVersion\": \"VersionTLS12\", #", "oc get pods -n <namespace>", "oc get pods -n workshop", "NAME READY STATUS RESTARTS AGE parksmap-1-4xkwf 1/1 Running 0 2m17s parksmap-1-deploy 0/1 Completed 0 2m22s", "oc get pod parksmap-1-4xkwf -n workshop -o yaml", "apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/network-status: |- [{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.131.0.18\" ], \"default\": true, \"dns\": {} }] openshift.io/deployment-config.latest-version: \"1\" openshift.io/deployment-config.name: parksmap openshift.io/deployment.name: parksmap-1 openshift.io/generated-by: OpenShiftWebConsole openshift.io/scc: restricted-v2 1 seccomp.security.alpha.kubernetes.io/pod: runtime/default 2", "oc -n <workload-namespace> adm policy add-scc-to-user <scc-name> -z <serviceaccount_name>", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: custom-seccomp spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<hash> filesystem: root mode: 0644 path: /var/lib/kubelet/seccomp/seccomp-nostat.json", "seccompProfiles: - localhost/<custom-name>.json 1", "spec: securityContext: seccompProfile: type: Localhost localhostProfile: <custom-name>.json 1", "oc edit apiserver.config.openshift.io cluster", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-07-11T17:35:37Z\" generation: 1 name: cluster resourceVersion: \"907\" selfLink: /apis/config.openshift.io/v1/apiservers/cluster uid: 4b45a8dd-a402-11e9-91ec-0219944e0696 spec: additionalCORSAllowedOrigins: - (?i)//my\\.subdomain\\.domain\\.com(:|\\z) 1", "oc edit apiserver", "spec: encryption: type: aesgcm 1", "oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "EncryptionCompleted All resources encrypted: routes.route.openshift.io", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "EncryptionCompleted All resources encrypted: secrets, configmaps", "oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "EncryptionCompleted All resources encrypted: oauthaccesstokens.oauth.openshift.io, oauthauthorizetokens.oauth.openshift.io", "oc edit apiserver", "spec: encryption: type: identity 1", "oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "DecryptionCompleted Encryption mode set to identity and everything is decrypted", "oc get kubeapiserver -o=jsonpath='{range 
.items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "DecryptionCompleted Encryption mode set to identity and everything is decrypted", "oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "DecryptionCompleted Encryption mode set to identity and everything is decrypted", "oc create secret generic container-security-operator-extra-certs --from-file=quay.crt -n openshift-operators", "oc get packagemanifests container-security-operator -o jsonpath='{range .status.channels[*]}{@.currentCSV} {@.name}{\"\\n\"}{end}' | awk '{print \"STARTING_CSV=\" $1 \" CHANNEL=\" $2 }' | sort -Vr | head -1", "STARTING_CSV=container-security-operator.v3.8.9 CHANNEL=stable-3.8", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: container-security-operator namespace: openshift-operators spec: channel: ${CHANNEL} 1 installPlanApproval: Automatic name: container-security-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: ${STARTING_CSV} 2", "oc apply -f container-security-operator.yaml", "subscription.operators.coreos.com/container-security-operator created", "oc get vuln --all-namespaces", "NAMESPACE NAME AGE default sha256.ca90... 6m56s skynet sha256.ca90... 9m37s", "oc describe vuln --namespace mynamespace sha256.ac50e3752", "Name: sha256.ac50e3752 Namespace: quay-enterprise Spec: Features: Name: nss-util Namespace Name: centos:7 Version: 3.44.0-3.el7 Versionformat: rpm Vulnerabilities: Description: Network Security Services (NSS) is a set of libraries", "oc delete customresourcedefinition imagemanifestvulns.secscan.quay.redhat.com", "customresourcedefinition.apiextensions.k8s.io \"imagemanifestvulns.secscan.quay.redhat.com\" deleted", "echo plaintext | clevis encrypt tang '{\"url\":\"http://localhost:7500\"}' -y >/tmp/encrypted.oldkey", "clevis decrypt </tmp/encrypted.oldkey", "tang-show-keys 7500", "36AHjNH3NZDSnlONLz1-V4ie6t8", "cd /var/db/tang/", "ls -A1", "36AHjNH3NZDSnlONLz1-V4ie6t8.jwk gJZiNPMLRBnyo_ZKfK4_5SrnHYo.jwk", "for key in *.jwk; do mv -- \"$key\" \".$key\"; done", "/usr/libexec/tangd-keygen /var/db/tang", "ls -A1", ".36AHjNH3NZDSnlONLz1-V4ie6t8.jwk .gJZiNPMLRBnyo_ZKfK4_5SrnHYo.jwk Bp8XjITceWSN_7XFfW7WfJDTomE.jwk WOjQYkyK7DxY_T5pMncMO5w0f6E.jwk", "tang-show-keys 7500", "WOjQYkyK7DxY_T5pMncMO5w0f6E", "clevis decrypt </tmp/encrypted.oldkey", "apiVersion: apps/v1 kind: DaemonSet metadata: name: tang-rekey namespace: openshift-machine-config-operator spec: selector: matchLabels: name: tang-rekey template: metadata: labels: name: tang-rekey spec: containers: - name: tang-rekey image: registry.access.redhat.com/ubi9/ubi-minimal:latest imagePullPolicy: IfNotPresent command: - \"/sbin/chroot\" - \"/host\" - \"/bin/bash\" - \"-ec\" args: - | rm -f /tmp/rekey-complete || true echo \"Current tang pin:\" clevis-luks-list -d $ROOT_DEV -s 1 echo \"Applying new tang pin: $NEW_TANG_PIN\" clevis-luks-edit -f -d $ROOT_DEV -s 1 -c \"$NEW_TANG_PIN\" echo \"Pin applied successfully\" touch /tmp/rekey-complete sleep infinity readinessProbe: exec: command: - cat - /host/tmp/rekey-complete initialDelaySeconds: 30 periodSeconds: 10 env: - name: ROOT_DEV value: /dev/disk/by-partlabel/root - name: NEW_TANG_PIN value: >- {\"t\":1,\"pins\":{\"tang\":[ {\"url\":\"http://tangserver01:7500\",\"thp\":\"WOjQYkyK7DxY_T5pMncMO5w0f6E\"},
{\"url\":\"http://tangserver02:7500\",\"thp\":\"I5Ynh2JefoAO3tNH9TgI4obIaXI\"}, {\"url\":\"http://tangserver03:7500\",\"thp\":\"38qWZVeDKzCPG9pHLqKzs6k1ons\"} ]}} volumeMounts: - name: hostroot mountPath: /host securityContext: privileged: true volumes: - name: hostroot hostPath: path: / nodeSelector: kubernetes.io/os: linux priorityClassName: system-node-critical restartPolicy: Always serviceAccount: machine-config-daemon serviceAccountName: machine-config-daemon", "oc apply -f tang-rekey.yaml", "oc get -n openshift-machine-config-operator ds tang-rekey", "NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE tang-rekey 1 1 0 1 0 kubernetes.io/os=linux 11s", "oc get -n openshift-machine-config-operator ds tang-rekey", "NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE tang-rekey 1 1 1 1 1 kubernetes.io/os=linux 13h", "echo \"okay\" | clevis encrypt tang '{\"url\":\"http://tangserver02:7500\",\"thp\":\"badthumbprint\"}' | clevis decrypt", "Unable to fetch advertisement: 'http://tangserver02:7500/adv/badthumbprint'!", "echo \"okay\" | clevis encrypt tang '{\"url\":\"http://tangserver03:7500\",\"thp\":\"goodthumbprint\"}' | clevis decrypt", "okay", "oc get pods -A | grep tang-rekey", "openshift-machine-config-operator tang-rekey-7ks6h 1/1 Running 20 (8m39s ago) 89m", "oc logs tang-rekey-7ks6h", "Current tang pin: 1: sss '{\"t\":1,\"pins\":{\"tang\":[{\"url\":\"http://10.46.55.192:7500\"},{\"url\":\"http://10.46.55.192:7501\"},{\"url\":\"http://10.46.55.192:7502\"}]}}' Applying new tang pin: {\"t\":1,\"pins\":{\"tang\":[ {\"url\":\"http://tangserver01:7500\",\"thp\":\"WOjQYkyK7DxY_T5pMncMO5w0f6E\"}, {\"url\":\"http://tangserver02:7500\",\"thp\":\"I5Ynh2JefoAO3tNH9TgI4obIaXI\"}, {\"url\":\"http://tangserver03:7500\",\"thp\":\"38qWZVeDKzCPG9pHLqKzs6k1ons\"} ]}} Updating binding Binding edited successfully Pin applied successfully", "cd /var/db/tang/", "ls -A1", ".36AHjNH3NZDSnlONLz1-V4ie6t8.jwk .gJZiNPMLRBnyo_ZKfK4_5SrnHYo.jwk Bp8XjITceWSN_7XFfW7WfJDTomE.jwk WOjQYkyK7DxY_T5pMncMO5w0f6E.jwk", "rm .*.jwk", "ls -A1", "Bp8XjITceWSN_7XFfW7WfJDTomE.jwk WOjQYkyK7DxY_T5pMncMO5w0f6E.jwk", "tang-show-keys 7500", "WOjQYkyK7DxY_T5pMncMO5w0f6E", "clevis decrypt </tmp/encryptValidation", "Error communicating with the server!", "sudo clevis luks pass -d /dev/vda2 -s 1", "sudo clevis luks regen -d /dev/vda2 -s 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/security_and_compliance/index
B.96.5. RHSA-2011:0475 - Critical: thunderbird security update
B.96.5. RHSA-2011:0475 - Critical: thunderbird security update An updated thunderbird package that fixes several security issues is now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having critical security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE link(s) associated with each description below. Mozilla Thunderbird is a standalone mail and newsgroup client. CVE-2011-0080 , CVE-2011-0081 Several flaws were found in the processing of malformed HTML content. An HTML mail message containing malicious content could possibly lead to arbitrary code execution with the privileges of the user running Thunderbird. CVE-2011-0078 An arbitrary memory write flaw was found in the way Thunderbird handled out-of-memory conditions. If all memory was consumed when a user viewed a malicious HTML mail message, it could possibly lead to arbitrary code execution with the privileges of the user running Thunderbird. CVE-2011-0077 An integer overflow flaw was found in the way Thunderbird handled the HTML frameset tag. An HTML mail message with a frameset tag containing large values for the "rows" and "cols" attributes could trigger this flaw, possibly leading to arbitrary code execution with the privileges of the user running Thunderbird. CVE-2011-0075 A flaw was found in the way Thunderbird handled the HTML iframe tag. An HTML mail message with an iframe tag containing a specially-crafted source address could trigger this flaw, possibly leading to arbitrary code execution with the privileges of the user running Thunderbird. CVE-2011-0074 A flaw was found in the way Thunderbird displayed multiple marquee elements. A malformed HTML mail message could cause Thunderbird to execute arbitrary code with the privileges of the user running Thunderbird. CVE-2011-0073 A flaw was found in the way Thunderbird handled the nsTreeSelection element. Malformed content could cause Thunderbird to execute arbitrary code with the privileges of the user running Thunderbird. CVE-2011-0071 A directory traversal flaw was found in the Thunderbird resource:// protocol handler. Malicious content could cause Thunderbird to access arbitrary files accessible to the user running Thunderbird. CVE-2011-0070 A double free flaw was found in the way Thunderbird handled "application/http-index-format" documents. A malformed HTTP response could cause Thunderbird to execute arbitrary code with the privileges of the user running Thunderbird. All Thunderbird users should upgrade to this updated package, which resolves these issues. All running instances of Thunderbird must be restarted for the update to take effect.
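On a system registered with Red Hat Subscription Management, an update such as this is typically applied with yum; the command below is an illustrative sketch rather than part of the original advisory:
~]# yum update thunderbird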
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/rhsa-2011-0475
6.9. Snapshots
6.9. Snapshots 6.9.1. Creating a Snapshot of a Virtual Machine A snapshot is a view of a virtual machine's operating system and applications on any or all available disks at a given point in time. Take a snapshot of a virtual machine before you make a change to it that may have unintended consequences. You can use a snapshot to restore a virtual machine to a previous state. Creating a Snapshot of a Virtual Machine In the VM Portal: Open a virtual machine. In the Snapshots panel, click +Create Snapshot . A snapshot is added to the panel, including all attached disks. In the Administration Portal: Click Compute Virtual Machines . Click a virtual machine's name to go to the details view. Click the Snapshots tab and click Create . Enter a description for the snapshot. Select Disks to include using the check boxes. Note If no disks are selected, a partial snapshot of the virtual machine, without a disk, is created. You can preview this snapshot to view the configuration of the virtual machine. Note that committing a partial snapshot will result in a virtual machine without a disk. Select Save Memory to include a running virtual machine's memory in the snapshot. Click OK . The virtual machine's operating system and applications on the selected disk(s) are stored in a snapshot that can be previewed or restored. The snapshot is created with a status of Locked , which changes to Ok . When you click the snapshot, its details are shown on the General , Disks , Network Interfaces , and Installed Applications drop-down views in the Snapshots tab. 6.9.2. Using a Snapshot to Restore a Virtual Machine A snapshot can be used to restore a virtual machine to its previous state. Using Snapshots to Restore Virtual Machines In the VM Portal: Shut down the virtual machine. In the Snapshots panel, click the Restore Snapshot icon for the snapshot you want to restore. The snapshot is loaded. In the Administration Portal: Click Compute Virtual Machines and select a virtual machine. Click the name of the virtual machine to go to the details view. Shut down the virtual machine. Click the Snapshots tab to list the available snapshots. Select a snapshot to restore in the upper pane. The snapshot details display in the lower pane. Click the Preview drop-down menu button and select Custom . Use the check boxes to select the VM Configuration , Memory , and disk(s) you want to restore, then click OK . This allows you to create and restore from a customized snapshot using the configuration and disk(s) from multiple snapshots. The status of the snapshot changes to Preview Mode . The status of the virtual machine briefly changes to Image Locked before returning to Down . Start the virtual machine; it runs using the disk image of the snapshot. Click Commit to permanently restore the virtual machine to the condition of the snapshot. Any subsequent snapshots are erased. Alternatively, click the Undo button to deactivate the snapshot and return the virtual machine to its previous state. 6.9.3. Creating a Virtual Machine from a Snapshot You can use a snapshot to create another virtual machine. Creating a Virtual Machine from a Snapshot Click Compute Virtual Machines and select a virtual machine. Click the virtual machine's name to go to the details view. Click the Snapshots tab to list the available snapshots. Select a snapshot in the list displayed and click Clone . Enter the Name of the virtual machine. Click OK . After a short time, the cloned virtual machine appears in the Virtual Machines tab in the navigation pane with a status of Image Locked . 
The virtual machine remains in this state until Red Hat Virtualization completes the creation of the virtual machine. A virtual machine with a preallocated 20 GB hard drive takes about fifteen minutes to create. Sparsely-allocated virtual disks take less time to create than do preallocated virtual disks. When the virtual machine is ready to use, its status changes from Image Locked to Down in Compute Virtual Machines . 6.9.4. Deleting a Snapshot You can delete a virtual machine snapshot and permanently remove it from your Red Hat Virtualization environment. Deleting a Snapshot In the VM Portal: Open a virtual machine. In the Snapshots panel, click the Delete Snapshot icon of the snapshot you want to delete. In the Administration Portal: Click Compute Virtual Machines . Click the virtual machine's name to go to the details view. Click the Snapshots tab to list the snapshots for that virtual machine. Select the snapshot to delete. Click Delete . Click OK . Note If the deletion fails, fix the underlying problem (for example, a failed host, an inaccessible storage device, or a temporary network issue) and try again.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/virtual_machine_management_guide/sect-Snapshots
Chapter 6. Preparing a UEFI HTTP installation source
Chapter 6. Preparing a UEFI HTTP installation source As an administrator of a server on a local network, you can configure an HTTP server to enable HTTP boot and network installation for other systems on your network. 6.1. Network install overview A network installation allows you to install Red Hat Enterprise Linux to a system that has access to an installation server. At a minimum, two systems are required for a network installation: Server A system running a DHCP server, an HTTP, HTTPS, FTP, or NFS server, and in the PXE boot case, a TFTP server. Although each server can run on a different physical system, the procedures in this section assume a single system is running all servers. Client The system to which you are installing Red Hat Enterprise Linux. Once installation starts, the client queries the DHCP server, receives the boot files from the HTTP or TFTP server, and downloads the installation image from the HTTP, HTTPS, FTP or NFS server. Unlike other installation methods, the client does not require any physical boot media for the installation to start. To boot a client from the network, enable network boot in the firmware or in a quick boot menu on the client. On some hardware, the option to boot from a network might be disabled, or not available. The workflow steps to prepare to install Red Hat Enterprise Linux from a network using HTTP or PXE are as follows: Procedure Export the installation ISO image or the installation tree to an NFS, HTTPS, HTTP, or FTP server. Configure the HTTP or TFTP server and DHCP server, and start the HTTP or TFTP service on the server. Boot the client and start the installation. You can choose between the following network boot protocols: HTTP Red Hat recommends using HTTP boot if your client UEFI supports it. HTTP boot is usually more reliable. PXE (TFTP) PXE boot is more widely supported by client systems, but sending the boot files over this protocol might be slow and result in timeout failures. Additional resources Red Hat Satellite product documentation 6.2. Configuring the DHCPv4 server for network boot Enable the DHCP version 4 (DHCPv4) service on your server, so that it can provide network boot functionality. Prerequisites You are preparing network installation over the IPv4 protocol. For IPv6, see Configuring the DHCPv6 server for network boot instead. Find the network addresses of the server. In the following examples, the server has a network card with this configuration: IPv4 address 192.168.124.2/24 IPv4 gateway 192.168.124.1 Procedure Install the DHCP server: Set up a DHCPv4 server. Enter the following configuration in the /etc/dhcp/dhcpd.conf file. Replace the addresses to match your network card. Start the DHCPv4 service: 6.3. Configuring the DHCPv6 server for network boot Enable the DHCP version 6 (DHCPv6) service on your server, so that it can provide network boot functionality. Prerequisites You are preparing network installation over the IPv6 protocol. For IPv4, see Configuring the DHCPv4 server for network boot instead. Find the network addresses of the server. In the following examples, the server has a network card with this configuration: IPv6 address fd33:eb1b:9b36::2/64 IPv6 gateway fd33:eb1b:9b36::1 Procedure Install the DHCP server: Set up a DHCPv6 server. Enter the following configuration in the /etc/dhcp/dhcpd6.conf file. Replace the addresses to match your network card. Start the DHCPv6 service: If DHCPv6 packets are dropped by the RP filter in the firewall, check its log. 
If the log contains the rpfilter_DROP entry, disable the filter using the following configuration in the /etc/firewalld/firewalld.conf file: 6.4. Configuring the HTTP server for HTTP boot You must install and enable the httpd service on your server so that the server can provide HTTP boot resources on your network. Prerequisites Find the network addresses of the server. In the following examples, the server has a network card with the 192.168.124.2 IPv4 address. Procedure Install the HTTP server: Create the /var/www/html/redhat/ directory: Download the RHEL DVD ISO file. See All Red Hat Enterprise Linux Downloads . Create a mount point for the ISO file: Mount the ISO file: Copy the boot loader, kernel, and initramfs from the mounted ISO file into your HTML directory: Make the boot loader configuration editable: Edit the /var/www/html/redhat/EFI/BOOT/grub.cfg file and replace its content with the following: In this file, replace the following strings: RHEL-9-3-0-BaseOS-x86_64 and Red Hat Enterprise Linux 9.3 Edit the version number to match the version of RHEL that you downloaded. 192.168.124.2 Replace with the IP address of your server. Make the EFI boot file executable: Open ports in the firewall to allow HTTP (80), DHCP (67, 68) and DHCPv6 (546, 547) traffic: This command enables temporary access until the next server reboot. Optional: To enable permanent access, add the --permanent option to the command. Reload firewall rules: Start the HTTP server: Make the html directory and its content readable and executable: Restore the SELinux context of the html directory:
[ "dnf install dhcp-server", "option architecture-type code 93 = unsigned integer 16; subnet 192.168.124.0 netmask 255.255.255.0 { option routers 192.168.124.1 ; option domain-name-servers 192.168.124.1 ; range 192.168.124.100 192.168.124.200 ; class \"pxeclients\" { match if substring (option vendor-class-identifier, 0, 9) = \"PXEClient\"; next-server 192.168.124.2 ; if option architecture-type = 00:07 { filename \"redhat/EFI/BOOT/BOOTX64.EFI\"; } else { filename \"pxelinux/pxelinux.0\"; } } class \"httpclients\" { match if substring (option vendor-class-identifier, 0, 10) = \"HTTPClient\"; option vendor-class-identifier \"HTTPClient\"; filename \"http:// 192.168.124.2 /redhat/EFI/BOOT/BOOTX64.EFI\"; } }", "systemctl enable --now dhcpd", "dnf install dhcp-server", "option dhcp6.bootfile-url code 59 = string; option dhcp6.vendor-class code 16 = {integer 32, integer 16, string}; subnet6 fd33:eb1b:9b36::/64 { range6 fd33:eb1b:9b36::64 fd33:eb1b:9b36::c8 ; class \"PXEClient\" { match substring (option dhcp6.vendor-class, 6, 9); } subclass \"PXEClient\" \"PXEClient\" { option dhcp6.bootfile-url \"tftp:// [fd33:eb1b:9b36::2] /redhat/EFI/BOOT/BOOTX64.EFI\"; } class \"HTTPClient\" { match substring (option dhcp6.vendor-class, 6, 10); } subclass \"HTTPClient\" \"HTTPClient\" { option dhcp6.bootfile-url \"http:// [fd33:eb1b:9b36::2] /redhat/EFI/BOOT/BOOTX64.EFI\"; option dhcp6.vendor-class 0 10 \"HTTPClient\"; } }", "systemctl enable --now dhcpd6", "IPv6_rpfilter=no", "dnf install httpd", "mkdir -p /var/www/html/redhat/", "mkdir -p /var/www/html/redhat/iso/", "mount -o loop,ro -t iso9660 path-to-RHEL-DVD.iso /var/www/html/redhat/iso", "cp -r /var/www/html/redhat/iso/images /var/www/html/redhat/ cp -r /var/www/html/redhat/iso/EFI /var/www/html/redhat/", "chmod 644 /var/www/html/redhat/EFI/BOOT/grub.cfg", "set default=\"1\" function load_video { insmod efi_gop insmod efi_uga insmod video_bochs insmod video_cirrus insmod all_video } load_video set gfxpayload=keep insmod gzio insmod part_gpt insmod ext2 set timeout=60 # END /etc/grub.d/00_header # search --no-floppy --set=root -l ' RHEL-9-3-0-BaseOS-x86_64 ' # BEGIN /etc/grub.d/10_linux # menuentry 'Install Red Hat Enterprise Linux 9.3 ' --class fedora --class gnu-linux --class gnu --class os { linuxefi ../../images/pxeboot/vmlinuz inst.repo=http:// 192.168.124.2 /redhat/iso quiet initrdefi ../../images/pxeboot/initrd.img } menuentry 'Test this media & install Red Hat Enterprise Linux 9.3 ' --class fedora --class gnu-linux --class gnu --class os { linuxefi ../../images/pxeboot/vmlinuz inst.repo=http:// 192.168.124.2 /redhat/iso quiet initrdefi ../../images/pxeboot/initrd.img } submenu 'Troubleshooting -->' { menuentry 'Install Red Hat Enterprise Linux 9.3 in text mode' --class fedora --class gnu-linux --class gnu --class os { linuxefi ../../images/pxeboot/vmlinuz inst.repo=http:// 192.168.124.2 /redhat/iso inst.text quiet initrdefi ../../images/pxeboot/initrd.img } menuentry 'Rescue a Red Hat Enterprise Linux system' --class fedora --class gnu-linux --class gnu --class os { linuxefi ../../images/pxeboot/vmlinuz inst.repo=http:// 192.168.124.2 /redhat/iso inst.rescue quiet initrdefi ../../images/pxeboot/initrd.img } }", "chmod 755 /var/www/html/redhat/EFI/BOOT/BOOTX64.EFI", "firewall-cmd --zone public --add-port={80/tcp,67/udp,68/udp,546/udp,547/udp}", "firewall-cmd --reload", "systemctl enable --now httpd", "chmod -cR u=rwX,g=rX,o=rX /var/www/html", "restorecon -FvvR /var/www/html" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/automatically_installing_rhel/preparing-to-install-from-the-network-using-http_rhel-installer
Chapter 9. Yum
Chapter 9. Yum Yum is the Red Hat package manager that is able to query for information about available packages, fetch packages from repositories, install and uninstall them, and update an entire system to the latest available version. Yum performs automatic dependency resolution when updating, installing, or removing packages, and thus is able to automatically determine, fetch, and install all available dependent packages. Yum can be configured with new, additional repositories, or package sources , and also provides many plug-ins which enhance and extend its capabilities. Yum is able to perform many of the same tasks that RPM can; additionally, many of the command-line options are similar. Yum enables easy and simple package management on a single machine or on groups of them. The following sections assume your system was registered with Red Hat Subscription Management during installation as described in the Red Hat Enterprise Linux 7 Installation Guide . If your system is not subscribed, see Chapter 7, Registering the System and Managing Subscriptions . Important Yum provides secure package management by enabling GPG (GNU Privacy Guard; also known as GnuPG) signature verification on GPG-signed packages to be turned on for all package repositories (package sources), or for individual repositories. When signature verification is enabled, yum will refuse to install any packages not GPG-signed with the correct key for that repository. This means that you can trust that the RPM packages you download and install on your system are from a trusted source, such as Red Hat, and were not modified during transfer. See Section 9.5, "Configuring Yum and Yum Repositories" for details on enabling signature-checking with yum. Yum also enables you to easily set up your own repositories of RPM packages for download and installation on other machines. When possible, yum uses parallel download of multiple packages and metadata to speed up downloading. Learning yum is a worthwhile investment because it is often the fastest way to perform system administration tasks, and it provides capabilities beyond those provided by the PackageKit graphical package management tools. Note You must have superuser privileges in order to use yum to install, update or remove packages on your system. All examples in this chapter assume that you have already obtained superuser privileges by using either the su or sudo command. 9.1. Checking For and Updating Packages Yum enables you to check if your system has any updates waiting to be applied. You can list packages that need to be updated and update them as a whole, or you can update a selected individual package. 9.1.1. Checking For Updates To see which installed packages on your system have updates available, use the following command: Example 9.1. Example output of the yum check-update command The output of yum check-update can look as follows: The packages in this output are listed as having updates available. The first package in the list is dracut . Each line in the example output consists of several fields; in the case of dracut : dracut - the name of the package, x86_64 - the CPU architecture the package was built for, 033 - the version of the updated package to be installed, 360.el7 - the release of the updated package, _2 - a build version, added as part of a z-stream update, rhel-7-server-rpms - the repository in which the updated package is located.
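For illustration, output of this form may be produced; in the following hedged reconstruction, the dracut line follows the field breakdown above exactly, while the remaining package versions are illustrative only:
~]# yum check-update
Loaded plugins: langpacks, product-id, subscription-manager
dracut.x86_64                033-360.el7_2          rhel-7-server-rpms
kernel.x86_64                3.10.0-229.el7         rhel-7-server-rpms
rpm.x86_64                   4.11.1-25.el7          rhel-7-server-rpms
rpm-libs.x86_64              4.11.1-25.el7          rhel-7-server-rpms
rpm-python.x86_64            4.11.1-25.el7          rhel-7-server-rpms
yum.noarch                   3.4.3-132.el7          rhel-7-server-rpms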
The output also shows that we can update the kernel (the kernel package), yum and RPM themselves (the yum and rpm packages), as well as their dependencies (such as the rpm-libs and rpm-python packages), all using the yum command. 9.1.2. Updating Packages You can choose to update a single package, multiple packages, or all packages at once. If any dependencies of the package or packages you update have updates available themselves, then they are updated too. Updating a Single Package To update a single package, run the following command as root : Example 9.2. Updating the rpm package To update the rpm package, type: This output contains several items of interest: Loaded plugins: langpacks, product-id, subscription-manager - Yum always informs you which yum plug-ins are installed and enabled. See Section 9.6, "Yum Plug-ins" for general information on yum plug-ins, or Section 9.6.3, "Working with Yum Plug-ins" for descriptions of specific plug-ins. rpm.x86_64 - you can download and install a new rpm package as well as its dependencies. A transaction check is performed for each of these packages. Yum presents the update information and then prompts you for confirmation of the update; yum runs interactively by default. If you already know which transactions the yum command plans to perform, you can use the -y option to automatically answer yes to any questions that yum asks (in which case it runs non-interactively). However, you should always examine which changes yum plans to make to the system so that you can easily troubleshoot any problems that might arise. You can also choose to download the package without installing it. To do so, select the d option at the download prompt. This launches a background download of the selected package. If a transaction fails, you can view yum transaction history by using the yum history command as described in Section 9.4, "Working with Transaction History" . Important Yum always installs a new kernel regardless of whether you are using the yum update or yum install command. When using RPM , on the other hand, it is important to use the rpm -i kernel command which installs a new kernel instead of rpm -U kernel which replaces the current kernel. Similarly, it is possible to update a package group. Type as root : Here, replace group_name with the name of the package group you want to update. For more information on package groups, see Section 9.3, "Working with Package Groups" . Yum also offers the upgrade command, which is equivalent to update with the obsoletes configuration option enabled (see Section 9.5.1, "Setting [main] Options" ). By default, obsoletes is turned on in /etc/yum.conf , which makes these two commands equivalent. Updating All Packages and Their Dependencies To update all packages and their dependencies, use the yum update command without any arguments: Updating Security-Related Packages If packages have security updates available, you can update only these packages to their latest versions. Type as root : You can also update packages only to versions containing the latest security updates. Type as root : For example, assume that: the kernel-3.10.0-1 package is installed on your system; the kernel-3.10.0-2 package was released as a security update; the kernel-3.10.0-3 package was released as a bug fix update. Then yum update-minimal --security updates the package to kernel-3.10.0-2 , and yum update --security updates the package to kernel-3.10.0-3 .
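For reference, the update commands described in this section take the following forms, where group_name is a placeholder for a real package group name:
~]# yum update rpm
~]# yum group update group_name
~]# yum update
~]# yum update --security
~]# yum update-minimal --security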
Automating Package Updating To refresh the package database and download updates automatically, you can use the yum-cron service. For more information, see Section 9.7, "Automatically Refreshing Package Database and Downloading Updates with Yum-cron" . 9.1.3. Upgrading the System Off-line with ISO and Yum For systems that are disconnected from the Internet or Red Hat Network, using the yum update command with the Red Hat Enterprise Linux installation ISO image is an easy and quick way to upgrade systems to the latest minor version. The following steps illustrate the upgrading process: Create a target directory to mount your ISO image. This directory is not automatically created when mounting, so create it before proceeding to the next step. As root , type: Replace mount_dir with a path to the mount directory. Typically, users create it as a subdirectory in the /media directory. Mount the Red Hat Enterprise Linux 7 installation ISO image to the previously created target directory. As root , type: Replace iso_name with a path to your ISO image and mount_dir with a path to the target directory. Here, the -o loop option is required to mount the file as a block device. Copy the media.repo file from the mount directory to the /etc/yum.repos.d/ directory. Note that configuration files in this directory must have the .repo extension to function properly. This creates a configuration file for the yum repository. Replace new.repo with the filename, for example rhel7.repo . Edit the new configuration file so that it points to the Red Hat Enterprise Linux installation ISO. Add the following line into the /etc/yum.repos.d/ new.repo file: Replace mount_dir with a path to the mount point. Update all yum repositories including /etc/yum.repos.d/ new.repo created in the previous steps. As root , type: This upgrades your system to the version provided by the mounted ISO image. After successful upgrade, you can unmount the ISO image. As root , type: where mount_dir is a path to your mount directory. Also, you can remove the mount directory created in the first step. As root , type: If you will not use the previously created configuration file for another installation or update, you can remove it. As root , type: Example 9.3. Upgrading from Red Hat Enterprise Linux 7.0 to 7.1 If required to upgrade a system without access to the Internet using an ISO image with the newer version of the system, called for example rhel-server-7.1-x86_64-dvd.iso , create a target directory for mounting, such as /media/rhel7/ . As root , change into the directory with your ISO image and type: Then set up a yum repository for your image by copying the media.repo file from the mount directory: To make yum recognize the mount point as a repository, add the following line into the /etc/yum.repos.d/rhel7.repo copied in the previous step: Now, updating the yum repository will upgrade your system to a version provided by rhel-server-7.1-x86_64-dvd.iso . As root , execute: When your system is successfully upgraded, you can unmount the image, remove the target directory and the configuration file:
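A consolidated sketch of the commands behind Example 9.3; the sequence mirrors the numbered steps above, and the baseurl line assumes the standard file:// repository syntax:
~]# mkdir /media/rhel7/
~]# mount -o loop rhel-server-7.1-x86_64-dvd.iso /media/rhel7/
~]# cp /media/rhel7/media.repo /etc/yum.repos.d/rhel7.repo
Then append the following line to /etc/yum.repos.d/rhel7.repo :
baseurl=file:///media/rhel7/
~]# yum update
~]# umount /media/rhel7/
~]# rmdir /media/rhel7/
~]# rm /etc/yum.repos.d/rhel7.repo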
9.2. Working with Packages Yum enables you to perform a complete set of operations with software packages, including searching for packages, viewing information about them, installing and removing. 9.2.1. Searching Packages You can search all RPM package names, descriptions and summaries by using the following command: Replace term with a package name you want to search. Example 9.4. Searching for packages matching a specific string To list all packages that match "vim", "gvim", or "emacs", type: The yum search command is useful for searching for packages you do not know the name of, but for which you know a related term. Note that by default, yum search returns matches in package name and summary, which makes the search faster. Use the yum search all command for a more exhaustive but slower search. Filtering the Results All of yum's list commands allow you to filter the results by appending one or more glob expressions as arguments. Glob expressions are normal strings of characters which contain one or more of the wildcard characters * (which expands to match any character subset) and ? (which expands to match any single character). Be careful to escape the glob expressions when passing them as arguments to a yum command, otherwise the Bash shell will interpret these expressions as pathname expansions , and potentially pass all files in the current directory that match the glob expressions to yum . To make sure the glob expressions are passed to yum as intended, use one of the following methods: escape the wildcard characters by preceding them with a backslash character double-quote or single-quote the entire glob expression. Examples in the following section demonstrate usage of both these methods. 9.2.2. Listing Packages To list information on all installed and available packages type the following at a shell prompt: To list installed and available packages that match inserted glob expressions use the following command: Example 9.5. Listing ABRT-related packages Packages with various ABRT add-ons and plug-ins either begin with "abrt-addon-", or "abrt-plugin-". To list these packages, type the following command at a shell prompt. Note how the wildcard characters are escaped with a backslash character: To list all packages installed on your system use the installed keyword. The rightmost column in the output lists the repository from which the package was retrieved. Example 9.6. Listing all installed versions of the krb package The following example shows how to list all installed packages that begin with "krb" followed by exactly one character and a hyphen. This is useful when you want to list all versions of a certain component as these are distinguished by numbers. The entire glob expression is quoted to ensure proper processing. To list all packages in all enabled repositories that are available to install, use the command in the following form: Example 9.7. Listing available gstreamer plug-ins For instance, to list all available packages with names that contain "gstreamer" and then "plugin", run the following command: Listing Repositories To list the repository ID, name, and number of packages for each enabled repository on your system, use the following command: To list more information about these repositories, add the -v option. With this option enabled, information including the file name, overall size, date of the last update, and base URL are displayed for each listed repository. As an alternative, you can use the repoinfo command that produces the same output. To list both enabled and disabled repositories use the following command. A status column is added to the output list to show which of the repositories are enabled. By passing disabled as a first argument, you can reduce the command output to disabled repositories. For further specification you can pass the ID or name of repositories or related glob_expressions as arguments. Note that if there is an exact match between the repository ID or name and the inserted argument, this repository is listed even if it does not pass the enabled or disabled filter.
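Collected as a digest, the search and listing commands described in this section look like this; glob_expression is a placeholder, and the escaping and quoting follow the two methods explained under Filtering the Results:
~]# yum search vim gvim emacs
~]# yum list all
~]# yum list glob_expression
~]# yum list abrt-addon\* abrt-plugin\*
~]# yum list installed "krb?-*"
~]# yum list available gstreamer\*plugin\*
~]# yum repolist
~]# yum repolist -v
~]# yum repolist all
~]# yum repolist disabled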
9.2.3. Displaying Package Information To display information about one or more packages, use the following command (glob expressions are valid here as well): Replace package_name with the name of the package. Example 9.8. Displaying information on the abrt package To display information about the abrt package, type: The yum info package_name command is similar to the rpm -q --info package_name command, but provides as additional information the name of the yum repository the RPM package was installed from (look for the From repo: line in the output). Using yumdb You can also query the yum database for alternative and useful information about a package by using the following command: This command provides additional information about a package, including the check sum of the package (and the algorithm used to produce it, such as SHA-256), the command given on the command line that was invoked to install the package (if any), and the reason why the package is installed on the system (where user indicates it was installed by the user, and dep means it was brought in as a dependency). Example 9.9. Querying yumdb for information on the yum package To display additional information about the yum package, type: For more information on the yumdb command, see the yumdb (8) manual page. 9.2.4. Installing Packages To install a single package and all of its non-installed dependencies, enter a command in the following form as root : You can also install multiple packages simultaneously by appending their names as arguments. To do so, type as root : If you are installing packages on a multilib system, such as an AMD64 or Intel 64 machine, you can specify the architecture of the package (as long as it is available in an enabled repository) by appending .arch to the package name: Example 9.10. Installing packages on multilib system To install the sqlite package for the i686 architecture, type: You can use glob expressions to quickly install multiple similarly named packages. Execute as root : Example 9.11. Installing all audacious plugins Glob expressions are useful when you want to install several packages with similar names. To install all audacious plug-ins, use the command in the following form: In addition to package names and glob expressions, you can also provide file names to yum install . If you know the name of the binary you want to install, but not its package name, you can give yum install the path name. As root , type: Yum then searches through its package lists, finds the package which provides /usr/sbin/named , if any, and prompts you as to whether you want to install it. As you can see in the above examples, the yum install command does not require strictly defined arguments. It can process various formats of package names and glob expressions, which makes installation easier for users. On the other hand, it takes some time until yum parses the input correctly, especially if you specify a large number of packages. To optimize the package search, you can use the following commands to explicitly define how to parse the arguments: With install-n , yum interprets name as the exact name of the package. The install-na command tells yum that the subsequent argument contains the package name and architecture divided by the dot character. With install-nevra , yum will expect an argument in the form name-epoch:version-release.architecture . Similarly, you can use yum remove-n , yum remove-na , and yum remove-nevra when searching for packages to be removed.
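As a hedged digest of the commands just described, with name, architecture, and similar tokens as placeholders:
~]# yum info abrt
~]# yumdb info yum
~]# yum install sqlite.i686
~]# yum install audacious-plugins-\*
~]# yum install /usr/sbin/named
~]# yum install-n name
~]# yum install-na name.architecture
~]# yum install-nevra name-epoch:version-release.architecture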
Note If you know you want to install the package that contains the named binary, but you do not know in which bin/ or sbin/ directory the file is installed, use the yum provides command with a glob expression: yum provides "*/ file_name " is a useful way to find the packages that contain file_name . Example 9.12. Installation Process The following example provides an overview of installation with use of yum . To download and install the latest version of the httpd package, execute as root : After executing the above command, yum loads the necessary plug-ins and runs the transaction check. In this case, httpd is already installed. Since the installed package is older than the latest currently available version, it will be updated. The same applies to the httpd-tools package that httpd depends on. Then, a transaction summary is displayed: In this step yum prompts you to confirm the installation. Apart from y (yes) and N (no) options, you can choose d (download only) to download the packages but not to install them directly. If you choose y , the installation proceeds with the following messages until it is finished successfully. To install a previously downloaded package from the local directory on your system, use the following command: Replace path with the path to the package you want to install. 9.2.5. Downloading Packages As shown in Example 9.12, "Installation Process" , at a certain point of the installation process you are prompted to confirm the installation with the following message: With the d option, yum downloads the packages without installing them immediately. You can install these packages later offline with the yum localinstall command or you can share them with a different device. Downloaded packages are saved in one of the subdirectories of the cache directory, by default /var/cache/yum/$basearch/$releasever/packages/ . The downloading proceeds in background mode so that you can use yum for other operations in parallel. 9.2.6. Removing Packages Similarly to package installation, yum enables you to uninstall packages. To uninstall a particular package, as well as any packages that depend on it, run the following command as root : As when you install multiple packages, you can remove several at once by adding more package names to the command. Example 9.13. Removing several packages To remove totem , type the following at a shell prompt: Similar to install , remove can take these arguments: package names glob expressions file lists package provides Warning Yum is not able to remove a package without also removing packages which depend on it. This type of operation, which can only be performed by RPM , is not advised, and can potentially leave your system in a non-functioning state or cause applications to not work correctly or crash.
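The corresponding commands, collected as a sketch; path and package_name are placeholders:
~]# yum provides "*/named"
~]# yum install httpd
~]# yum localinstall path
~]# yum remove package_name
~]# yum remove totem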
9.3. Working with Package Groups A package group is a collection of packages that serve a common purpose, for instance System Tools or Sound and Video . Installing a package group pulls a set of dependent packages, saving time considerably. The yum groups command is a top-level command that covers all the operations that act on package groups in yum. 9.3.1. Listing Package Groups The summary option is used to view the number of installed groups, available groups, available environment groups, and both installed and available language groups: Example 9.14. Example output of yum groups summary To list all package groups from yum repositories add the list option. You can filter the command output by group names. Several optional arguments can be passed to this command, including hidden to also list groups not marked as user visible, and ids to list group IDs. You can add language , environment , installed , or available options to reduce the command output to a specific group type. To list mandatory and optional packages contained in a particular group, use the following command: Example 9.15. Viewing information on the LibreOffice package group As you can see in the above example, the packages included in the package group can have different states that are marked with the following symbols: " - " - Package is not installed and it will not be installed as a part of the package group. " + " - Package is not installed but it will be installed on the next yum upgrade or yum group upgrade . " = " - Package is installed and it was installed as a part of the package group. no symbol - Package is installed but it was installed outside of the package group. This means that the yum group remove will not remove these packages. These distinctions take place only when the group_command configuration parameter is set to objects , which is the default setting. Set this parameter to a different value if you do not want yum to track if a package was installed as a part of the group or separately, which will make " no symbol " packages equivalent to " = " packages. You can alter the above package states using the yum group mark command. For example, yum group mark packages marks any given installed packages as members of a specified group. To avoid installation of new packages on group update, use yum group mark blacklist . See the yum (8) man page for more information on capabilities of yum group mark . Note You can identify an environmental group with use of the @^ prefix and a package group can be marked with @ . When using yum group list , info , install , or remove , pass @group_name to specify a package group, @^group_name to specify an environmental group, or group_name to include both.
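A short digest of the group listing commands described above:
~]# yum groups summary
~]# yum group list
~]# yum group info LibreOffice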
9.3.2. Installing a Package Group Each package group has a name and a group ID ( groupid ). To list the names of all package groups, and their group IDs, which are displayed in parentheses, type: Example 9.16. Finding name and groupid of a package group To find the name or ID of a package group, for example a group related to the KDE desktop environment, type: Some groups are hidden by settings in the configured repositories. For example, on a server, make use of the hidden command option to list hidden groups too: You can install a package group by passing its full group name, without the groupid part, to the group install command. As root , type: You can also install by groupid. As root , execute the following command: You can pass the groupid or quoted group name to the install command if you prepend it with an @ symbol, which tells yum that you want to perform group install . As root , type: Replace group with the groupid or quoted group name. The same logic applies to environmental groups: Example 9.17. Four equivalent ways of installing the KDE Desktop group As mentioned before, you can use four alternative, but equivalent ways to install a package group. For KDE Desktop, the commands look as follows (a consolidated sketch appears after Example 9.18): 9.3.3. Removing a Package Group You can remove a package group using syntax similar to the install syntax, using either the name of the package group or its id. As root , type: Also, you can pass the groupid or quoted name to the remove command if you prepend it with an @ symbol, which tells yum that you want to perform group remove . As root , type: Replace group with the groupid or quoted group name. Similarly, you can remove an environmental group: Example 9.18. Four equivalent ways of removing the KDE Desktop group Similarly to install, you can use four alternative, but equivalent ways to remove a package group. For KDE Desktop, the commands look as follows:
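Assuming the group ID is kde-desktop, the four equivalent install forms from Example 9.17 can be sketched as:
~]# yum group install "KDE Desktop"
~]# yum group install kde-desktop
~]# yum install @"KDE Desktop"
~]# yum install @kde-desktop
and the four equivalent remove forms from Example 9.18 as:
~]# yum group remove "KDE Desktop"
~]# yum group remove kde-desktop
~]# yum remove @"KDE Desktop"
~]# yum remove @kde-desktop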
9.4. Working with Transaction History The yum history command enables users to review information about a timeline of yum transactions, the dates and times they occurred, the number of packages affected, whether these transactions succeeded or were aborted, and if the RPM database was changed between transactions. Additionally, this command can be used to undo or redo certain transactions. All history data is stored in the history DB in the /var/lib/yum/history/ directory. 9.4.1. Listing Transactions To display a list of the twenty most recent transactions, as root , either run yum history with no additional arguments, or type the following at a shell prompt: To display all transactions, add the all keyword: To display only transactions in a given range, use the command in the following form: You can also list only transactions regarding a particular package or packages. To do so, use the command with a package name or a glob expression: Example 9.19. Listing the five oldest transactions In the output of yum history list , the most recent transaction is displayed at the top of the list. To display information about the five oldest transactions stored in the history database, type: All forms of the yum history list command produce tabular output with each row consisting of the following columns: ID - an integer value that identifies a particular transaction. Login user - the name of the user whose login session was used to initiate a transaction. This information is typically presented in the Full Name <username> form. For transactions that were not issued by a user (such as an automatic system update), System <unset> is used instead. Date and time - the date and time when a transaction was issued. Action(s) - a list of actions that were performed during a transaction as described in Table 9.1, "Possible values of the Action(s) field" . Altered - the number of packages that were affected by a transaction, possibly followed by additional information as described in Table 9.2, "Possible values of the Altered field" . Table 9.1. Possible values of the Action(s) field Action Abbreviation Description Downgrade D At least one package has been downgraded to an older version. Erase E At least one package has been removed. Install I At least one new package has been installed. Obsoleting O At least one package has been marked as obsolete. Reinstall R At least one package has been reinstalled. Update U At least one package has been updated to a newer version. Table 9.2. Possible values of the Altered field Symbol Description < Before the transaction finished, the rpmdb database was changed outside yum. > After the transaction finished, the rpmdb database was changed outside yum. * The transaction failed to finish. # The transaction finished successfully, but yum returned a non-zero exit code. E The transaction finished successfully, but an error or a warning was displayed. P The transaction finished successfully, but problems already existed in the rpmdb database. s The transaction finished successfully, but the --skip-broken command-line option was used and certain packages were skipped. To synchronize the rpmdb or yumdb database contents for any installed package with the currently used rpmdb or yumdb database, type the following: To display some overall statistics about the currently used history database use the following command: Example 9.20. Example output of yum history stats Yum also enables you to display a summary of all past transactions. To do so, run the command in the following form as root : To display only transactions in a given range, type: Similarly to the yum history list command, you can also display a summary of transactions regarding a certain package or packages by supplying a package name or a glob expression: Example 9.21. Summary of the five latest transactions All forms of the yum history summary command produce simplified tabular output similar to the output of yum history list . As shown above, both yum history list and yum history summary are oriented towards transactions, and although they allow you to display only transactions related to a given package or packages, they lack important details, such as package versions. To list transactions from the perspective of a package, run the following command as root : Example 9.22. Tracing the history of a package For example, to trace the history of subscription-manager and related packages, type the following at a shell prompt: In this example, three packages were installed during the initial system installation: subscription-manager , subscription-manager-firstboot , and subscription-manager-gui . In the third transaction, all these packages were updated from version 1.10.11 to version 1.10.17.
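A digest of the listing commands described in this section; the id ranges and glob expressions follow the forms explained above:
~]# yum history list
~]# yum history list all
~]# yum history list 1..5
~]# yum history sync
~]# yum history stats
~]# yum history summary 1..5
~]# yum history package-list subscription-manager\*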
9.4.2. Examining Transactions To display the summary of a single transaction, as root , use the yum history summary command in the following form: Here, id stands for the ID of the transaction. To examine a particular transaction or transactions in more detail, run the following command as root : The id argument is optional and when you omit it, yum automatically uses the last transaction. Note that when specifying more than one transaction, you can also use a range: Example 9.23. Example output of yum history info The following is sample output for two transactions, each installing one new package: You can also view additional information, such as what configuration options were used at the time of the transaction, or from what repository and why certain packages were installed. To determine what additional information is available for a certain transaction, type the following at a shell prompt as root : Similarly to yum history info , when no id is provided, yum automatically uses the latest transaction. Another way to refer to the latest transaction is to use the last keyword: Example 9.24. Example output of yum history addon-info For the fourth transaction in the history, the yum history addon-info command provides the following output: In the output of the yum history addon-info command, three types of information are available: config-main - global yum options that were in use during the transaction. See Section 9.5.1, "Setting [main] Options" for information on how to change global options. config-repos - options for individual yum repositories. See Section 9.5.2, "Setting [repository] Options" for information on how to change options for individual repositories. saved_tx - the data that can be used by the yum load-transaction command in order to repeat the transaction on another machine (see below). To display a selected type of additional information, run the following command as root : 9.4.3. Reverting and Repeating Transactions Apart from reviewing the transaction history, the yum history command provides means to revert or repeat a selected transaction. To revert a transaction, type the following at a shell prompt as root : To repeat a particular transaction, as root , run the following command: Both commands also accept the last keyword to undo or repeat the latest transaction. Note that both yum history undo and yum history redo commands only revert or repeat the steps that were performed during a transaction. If the transaction installed a new package, the yum history undo command will uninstall it, and if the transaction uninstalled a package the command will again install it. This command also attempts to downgrade all updated packages to their previous version, if these older packages are still available. When managing several identical systems, yum also enables you to perform a transaction on one of them, store the transaction details in a file, and after a period of testing, repeat the same transaction on the remaining systems as well. To store the transaction details to a file, type the following at a shell prompt as root : Once you copy this file to the target system, you can repeat the transaction by using the following command as root : You can configure load-transaction to ignore missing packages or rpmdb version. For more information on these configuration options see the yum.conf (5) man page. 9.4.4. Starting New Transaction History Yum stores the transaction history in a single SQLite database file. To start a new transaction history, run the following command as root : This will create a new, empty database file in the /var/lib/yum/history/ directory. The old transaction history will be kept, but will not be accessible as long as a newer database file is present in the directory.
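A sketch of the examining, reverting, repeating, and history-management commands from this section, with id and file_name as placeholders:
~]# yum history info id
~]# yum history addon-info id
~]# yum history undo id
~]# yum history redo id
~]# yum -q history addon-info id saved_tx > file_name
~]# yum load-transaction file_name
~]# yum history new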
9.5. Configuring Yum and Yum Repositories Note To expand your expertise, you might also be interested in the Red Hat System Administration III (RH254) and RHCSA Rapid Track (RH199) training courses. The configuration information for yum and related utilities is located at /etc/yum.conf . This file contains one mandatory [main] section, which enables you to set yum options that have global effect, and can also contain one or more [ repository ] sections, which allow you to set repository-specific options. However, it is recommended to define individual repositories in new or existing .repo files in the /etc/yum.repos.d/ directory. The values you define in individual [ repository ] sections of the /etc/yum.conf file override values set in the [main] section. This section shows you how to: set global yum options by editing the [main] section of the /etc/yum.conf configuration file; set options for individual repositories by editing the [ repository ] sections in /etc/yum.conf and .repo files in the /etc/yum.repos.d/ directory; use yum variables in /etc/yum.conf and files in the /etc/yum.repos.d/ directory so that dynamic version and architecture values are handled correctly; add, enable, and disable yum repositories on the command line; and set up your own custom yum repository. 9.5.1. Setting [main] Options The /etc/yum.conf configuration file contains exactly one [main] section, and while some of the key-value pairs in this section affect how yum operates, others affect how yum treats repositories. You can add many additional options under the [main] section heading in /etc/yum.conf . A sample /etc/yum.conf configuration file can look like this: The following are the most commonly used options in the [main] section: assumeyes = value The assumeyes option determines whether or not yum prompts for confirmation of critical actions. Replace value with one of: 0 ( default ) - yum prompts for confirmation of critical actions it performs. 1 - Do not prompt for confirmation of critical yum actions. If assumeyes=1 is set, yum behaves in the same way as the command-line options -y and --assumeyes . cachedir = directory Use this option to set the directory where yum stores its cache and database files. Replace directory with an absolute path to the directory. By default, yum's cache directory is /var/cache/yum/$basearch/$releasever/ . See Section 9.5.3, "Using Yum Variables" for descriptions of the $basearch and $releasever yum variables. debuglevel = value This option specifies the detail of debugging output produced by yum. Here, value is an integer between 1 and 10 . Setting a higher debuglevel value causes yum to display more detailed debugging output. debuglevel=2 is the default, while debuglevel=0 disables debugging output. exactarch = value With this option, you can set yum to consider the exact architecture when updating already installed packages. Replace value with: 0 - Do not take into account the exact architecture when updating packages. 1 ( default ) - Consider the exact architecture when updating packages. With this setting, yum does not install a package for the 32-bit architecture to update a package already installed on the system with the 64-bit architecture. exclude = package_name more_package_names The exclude option enables you to exclude packages by keyword during installation or system update. Listing multiple packages for exclusion can be accomplished by quoting a space-delimited list of packages. Shell glob expressions using wildcards (for example, * and ? ) are allowed. gpgcheck = value Use the gpgcheck option to specify if yum should perform a GPG signature check on packages. Replace value with: 0 - Disable GPG signature-checking on packages in all repositories, including local package installation. 1 ( default ) - Enable checking of GPG signature on all packages in all repositories, including local package installation. With gpgcheck enabled, all packages' signatures are checked. If this option is set in the [main] section of the /etc/yum.conf file, it sets the GPG-checking rule for all repositories. However, you can also set gpgcheck= value for individual repositories instead; that is, you can enable GPG-checking on one repository while disabling it on another. Setting gpgcheck= value for an individual repository in its corresponding .repo file overrides the default if it is present in /etc/yum.conf .
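As a brief illustration, a [main] section can combine the options above as follows; the package globs after exclude are hypothetical:
assumeyes=0
exclude=emacs* gstreamer*
gpgcheck=1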
group_command = value Use the group_command option to specify how the yum group install , yum group upgrade , and yum group remove commands handle a package group. Replace value with one of: simple - Install all members of a package group. Upgrade only previously installed packages, but do not install packages that have been added to the group in the meantime. compat - Similar to simple but yum upgrade also installs packages that were added to the group since the previous upgrade. objects - ( default ) With this option, yum keeps track of the previously installed groups and distinguishes between packages installed as a part of the group and packages installed separately. See Example 9.15, "Viewing information on the LibreOffice package group" . group_package_types = package_type more_package_types Here you can specify which type of packages ( optional , default , or mandatory ) is installed when the yum group install command is called. The default and mandatory package types are chosen by default. history_record = value With this option, you can set yum to record transaction history. Replace value with one of: 0 - yum should not record history entries for transactions. 1 ( default ) - yum should record history entries for transactions. This operation takes a certain amount of disk space, and some extra time in the transactions, but it provides a lot of information about past operations, which can be displayed with the yum history command. history_record=1 is the default. For more information on the yum history command, see Section 9.4, "Working with Transaction History" . Note Yum uses history records to detect modifications to the rpmdb database that have been done outside of yum. In such a case, yum displays a warning and automatically searches for possible problems caused by altering rpmdb . With history_record turned off, yum is not able to detect these changes and no automatic checks are performed. installonlypkgs = space separated list of packages Here you can provide a space-separated list of packages which yum can install , but will never update . See the yum.conf (5) manual page for the list of packages which are install-only by default. If you add the installonlypkgs directive to /etc/yum.conf , ensure that you list all of the packages that should be install-only, including any of those listed under the installonlypkgs section of yum.conf (5). In particular, make sure that kernel packages are always listed in installonlypkgs (as they are by default), and installonly_limit is always set to a value greater than 2 so that a backup kernel is always available in case the default one fails to boot. installonly_limit = value This option sets how many packages listed in the installonlypkgs directive can be installed at the same time. Replace value with an integer representing the maximum number of versions that can be installed simultaneously for any single package listed in installonlypkgs . The defaults for the installonlypkgs directive include several different kernel packages, so be aware that changing the value of installonly_limit also affects the maximum number of installed versions of any single kernel package. The default value listed in /etc/yum.conf is installonly_limit=3 , and the minimum possible value is installonly_limit=2 . You cannot set installonly_limit=1 because that would make yum remove the running kernel, which is prohibited. If installonly_limit=1 is used, yum fails. Using installonly_limit=2 ensures that one backup kernel is available. However, it is recommended to keep the default setting installonly_limit=3 , so that you have two backup kernels available.
keepcache = value The keepcache option determines whether yum keeps the cache of headers and packages after a successful installation. Here, value is one of: 0 ( default ) - Do not retain the cache of headers and packages after a successful installation. 1 - Retain the cache after a successful installation. logfile = file_name To specify the location for logging output, replace file_name with an absolute path to the file in which yum should write its logging output. By default, yum logs to /var/log/yum.log . max_connections = number Here, number stands for the maximum number of simultaneous connections; the default is 5. multilib_policy = value The multilib_policy option sets the installation behavior if several architecture versions are available for a package installation. Here, value stands for: best - install the best-choice architecture for this system. For example, setting multilib_policy=best on an AMD64 system causes yum to install the 64-bit versions of all packages. all - always install every possible architecture for every package. For example, with multilib_policy set to all on an AMD64 system, yum would install both the i686 and AMD64 versions of a package, if both were available. obsoletes = value The obsoletes option enables the obsoletes processing logic during updates. When one package declares in its spec file that it obsoletes another package, the latter package is replaced by the former package when the former package is installed. Obsoletes are declared, for example, when a package is renamed. Replace value with one of: 0 - Disable yum's obsoletes processing logic when performing updates. 1 ( default ) - Enable yum's obsoletes processing logic when performing updates. plugins = value This is a global switch to enable or disable yum plug-ins; value is one of: 0 - Disable all yum plug-ins globally. Important Disabling all plug-ins is not advised because certain plug-ins provide important yum services. In particular, the product-id and subscription-manager plug-ins provide support for the certificate-based Content Delivery Network ( CDN ). Disabling plug-ins globally is provided as a convenience option, and is generally only recommended when diagnosing a potential problem with yum. 1 ( default ) - Enable all yum plug-ins globally. With plugins=1 , you can still disable a specific yum plug-in by setting enabled=0 in that plug-in's configuration file. For more information about various yum plug-ins, see Section 9.6, "Yum Plug-ins" . For further information on controlling plug-ins, see Section 9.6.1, "Enabling, Configuring, and Disabling Yum Plug-ins" . reposdir = directory Here, directory is an absolute path to the directory where .repo files are located. All .repo files contain repository information (similar to the [ repository ] sections of /etc/yum.conf ). Yum collects all repository information from .repo files and the [ repository ] section of the /etc/yum.conf file to create a master list of repositories to use for transactions. If reposdir is not set, yum uses the default directory /etc/yum.repos.d/ . retries = value This option sets the number of times yum should attempt to retrieve a file before returning an error. value is an integer 0 or greater. Setting value to 0 makes yum retry forever. The default value is 10 . For a complete list of available [main] options, see the [main] OPTIONS section of the yum.conf (5) manual page.
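For reference, the sample /etc/yum.conf file mentioned at the beginning of this section sets many of these options to their documented defaults:
[main]
cachedir=/var/cache/yum/$basearch/$releasever
keepcache=0
debuglevel=2
logfile=/var/log/yum.log
exactarch=1
obsoletes=1
gpgcheck=1
plugins=1
installonly_limit=3
# PUT YOUR REPOS HERE OR IN separate files named file.repo in /etc/yum.repos.d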
9.5.2. Setting [repository] Options The [ repository ] sections, where repository is a unique repository ID such as my_personal_repo (spaces are not permitted), allow you to define individual yum repositories. To avoid conflicts, custom repositories should not use names used by Red Hat repositories. The following is a bare minimum example of the form a [ repository ] section takes:
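In outline, using the placeholder names that are explained below:
[repository]
name=repository_name
baseurl=repository_url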
Every [ repository ] section must contain the following directives: name = repository_name Here, repository_name is a human-readable string describing the repository. baseurl = repository_url Replace repository_url with a URL to the directory where the repodata directory of a repository is located: If the repository is available over HTTP, use: http://path/to/repo If the repository is available over FTP, use: ftp://path/to/repo If the repository is local to the machine, use: file:///path/to/local/repo If a specific online repository requires basic HTTP authentication, you can specify your user name and password by prepending it to the URL as username : password @ link . For example, if a repository on http://www.example.com/repo/ requires a user name of "user" and a password of "password", then the baseurl link could be specified as http://user:password@www.example.com/repo/ . Usually this URL is an HTTP link, such as: Note that yum always expands the $releasever , $arch , and $basearch variables in URLs. For more information about yum variables, see Section 9.5.3, "Using Yum Variables" . Other useful [ repository ] directives are: enabled = value This is a simple way to tell yum to use or ignore a particular repository; value is one of: 0 - Do not include this repository as a package source when performing updates and installs. This is an easy way of quickly turning repositories on and off, which is useful when you desire a single package from a repository that you do not want to enable for updates or installs. 1 - Include this repository as a package source. Turning repositories on and off can also be performed by passing either the --enablerepo= repo_name or --disablerepo= repo_name option to yum , or through the Add/Remove Software window of the PackageKit utility. async = value Controls parallel downloading of repository packages. Here, value is one of: auto ( default ) - parallel downloading is used if possible, which means that yum automatically disables it for repositories created by plug-ins to avoid failures. on - parallel downloading is enabled for the repository. off - parallel downloading is disabled for the repository. Many more [ repository ] options exist; many of them have the same form and function as certain [main] options. For a complete list, see the [repository] OPTIONS section of the yum.conf (5) manual page. Example 9.25. A sample /etc/yum.repos.d/redhat.repo file The following is a sample /etc/yum.repos.d/redhat.repo file: 9.5.3. Using Yum Variables You can use and reference the following built-in variables in yum commands and in all yum configuration files (that is, /etc/yum.conf and all .repo files in the /etc/yum.repos.d/ directory): $releasever You can use this variable to reference the release version of Red Hat Enterprise Linux. Yum obtains the value of $releasever from the distroverpkg= value line in the /etc/yum.conf configuration file. If there is no such line in /etc/yum.conf , then yum infers the correct value by deriving the version number from the redhat-release product package that provides the redhat-release file. $arch You can use this variable to refer to the system's CPU architecture as returned when calling Python's os.uname() function. Valid values for $arch include: i586 , i686 , and x86_64 . $basearch You can use $basearch to reference the base architecture of the system. For example, i686 and i586 machines both have a base architecture of i386 , and AMD64 and Intel 64 machines have a base architecture of x86_64 . $YUM0-9 These ten variables are each replaced with the value of any shell environment variables with the same name. If one of these variables is referenced (in /etc/yum.conf for example) and a shell environment variable with the same name does not exist, then the configuration file variable is not replaced. To define a custom variable or to override the value of an existing one, create a file with the same name as the variable (without the "$" sign) in the /etc/yum/vars/ directory, and add the desired value on its first line. For example, repository descriptions often include the operating system name. To define a new variable called $osname , create a new file with "Red Hat Enterprise Linux" on the first line and save it as /etc/yum/vars/osname : Instead of "Red Hat Enterprise Linux 7", you can now use the following in the .repo files: 9.5.4. Viewing the Current Configuration To display the current values of global yum options (that is, the options specified in the [main] section of the /etc/yum.conf file), execute the yum-config-manager command with no command-line options: To list the content of a different configuration section or sections, use the command in the following form: You can also use a glob expression to display the configuration of all matching sections: Example 9.26. Viewing configuration of the main section To list all configuration options and their corresponding values for the main section, type the following at a shell prompt: 9.5.5. Adding, Enabling, and Disabling a Yum Repository Note To expand your expertise, you might also be interested in the Red Hat System Administration III (RH254) training course. Section 9.5.2, "Setting [repository] Options" describes various options you can use to define a yum repository. This section explains how to add, enable, and disable a repository by using the yum-config-manager command. Important When the system is registered with Red Hat Subscription Management to the certificate-based Content Delivery Network ( CDN ), the Red Hat Subscription Manager tools are used to manage repositories in the /etc/yum.repos.d/redhat.repo file. Adding a Yum Repository To define a new repository, you can either add a [ repository ] section to the /etc/yum.conf file, or to a .repo file in the /etc/yum.repos.d/ directory. All files with the .repo file extension in this directory are read by yum, and it is recommended to define your repositories here instead of in /etc/yum.conf . Warning Obtaining and installing software packages from unverified or untrusted software sources other than Red Hat's certificate-based Content Delivery Network ( CDN ) constitutes a potential security risk, and could lead to security, stability, compatibility, and maintainability issues. Yum repositories commonly provide their own .repo file. To add such a repository to your system and enable it, run the following command as root : ...where repository_url is a link to the .repo file. Example 9.27. Adding example.repo To add a repository located at http://www.example.com/example.repo , type the following at a shell prompt:
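For the repository in this example, the command takes the following shape:
~]# yum-config-manager --add-repo http://www.example.com/example.repo
yum downloads the file and saves it as /etc/yum.repos.d/example.repo .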
Enabling a Yum Repository To enable a particular repository or repositories, type the following at a shell prompt as root : ...where repository is the unique repository ID (use yum repolist all to list available repository IDs). Alternatively, you can use a glob expression to enable all matching repositories: Example 9.28. Enabling repositories defined in custom sections of /etc/yum.conf. To enable repositories defined in the [example] , [example-debuginfo] , and [example-source] sections, type: Example 9.29. Enabling all repositories To enable all repositories defined both in the /etc/yum.conf file and in the /etc/yum.repos.d/ directory, type: When successful, the yum-config-manager --enable command displays the current repository configuration. Disabling a Yum Repository To disable a yum repository, run the following command as root : ...where repository is the unique repository ID (use yum repolist all to list available repository IDs). Similarly to yum-config-manager --enable , you can use a glob expression to disable all matching repositories at the same time: Example 9.30. Disabling all repositories To disable all repositories defined both in the /etc/yum.conf file and in the /etc/yum.repos.d/ directory, type: When successful, the yum-config-manager --disable command displays the current configuration. 9.5.6. Creating a Yum Repository To set up a yum repository: Install the createrepo package: Copy all packages for your new repository into one directory, such as /tmp/local_repo/ : To create the repository, run: This creates the necessary metadata for the yum repository and places the metadata in a newly created subdirectory, repodata . The repository is now ready to be consumed by yum. This repository can be shared over the HTTP or FTP protocol, or referred to directly from the local machine. See Section 9.5.2, "Setting [repository] Options" for more details on how to configure a yum repository. Note When constructing the URL for a repository, refer to /tmp/local_repo , not to /tmp/local_repo/repodata , as the latter directory contains only metadata. Actual yum packages are in /tmp/local_repo . 9.5.6.1. Adding packages to an already created yum repository To add packages to an already created yum repository: Copy the new packages to your repository directory, such as /tmp/local_repo/ : To reflect the newly added packages in the metadata, run: Optional: If you have already used any yum command with the newly updated repository, run:
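A consolidated sketch of this procedure and the one above, using the directories given in the steps (the /your/packages/ source path is a placeholder):
~]# yum install createrepo
~]# cp /your/packages/*.rpm /tmp/local_repo/
~]# createrepo /tmp/local_repo/
When adding packages later:
~]# cp /your/packages/*.rpm /tmp/local_repo/
~]# createrepo --update /tmp/local_repo/
~]# yum clean expire-cache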
9.5.7. Adding the Optional and Supplementary Repositories The Optional and Supplementary subscription channels provide additional software packages for Red Hat Enterprise Linux that cover open source licensed software (in the Optional channel) and proprietary licensed software (in the Supplementary channel). Before subscribing to the Optional and Supplementary channels, see the Scope of Coverage Details . If you decide to install packages from these channels, follow the steps documented in the article called How to access Optional and Supplementary channels, and -devel packages using Red Hat Subscription Manager (RHSM)? on the Red Hat Customer Portal. 9.6. Yum Plug-ins Yum provides plug-ins that extend and enhance its operations. Certain plug-ins are installed by default. Yum always informs you which plug-ins, if any, are loaded and active whenever you call any yum command. For example: Note that the plug-in names which follow Loaded plugins are the names you can provide to the --disableplugin= plugin_name option. 9.6.1. Enabling, Configuring, and Disabling Yum Plug-ins To enable yum plug-ins, ensure that a line beginning with plugins= is present in the [main] section of /etc/yum.conf , and that its value is 1 : You can disable all plug-ins by changing this line to plugins=0 . Important Disabling all plug-ins is not advised because certain plug-ins provide important yum services. In particular, the product-id and subscription-manager plug-ins provide support for the certificate-based Content Delivery Network ( CDN ). Disabling plug-ins globally is provided as a convenience option, and is generally only recommended when diagnosing a potential problem with yum. Every installed plug-in has its own configuration file in the /etc/yum/pluginconf.d/ directory. You can set plug-in-specific options in these files. For example, here is the aliases plug-in's aliases.conf configuration file: Similarly to the /etc/yum.conf file, the plug-in configuration files always contain a [main] section where the enabled= option controls whether the plug-in is enabled when you run yum commands. If this option is missing, you can add it manually to the file. If you disable all plug-ins by setting plugins=0 in /etc/yum.conf , then all plug-ins are disabled regardless of whether they are enabled in their individual configuration files. If you merely want to disable all yum plug-ins for a single yum command, use the --noplugins option. If you want to disable one or more yum plug-ins for a single yum command, add the --disableplugin= plugin_name option to the command. For example, to disable the aliases plug-in while updating a system, type: The plug-in names you provide to the --disableplugin= option are the same names listed after the Loaded plugins line in the output of any yum command. You can disable multiple plug-ins by separating their names with commas. In addition, you can match multiple plug-in names or shorten long ones by using glob expressions: 9.6.2. Installing Additional Yum Plug-ins Yum plug-ins usually adhere to the yum-plugin- plugin_name package-naming convention, but not always: the package which provides the kabi plug-in is named kabi-yum-plugins , for example. You can install a yum plug-in in the same way you install other packages. For instance, to install the yum-aliases plug-in, type the following at a shell prompt:
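As with any other package:
~]# yum install yum-plugin-aliases
Note that the package name follows the yum-plugin- naming convention described above, even though the plug-in itself is simply called aliases .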
9.6.3. Working with Yum Plug-ins The following list provides descriptions and usage instructions for several useful yum plug-ins. Plug-ins are listed by name; the name of the package that provides each plug-in is given in brackets. search-disabled-repos ( subscription-manager ) The search-disabled-repos plug-in allows you to temporarily or permanently enable disabled repositories to help resolve dependencies. With this plug-in enabled, when Yum fails to install a package due to failed dependency resolution, it offers to temporarily enable disabled repositories and try again. If the installation succeeds, Yum also offers to enable the used repositories permanently. Note that the plug-in works only with the repositories that are managed by subscription-manager and not with custom repositories. Important If yum is executed with the --assumeyes or -y option, or if the assumeyes directive is enabled in /etc/yum.conf , the plug-in enables disabled repositories, both temporarily and permanently, without prompting for confirmation. This may lead to problems, for example, enabling repositories that you do not want enabled. To configure the search-disabled-repos plug-in, edit the configuration file located in /etc/yum/pluginconf.d/search-disabled-repos.conf . For the list of directives you can use in the [main] section, see the table below; a minimal configuration sketch follows at the end of this section. Table 9.3. Supported search-disabled-repos.conf directives Directive Description enabled = value Allows you to enable or disable the plug-in. The value must be either 1 (enabled), or 0 (disabled). The plug-in is enabled by default. notify_only = value Allows you to restrict the behavior of the plug-in to notifications only. The value must be either 1 (notify only without modifying the behavior of Yum), or 0 (modify the behavior of Yum). By default, the plug-in only notifies the user. ignored_repos = repositories Allows you to specify the repositories that will not be enabled by the plug-in. kabi ( kabi-yum-plugins ) The kabi plug-in checks whether a driver update package conforms with the official Red Hat kernel Application Binary Interface ( kABI ). With this plug-in enabled, when a user attempts to install a package that uses kernel symbols which are not on a whitelist, a warning message is written to the system log. Additionally, configuring the plug-in to run in enforcing mode prevents such packages from being installed at all. To configure the kabi plug-in, edit the configuration file located in /etc/yum/pluginconf.d/kabi.conf . A list of directives that can be used in the [main] section is shown in the table below. Table 9.4. Supported kabi.conf directives Directive Description enabled = value Allows you to enable or disable the plug-in. The value must be either 1 (enabled), or 0 (disabled). When installed, the plug-in is enabled by default. whitelists = directory Allows you to specify the directory in which the files with supported kernel symbols are located. By default, the kabi plug-in uses files provided by the kernel-abi-whitelists package (that is, the /usr/lib/modules/kabi-rhel70/ directory). enforce = value Allows you to enable or disable enforcing mode. The value must be either 1 (enabled), or 0 (disabled). By default, this option is commented out and the kabi plug-in only displays a warning message. product-id ( subscription-manager ) The product-id plug-in manages product identity certificates for products installed from the Content Delivery Network. The product-id plug-in is installed by default. langpacks ( yum-langpacks ) The langpacks plug-in is used to search for locale packages of a selected language for every package that is installed. The langpacks plug-in is installed by default. aliases ( yum-plugin-aliases ) The aliases plug-in adds the alias command-line option which enables configuring and using aliases for yum commands. yum-changelog ( yum-plugin-changelog ) The yum-changelog plug-in adds the --changelog command-line option that enables viewing package change logs before and after updating. yum-tmprepo ( yum-plugin-tmprepo ) The yum-tmprepo plug-in adds the --tmprepo command-line option that takes the URL of a repository file, downloads it, and enables it for only one transaction. This plug-in tries to ensure the safe temporary usage of repositories. By default, it does not allow you to disable the GPG check. yum-verify ( yum-plugin-verify ) The yum-verify plug-in adds the verify , verify-rpm , and verify-all command-line options for viewing verification data on the system. yum-versionlock ( yum-plugin-versionlock ) The yum-versionlock plug-in excludes other versions of selected packages, which enables protecting packages from being updated by newer versions. With the versionlock command-line option, you can view and edit the list of locked packages.
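As noted above, a minimal sketch of /etc/yum/pluginconf.d/search-disabled-repos.conf that spells out the documented defaults from Table 9.3:
[main]
enabled=1
notify_only=1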
9.7. Automatically Refreshing Package Database and Downloading Updates with Yum-cron The yum-cron service checks and downloads package updates automatically. The cron jobs provided by the yum-cron service are active immediately after installation of the yum-cron package. The yum-cron service can also automatically install downloaded updates. With default settings, the yum-cron service: Updates the metadata in the yum cache once per hour. Downloads pending package updates to the yum cache once per day. If new packages are available in the repository, an email is sent. See Section 9.7.2, "Setting up Optional Email Notifications" for more information. The yum-cron service has two configuration files: /etc/yum/yum-cron.conf For daily tasks. /etc/yum/yum-cron-hourly.conf For hourly tasks. 9.7.1. Enabling Automatic Installation of Updates To enable automatic installation of downloaded updates, edit the daily configuration file for daily installation or the hourly configuration file for hourly installation by setting the apply_updates option as follows: 9.7.2. Setting up Optional Email Notifications By default, the yum-cron service uses cron to send emails containing an output of the executed command. This email is sent according to the cron configuration, typically to the local superuser, and stored in the /var/spool/mail/root file. You can use an email configuration specific to yum-cron, different from the settings that affect all cron jobs. However, this email configuration does not support TLS, and the built-in email logic is very basic. To enable yum-cron built-in email notifications: Open the selected yum-cron configuration file: /etc/yum/yum-cron.conf For daily tasks. /etc/yum/yum-cron-hourly.conf For hourly tasks. In the [emitters] section, set the following option: Set the email_from , email_to , and email_host options as required. 9.7.3. Enabling or Disabling Specific Repositories The yum-cron service does not support repository-specific configuration. As a workaround for enabling or disabling specific repositories for yum-cron, but not for yum in general, follow the steps below: Create an empty repository configuration directory anywhere on the system. Copy all configuration files from the /etc/yum.repos.d/ directory to this newly created directory. In the respective .repo configuration file within the /etc/yum.repos.d/ directory, set the enabled option as follows: enabled = 1 To enable the repository. enabled = 0 To disable the repository. Add the following option, which points to the newly created repository directory, at the end of the selected yum-cron configuration file: 9.7.4. Testing Yum-cron Settings To test yum-cron settings without waiting for the scheduled yum-cron task: Open the selected yum-cron configuration file: /etc/yum/yum-cron.conf For daily tasks. /etc/yum/yum-cron-hourly.conf For hourly tasks. Set the random_sleep option in the selected configuration file as follows: Run yum-cron with the selected configuration file: 9.7.5. Disabling Yum-cron Messages The yum-cron messages cannot be entirely disabled, but they can be limited to messages with critical priority only. To limit the messages: Open the selected yum-cron configuration file: /etc/yum/yum-cron.conf For daily tasks. /etc/yum/yum-cron-hourly.conf For hourly tasks. Set the following option in the [base] section of the configuration file: 9.7.6. Automatically Cleaning Packages The yum-cron service does not support any configuration option for removing packages similar to the yum clean all command. To clean packages automatically, you can create a cron job as an executable shell script: Create a shell script in the /etc/cron.daily/ directory containing:
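A minimal sketch of such a script, assuming a plain yum clean all is the desired clean-up policy; the script file name itself is up to you:
#!/bin/sh
yum clean all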
Make the script executable: 9.8. Additional Resources For more information on how to manage software packages on Red Hat Enterprise Linux, see the resources listed below. Installed Documentation yum (8) - The manual page for the yum command-line utility provides a complete list of supported options and commands. yumdb (8) - The manual page for the yumdb command-line utility documents how to use this tool to query and, if necessary, alter the yum database. yum.conf (5) - The manual page named yum.conf documents available yum configuration options. yum-utils (1) - The manual page named yum-utils lists and briefly describes additional utilities for managing yum configuration, manipulating repositories, and working with the yum database. Online Resources Yum Guides - The Yum Guides page on the project home page provides links to further documentation. Red Hat Customer Portal Labs - The Red Hat Customer Portal Labs includes a "Yum Repository Configuration Helper". See Also Chapter 6, Gaining Privileges documents how to gain administrative privileges by using the su and sudo commands.
[ "check-update", "~]# yum check-update Loaded plugins: product-id, search-disabled-repos, subscription-manager dracut.x86_64 033-360.el7_2 rhel-7-server-rpms dracut-config-rescue.x86_64 033-360.el7_2 rhel-7-server-rpms kernel.x86_64 3.10.0-327.el7 rhel-7-server-rpms rpm.x86_64 4.11.3-17.el7 rhel-7-server-rpms rpm-libs.x86_64 4.11.3-17.el7 rhel-7-server-rpms rpm-python.x86_64 4.11.3-17.el7 rhel-7-server-rpms yum.noarch 3.4.3-132.el7 rhel-7-server-rpms", "update package_name", "~]# yum update rpm Loaded plugins: langpacks, product-id, subscription-manager Updating Red Hat repositories. INFO:rhsm-app.repolib:repos updated: 0 Setting up Update Process Resolving Dependencies --> Running transaction check ---> Package rpm.x86_64 0:4.11.1-3.el7 will be updated --> Processing Dependency: rpm = 4.11.1-3.el7 for package: rpm-libs-4.11.1-3.el7.x86_64 --> Processing Dependency: rpm = 4.11.1-3.el7 for package: rpm-python-4.11.1-3.el7.x86_64 --> Processing Dependency: rpm = 4.11.1-3.el7 for package: rpm-build-4.11.1-3.el7.x86_64 ---> Package rpm.x86_64 0:4.11.2-2.el7 will be an update --> Running transaction check --> Finished Dependency Resolution Dependencies Resolved ============================================================================= Package Arch Version Repository Size ============================================================================= Updating: rpm x86_64 4.11.2-2.el7 rhel 1.1 M Updating for dependencies: rpm-build x86_64 4.11.2-2.el7 rhel 139 k rpm-build-libs x86_64 4.11.2-2.el7 rhel 98 k rpm-libs x86_64 4.11.2-2.el7 rhel 261 k rpm-python x86_64 4.11.2-2.el7 rhel 74 k Transaction Summary ============================================================================= Upgrade 1 Package (+4 Dependent packages) Total size: 1.7 M Is this ok [y/d/N]:", "group update group_name", "update", "update --security", "update-minimal --security", "mkdir mount_dir", "mount -o loop iso_name mount_dir", "cp mount_dir/media.repo /etc/yum.repos.d/new.repo", "baseurl=file:/// mount_dir", "update", "umount mount_dir", "rmdir mount_dir", "rm /etc/yum.repos.d/ new.repo", "~]# mount -o loop rhel-server-7.1-x86_64-dvd.iso /media/rhel7/", "~]# cp /media/rhel7/media.repo /etc/yum.repos.d/rhel7.repo", "baseurl=file:///media/rhel7/", "~]# yum update", "~]# umount /media/rhel7/", "~]# rmdir /media/rhel7/", "~]# rm /etc/yum.repos.d/rhel7.repo", "search term &hellip;", "~]USD yum search vim gvim emacs Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager ============================= N/S matched: vim ============================== vim -X11.x86_64 : The VIM version of the vi editor for the X Window System vim -common.x86_64 : The common files needed by any version of the VIM editor [output truncated] ============================ N/S matched: emacs ============================= emacs .x86_64 : GNU Emacs text editor emacs -auctex.noarch : Enhanced TeX modes for Emacs [output truncated] Name and summary matches mostly, use \"search all\" for everything. 
Warning: No matches found for: gvim", "list all", "list glob_expression&hellip;", "~]USD yum list abrt-addon\\* abrt-plugin\\* Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager Installed Packages abrt-addon-ccpp.x86_64 2.1.11-35.el7 @rhel-7-server-rpms abrt-addon-kerneloops.x86_64 2.1.11-35.el7 @rhel-7-server-rpms abrt-addon-pstoreoops.x86_64 2.1.11-35.el7 @rhel-7-server-rpms abrt-addon-python.x86_64 2.1.11-35.el7 @rhel-7-server-rpms abrt-addon-vmcore.x86_64 2.1.11-35.el7 @rhel-7-server-rpms abrt-addon-xorg.x86_64 2.1.11-35.el7 @rhel-7-server-rpms", "list installed glob_expression &hellip;", "~]USD yum list installed \"krb?-*\" Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager Installed Packages krb5-libs.x86_64 1.13.2-10.el7 @rhel-7-server-rpms", "list available glob_expression &hellip;", "~]USD yum list available gstreamer*plugin\\* Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager Available Packages gstreamer-plugins-bad-free.i686 0.10.23-20.el7 rhel-7-server-rpms gstreamer-plugins-base.i686 0.10.36-10.el7 rhel-7-server-rpms gstreamer-plugins-good.i686 0.10.31-11.el7 rhel-7-server-rpms gstreamer1-plugins-bad-free.i686 1.4.5-3.el7 rhel-7-server-rpms gstreamer1-plugins-base.i686 1.4.5-2.el7 rhel-7-server-rpms gstreamer1-plugins-base-devel.i686 1.4.5-2.el7 rhel-7-server-rpms gstreamer1-plugins-base-devel.x86_64 1.4.5-2.el7 rhel-7-server-rpms gstreamer1-plugins-good.i686 1.4.5-2.el7 rhel-7-server-rpms", "repolist", "repolist -v", "repoinfo", "repolist all", "info package_name &hellip;", "~]USD yum info abrt Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager Installed Packages Name : abrt Arch : x86_64 Version : 2.1.11 Release : 35.el7 Size : 2.3 M Repo : installed From repo : rhel-7-server-rpms Summary : Automatic bug detection and reporting tool URL : https://fedorahosted.org/abrt/ License : GPLv2+ Description : abrt is a tool to help users to detect defects in applications and : to create a bug report with all information needed by maintainer to fix : it. 
It uses plugin system to extend its functionality.", "yumdb info package_name", "~]USD yumdb info yum Loaded plugins: langpacks, product-id yum-3.4.3-132.el7.noarch changed_by = 1000 checksum_data = a9d0510e2ff0d04d04476c693c0313a11379053928efd29561f9a837b3d9eb02 checksum_type = sha256 command_line = upgrade from_repo = rhel-7-server-rpms from_repo_revision = 1449144806 from_repo_timestamp = 1449144805 installed_by = 4294967295 origin_url = https://cdn.redhat.com/content/dist/rhel/server/7/7Server/x86_64/os/Packages/yum-3.4.3-132.el7.noarch.rpm reason = user releasever = 7Server var_uuid = 147a7d49-b60a-429f-8d8f-3edb6ce6f4a1", "install package_name", "install package_name package_name &hellip;", "install package_name .arch", "~]# yum install sqlite.i686", "install glob_expression &hellip;", "~]# yum install audacious-plugins-\\*", "install /usr/sbin/named", "install-n name", "install-na name.architecture", "install-nevra name-epoch:version-release.architecture", "~]# yum provides \"*bin/named\" Loaded plugins: langpacks, product-id, search-disabled-repos, subscription- : manager 32:bind-9.9.4-14.el7.x86_64 : The Berkeley Internet Name Domain (BIND) DNS : (Domain Name System) server Repo : rhel-7-server-rpms Matched from: Filename : /usr/sbin/named", "~]# yum install httpd Loaded plugins: langpacks, product-id, subscription-manager Resolving Dependencies --> Running transaction check ---> Package httpd.x86_64 0:2.4.6-12.el7 will be updated ---> Package httpd.x86_64 0:2.4.6-13.el7 will be an update --> Processing Dependency: 2.4.6-13.el7 for package: httpd-2.4.6-13.el7.x86_64 --> Running transaction check ---> Package httpd-tools.x86_64 0:2.4.6-12.el7 will be updated ---> Package httpd-tools.x86_64 0:2.4.6-13.el7 will be an update --> Finished Dependency Resolution Dependencies Resolved", "================================================================================ Package Arch Version Repository Size ================================================================================ Updating: httpd x86_64 2.4.6-13.el7 rhel-x86_64-server-7 1.2 M Updating for dependencies: httpd-tools x86_64 2.4.6-13.el7 rhel-x86_64-server-7 77 k Transaction Summary ================================================================================ Upgrade 1 Package (+1 Dependent package) Total size: 1.2 M Is this ok [y/d/N]:", "Downloading packages: Running transaction check Running transaction test Transaction test succeeded Running transaction Updating : httpd-tools-2.4.6-13.el7.x86_64 1/4 Updating : httpd-2.4.6-13.el7.x86_64 2/4 Cleanup : httpd-2.4.6-12.el7.x86_64 3/4 Cleanup : httpd-tools-2.4.6-12.el7.x86_64 4/4 Verifying : httpd-2.4.6-13.el7.x86_64 1/4 Verifying : httpd-tools-2.4.6-13.el7.x86_64 2/4 Verifying : httpd-tools-2.4.6-12.el7.x86_64 3/4 Verifying : httpd-2.4.6-12.el7.x86_64 4/4 Updated: httpd.x86_64 0:2.4.6-13.el7 Dependency Updated: httpd-tools.x86_64 0:2.4.6-13.el7 Complete!", "localinstall path", "Total size: 1.2 M Is this ok [y/d/N]:", "remove package_name &hellip;", "~]# yum remove totem", "groups summary", "~]USD yum groups summary Loaded plugins: langpacks, product-id, subscription-manager Available Environment Groups: 12 Installed Groups: 10 Available Groups: 12", "group list glob_expression &hellip;", "group info glob_expression &hellip;", "~]USD yum group info LibreOffice Loaded plugins: langpacks, product-id, subscription-manager Group: LibreOffice Group-Id: libreoffice Description: LibreOffice Productivity Suite Mandatory Packages: =libreoffice-calc libreoffice-draw 
-libreoffice-emailmerge libreoffice-graphicfilter =libreoffice-impress =libreoffice-math =libreoffice-writer +libreoffice-xsltfilter Optional Packages: libreoffice-base libreoffice-pyuno", "group list ids", "~]USD yum group list ids kde\\* Available environment groups: KDE Plasma Workspaces (kde-desktop-environment) Done", "~]USD yum group list hidden ids kde\\* Loaded plugins: product-id, subscription-manager Available Groups: KDE (kde-desktop) Done", "group install \"group name\"", "group install groupid", "install @ group", "install @^ group", "~]# yum group install \"KDE Desktop\" ~]# yum group install kde-desktop ~]# yum install @\"KDE Desktop\" ~]# yum install @kde-desktop", "group remove group_name", "group remove groupid", "remove @ group", "remove @^ group", "~]# yum group remove \"KDE Desktop\" ~]# yum group remove kde-desktop ~]# yum remove @\"KDE Desktop\" ~]# yum remove @kde-desktop", "history list", "history list all", "history list start_id .. end_id", "history list glob_expression &hellip;", "~]# yum history list 1..5 Loaded plugins: langpacks, product-id, subscription-manager ID | Login user | Date and time | Action(s) | Altered ------------------------------------------------------------------------------- 5 | User <user> | 2013-07-29 15:33 | Install | 1 4 | User <user> | 2013-07-21 15:10 | Install | 1 3 | User <user> | 2013-07-16 15:27 | I, U | 73 2 | System <unset> | 2013-07-16 15:19 | Update | 1 1 | System <unset> | 2013-07-16 14:38 | Install | 1106 history list", "history sync", "history stats", "~]# yum history stats Loaded plugins: langpacks, product-id, subscription-manager File : //var/lib/yum/history/history-2012-08-15.sqlite Size : 2,766,848 Transactions: 41 Begin time : Wed Aug 15 16:18:25 2012 End time : Wed Feb 27 14:52:30 2013 Counts : NEVRAC : 2,204 NEVRA : 2,204 NA : 1,759 NEVR : 2,204 rpm DB : 2,204 yum DB : 2,204 history stats", "history summary", "history summary start_id .. end_id", "history summary glob_expression &hellip;", "~]# yum history summary 1..5 Loaded plugins: langpacks, product-id, subscription-manager Login user | Time | Action(s) | Altered ------------------------------------------------------------------------------- Jaromir ... <jhradilek> | Last day | Install | 1 Jaromir ... <jhradilek> | Last week | Install | 1 Jaromir ... <jhradilek> | Last 2 weeks | I, U | 73 System <unset> | Last 2 weeks | I, U | 1107 history summary", "history package-list glob_expression &hellip;", "~]# yum history package-list subscription-manager\\* Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager ID | Action(s) | Package ------------------------------------------------------------------------------- 2 | Updated | subscription-manager-1.13.22-1.el7.x86_64 EE 2 | Update | 1.15.9-15.el7.x86_64 EE 2 | Obsoleted | subscription-manager-firstboot-1.13.22-1.el7.x86_64 EE 2 | Updated | subscription-manager-gui-1.13.22-1.el7.x86_64 EE 2 | Update | 1.15.9-15.el7.x86_64 EE 2 | Obsoleting | subscription-manager-initial-setup-addon-1.15.9-15.el7.x86_64 EE 1 | Install | subscription-manager-1.13.22-1.el7.x86_64 1 | Install | subscription-manager-firstboot-1.13.22-1.el7.x86_64 1 | Install | subscription-manager-gui-1.13.22-1.el7.x86_64 history package-list", "history summary id", "history info id &hellip;", "history info start_id .. 
end_id", "~]# yum history info 4..5 Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager Transaction ID : 4..5 Begin time : Mon Dec 7 16:51:07 2015 Begin rpmdb : 1252:d2b62b7b5768e855723954852fd7e55f641fbad9 End time : 17:18:49 2015 (27 minutes) End rpmdb : 1253:cf8449dc4c53fc0cbc0a4c48e496a6c50f3d43c5 User : Maxim Svistunov <msvistun> Return-Code : Success Command Line : install tigervnc-server.x86_64 Command Line : reinstall tigervnc-server Transaction performed with: Installed rpm-4.11.3-17.el7.x86_64 @rhel-7-server-rpms Installed subscription-manager-1.15.9-15.el7.x86_64 @rhel-7-server-rpms Installed yum-3.4.3-132.el7.noarch @rhel-7-server-rpms Packages Altered: Reinstall tigervnc-server-1.3.1-3.el7.x86_64 @rhel-7-server-rpms history info", "history addon-info id", "history addon-info last", "~]# yum history addon-info 4 Loaded plugins: langpacks, product-id, subscription-manager Transaction ID: 4 Available additional history information: config-main config-repos saved_tx history addon-info", "history addon-info id information", "history undo id", "history redo id", "-q history addon-info id saved_tx > file_name", "load-transaction file_name", "history new", "[main] cachedir=/var/cache/yum/USDbasearch/USDreleasever keepcache=0 debuglevel=2 logfile=/var/log/yum.log exactarch=1 obsoletes=1 gpgcheck=1 plugins=1 installonly_limit=3 PUT YOUR REPOS HERE OR IN separate files named file.repo in /etc/yum.repos.d", "[ repository ] name= repository_name baseurl= repository_url", "baseurl=http://path/to/repo/releases/USDreleasever/server/USDbasearch/os/", "# Red Hat Repositories Managed by (rhsm) subscription-manager # [red-hat-enterprise-linux-scalable-file-system-for-rhel-6-entitlement-rpms] name = Red Hat Enterprise Linux Scalable File System (for RHEL 6 Entitlement) (RPMs) baseurl = https://cdn.redhat.com/content/dist/rhel/entitlement-6/releases/USDreleasever/USDbasearch/scalablefilesystem/os enabled = 1 gpgcheck = 1 gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release sslverify = 1 sslcacert = /etc/rhsm/ca/redhat-uep.pem sslclientkey = /etc/pki/entitlement/key.pem sslclientcert = /etc/pki/entitlement/11300387955690106.pem [red-hat-enterprise-linux-scalable-file-system-for-rhel-6-entitlement-source-rpms] name = Red Hat Enterprise Linux Scalable File System (for RHEL 6 Entitlement) (Source RPMs) baseurl = https://cdn.redhat.com/content/dist/rhel/entitlement-6/releases/USDreleasever/USDbasearch/scalablefilesystem/source/SRPMS enabled = 0 gpgcheck = 1 gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release sslverify = 1 sslcacert = /etc/rhsm/ca/redhat-uep.pem sslclientkey = /etc/pki/entitlement/key.pem sslclientcert = /etc/pki/entitlement/11300387955690106.pem [red-hat-enterprise-linux-scalable-file-system-for-rhel-6-entitlement-debug-rpms] name = Red Hat Enterprise Linux Scalable File System (for RHEL 6 Entitlement) (Debug RPMs) baseurl = https://cdn.redhat.com/content/dist/rhel/entitlement-6/releases/USDreleasever/USDbasearch/scalablefilesystem/debug enabled = 0 gpgcheck = 1 gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release sslverify = 1 sslcacert = /etc/rhsm/ca/redhat-uep.pem sslclientkey = /etc/pki/entitlement/key.pem sslclientcert = /etc/pki/entitlement/11300387955690106.pem", "~]# echo \"Red Hat Enterprise Linux 7\" > /etc/yum/vars/osname", "name=USDosname USDreleasever", "yum-config-manager", "yum-config-manager section &hellip;", "yum-config-manager glob_expression &hellip;", "~]USD yum-config-manager main \\* Loaded plugins: langpacks, 
product-id, subscription-manager ================================== main =================================== [main] alwaysprompt = True assumeyes = False bandwith = 0 bugtracker_url = https://bugzilla.redhat.com/enter_bug.cgi?product=Red%20Hat%20Enterprise%20Linux%206&component=yum cache = 0 [output truncated]", "yum-config-manager --add-repo repository_url", "~]# yum-config-manager --add-repo http://www.example.com/example.repo Loaded plugins: langpacks, product-id, subscription-manager adding repo from: http://www.example.com/example.repo grabbing file http://www.example.com/example.repo to /etc/yum.repos.d/example.repo example.repo | 413 B 00:00 repo saved to /etc/yum.repos.d/example.repo", "yum-config-manager --enable repository &hellip;", "yum-config-manager --enable glob_expression &hellip;", "~]# yum-config-manager --enable example\\* Loaded plugins: langpacks, product-id, subscription-manager ============================== repo: example ============================== [example] bandwidth = 0 base_persistdir = /var/lib/yum/repos/x86_64/7Server baseurl = http://www.example.com/repo/7Server/x86_64/ cache = 0 cachedir = /var/cache/yum/x86_64/7Server/example [output truncated]", "~]# yum-config-manager --enable \\* Loaded plugins: langpacks, product-id, subscription-manager ============================== repo: example ============================== [example] bandwidth = 0 base_persistdir = /var/lib/yum/repos/x86_64/7Server baseurl = http://www.example.com/repo/7Server/x86_64/ cache = 0 cachedir = /var/cache/yum/x86_64/7Server/example [output truncated]", "yum-config-manager --disable repository &hellip;", "yum-config-manager --disable glob_expression &hellip;", "~]# yum-config-manager --disable \\* Loaded plugins: langpacks, product-id, subscription-manager ============================== repo: example ============================== [example] bandwidth = 0 base_persistdir = /var/lib/yum/repos/x86_64/7Server baseurl = http://www.example.com/repo/7Server/x86_64/ cache = 0 cachedir = /var/cache/yum/x86_64/7Server/example [output truncated]", "yum install createrepo", "cp /your/packages/*.rpm /tmp/local_repo/", "createrepo /tmp/local_repo/", "cp /your/packages/*.rpm /tmp/local_repo/", "createrepo --update /tmp/local_repo/", "clean expire-cache", "~]# yum info yum Loaded plugins: langpacks, product-id, subscription-manager [output truncated]", "plugins=1", "[main] enabled=1", "~]# yum update --disableplugin=aliases", "~]# yum update --disableplugin=aliases,lang*", "~]# yum install yum-plugin-aliases", "apply_updates = yes", "emit_via = email", "reposdir= /path/to/new/reposdir", "random_sleep = 0", "yum-cron /etc/yum/yum-cron.conf yum-cron /etc/yum/yum-cron-hourly.conf", "debuglevel = -4", "#!/bin/sh clean all", "chmod +x /etc/cron.daily/ script-name.sh" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system_administrators_guide/ch-yum
Chapter 28. HTTP Sink
Chapter 28. HTTP Sink Forwards an event to an HTTP endpoint 28.1. Configuration Options The following table summarizes the configuration options available for the http-sink Kamelet: Property Name Description Type Default Example url * URL The URL to send data to string "https://my-service/path" method Method The HTTP method to use string "POST" Note Fields marked with an asterisk (*) are mandatory. 28.2. Dependencies At runtime, the http-sink Kamelet relies upon the presence of the following dependencies: camel:http camel:kamelet camel:core 28.3. Usage This section describes how you can use the http-sink . 28.3.1. Knative Sink You can use the http-sink Kamelet as a Knative sink by binding it to a Knative object. http-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: http-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: http-sink properties: url: "https://my-service/path" 28.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 28.3.1.2. Procedure for using the cluster CLI Save the http-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f http-sink-binding.yaml 28.3.1.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind channel:mychannel http-sink -p "sink.url=https://my-service/path" This command creates the KameletBinding in the current namespace on the cluster. 28.3.2. Kafka Sink You can use the http-sink Kamelet as a Kafka sink by binding it to a Kafka topic. http-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: http-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: http-sink properties: url: "https://my-service/path" 28.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 28.3.2.2. Procedure for using the cluster CLI Save the http-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f http-sink-binding.yaml 28.3.2.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic http-sink -p "sink.url=https://my-service/path" This command creates the KameletBinding in the current namespace on the cluster. 28.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/http-sink.kamelet.yaml
[ "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: http-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: http-sink properties: url: \"https://my-service/path\"", "apply -f http-sink-binding.yaml", "kamel bind channel:mychannel http-sink -p \"sink.url=https://my-service/path\"", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: http-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: http-sink properties: url: \"https://my-service/path\"", "apply -f http-sink-binding.yaml", "kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic http-sink -p \"sink.url=https://my-service/path\"" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/kamelets_reference/http-sink
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Use the Create Issue form in Red Hat Jira to provide your feedback. The Jira issue is created in the Red Hat Satellite Jira project, where you can track its progress. Prerequisites Ensure you have registered a Red Hat account . Procedure Click the following link: Create Issue . If Jira displays a login error, log in and proceed after you are redirected to the form. Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create .
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/installing_satellite_server_in_a_disconnected_network_environment/providing-feedback-on-red-hat-documentation_satellite
Appendix B. API Permissions Matrix
Appendix B. API Permissions Matrix The Red Hat Satellite 6 API supports numerous actions, many of which require specific permissions. The following table lists the API permission names, the actions associated with those permissions, and the associated resource type. Table B.1. API Permissions Matrix Permission Name Actions Resource Type view_activation_keys katello/activation_keys/all katello/activation_keys/index katello/activation_keys/auto_complete_search katello/api/v2/activation_keys/index katello/api/v2/activation_keys/show katello/api/v2/activation_keys/available_host_collections katello/api/v2/activation_keys/available_releases katello/api/v2/activation_keys/product_content Katello::ActivationKey create_activation_keys katello/api/v2/activation_keys/create katello/api/v2/activation_keys/copy Katello::ActivationKey edit_activation_keys katello/api/v2/activation_keys/update katello/api/v2/activation_keys/content_override katello/api/v2/activation_keys/add_subscriptions katello/api/v2/activation_keys/remove_subscriptions Katello::ActivationKey destroy_activation_keys katello/api/v2/activation_keys/destroy Katello::ActivationKey logout users/logout view_architectures architectures/index architectures/show architectures/auto_complete_search api/v2/architectures/index api/v2/architectures/show create_architectures architectures/new architectures/create api/v2/architectures/create edit_architectures architectures/edit architectures/update api/v2/architectures/update destroy_architectures architectures/destroy api/v2/architectures/destroy view_audit_logs audits/index audits/show audits/auto_complete_search api/v2/audits/index api/v2/audits/show view_authenticators auth_source_ldaps/index auth_source_ldaps/show api/v2/auth_source_ldaps/index api/v2/auth_source_ldaps/show create_authenticators auth_source_ldaps/new auth_source_ldaps/create api/v2/auth_source_ldaps/create edit_authenticators auth_source_ldaps/edit auth_source_ldaps/update api/v2/auth_source_ldaps/update destroy_authenticators auth_source_ldaps/destroy api/v2/auth_source_ldaps/destroy view_bookmarks bookmarks/index bookmarks/show api/v2/bookmarks/index api/v2/bookmarks/show create_bookmarks bookmarks/new bookmarks/create api/v2/bookmarks/new api/v2/bookmarks/create edit_bookmarks bookmarks/edit bookmarks/update api/v2/bookmarks/edit api/v2/bookmarks/update destroy_bookmarks bookmarks/destroy api/v2/bookmarks/destroy download_bootdisk foreman_bootdisk/disks/generic foreman_bootdisk/disks/host foreman_bootdisk/disks/full_host foreman_bootdisk/disks/subnet foreman_bootdisk/disks/help foreman_bootdisk/api/v2/disks/generic foreman_bootdisk/api/v2/disks/host manage_capsule_content katello/api/v2/capsule_content/lifecycle_environments katello/api/v2/capsule_content/available_lifecycle_environments katello/api/v2/capsule_content/add_lifecycle_environment katello/api/v2/capsule_content/remove_lifecycle_environment katello/api/v2/capsule_content/sync katello/api/v2/capsule_content/sync_status katello/api/v2/capsule_content/cancel_sync SmartProxy view_capsule_content smart_proxies/pulp_storage smart_proxies/pulp_status smart_proxies/show_with_content SmartProxy view_compute_profiles compute_profiles/index compute_profiles/show compute_profiles/auto_complete_search api/v2/compute_profiles/index api/v2/compute_profiles/show create_compute_profiles compute_profiles/new compute_profiles/create api/v2/compute_profiles/create edit_compute_profiles compute_profiles/edit compute_profiles/update api/v2/compute_profiles/update 
destroy_compute_profiles compute_profiles/destroy api/v2/compute_profiles/destroy view_compute_resources compute_resources/index compute_resources/show compute_resources/auto_complete_search compute_resources/ping compute_resources/available_images api/v2/compute_resources/index api/v2/compute_resources/show api/v2/compute_resources/available_images api/v2/compute_resources/available_clusters api/v2/compute_resources/available_folders api/v2/compute_resources/available_flavors api/v2/compute_resources/available_networks api/v2/compute_resources/available_resource_pools api/v2/compute_resources/available_security_groups api/v2/compute_resources/available_storage_domains api/v2/compute_resources/available_zones api/v2/compute_resources/available_storage_pods create_compute_resources compute_resources/new compute_resources/create compute_resources/test_connection api/v2/compute_resources/create edit_compute_resources compute_resources/edit compute_resources/update compute_resources/test_connection compute_attributes/new compute_attributes/create compute_attributes/edit compute_attributes/update api/v2/compute_resources/update api/v2/compute_attributes/create api/v2/compute_attributes/update destroy_compute_resources compute_resources/destroy api/v2/compute_resources/destroy view_compute_resources_vms compute_resources_vms/index compute_resources_vms/show create_compute_resources_vms compute_resources_vms/new compute_resources_vms/create edit_compute_resources_vms compute_resources_vms/edit compute_resources_vms/update destroy_compute_resources_vms compute_resources_vms/destroy power_compute_resources_vms compute_resources_vms/power compute_resources_vms/pause console_compute_resources_vms compute_resources_vms/console view_config_groups config_groups/index config_groups/auto_complete_search api/v2/config_groups/index api/v2/config_groups/show create_config_groups config_groups/new config_groups/create api/v2/config_groups/create edit_config_groups config_groups/edit config_groups/update api/v2/config_groups/update destroy_config_groups config_groups/destroy api/v2/config_groups/destroy view_config_reports config_reports/index config_reports/show config_reports/auto_complete_search api/v2/config_reports/index api/v2/config_reports/show api/v2/config_reports/last destroy_config_reports config_reports/destroy api/v2/config_reports/destroy upload_config_reports api/v2/config_reports/create view_containers containers/index containers/show api/v2/containers/index api/v2/containers/show api/v2/containers/logs Container commit_containers containers/commit Container create_containers containers/steps/show containers/steps/update containers/new api/v2/containers/create api/v2/containers/power Container destroy_containers containers/destroy api/v2/containers/destroy Container power_compute_resources_vms containers/power api/v2/containers/create api/v2/containers/power ComputeResource view_content_views katello/api/v2/content_views/index katello/api/v2/content_views/show katello/api/v2/content_views/available_puppet_modules katello/api/v2/content_views/available_puppet_module_names katello/api/v2/content_view_filters/index katello/api/v2/content_view_filters/show katello/api/v2/content_view_filter_rules/index katello/api/v2/content_view_filter_rules/show katello/api/v2/content_view_histories/index katello/api/v2/content_view_puppet_modules/index katello/api/v2/content_view_puppet_modules/show katello/api/v2/content_view_versions/index katello/api/v2/content_view_versions/show 
katello/api/v2/package_groups/index katello/api/v2/package_groups/show katello/api/v2/errata/index katello/api/v2/errata/show katello/api/v2/puppet_modules/index katello/api/v2/puppet_modules/show katello/content_views/auto_complete katello/content_views/auto_complete_search katello/errata/short_details katello/errata/auto_complete katello/packages/details katello/packages/auto_complete katello/products/auto_complete katello/repositories/auto_complete_library katello/content_search/index katello/content_search/products katello/content_search/repos katello/content_search/packages katello/content_search/errata katello/content_search/puppet_modules katello/content_search/packages_items katello/content_search/errata_items katello/content_search/puppet_modules_items katello/content_search/view_packages katello/content_search/view_puppet_modules katello/content_search/repo_packages katello/content_search/repo_errata katello/content_search/repo_puppet_modules katello/content_search/repo_compare_errata katello/content_search/repo_compare_packages katello/content_search/repo_compare_puppet_modules katello/content_search/view_compare_errata katello/content_search/view_compare_packages katello/content_search/view_compare_puppet_modules katello/content_search/views Katello::ContentView create_content_views katello/api/v2/content_views/create katello/api/v2/content_views/copy Katello::ContentView edit_content_views katello/api/v2/content_views/update katello/api/v2/content_view_filters/create katello/api/v2/content_view_filters/update katello/api/v2/content_view_filters/destroy katello/api/v2/content_view_filter_rules/create katello/api/v2/content_view_filter_rules/update katello/api/v2/content_view_filter_rules/destroy katello/api/v2/content_view_puppet_modules/create katello/api/v2/content_view_puppet_modules/update katello/api/v2/content_view_puppet_modules/destroy Katello::ContentView destroy_content_views katello/api/v2/content_views/destroy katello/api/v2/content_views/remove katello/api/v2/content_view_versions/destroy Katello::ContentView publish_content_views katello/api/v2/content_views/publish katello/api/v2/content_view_versions/incremental_update Katello::ContentView promote_or_remove_content_views katello/api/v2/content_view_versions/promote katello/api/v2/content_views/remove_from_environment katello/api/v2/content_views/remove Katello::ContentView export_content_views katello/api/v2/content_view_versions/export Katello::ContentView access_dashboard dashboard/index dashboard/save_positions dashboard/reset_default dashboard/create dashboard/destroy api/v2/dashboard/index view_discovered_hosts discovered_hosts/index discovered_hosts/show discovered_hosts/auto_complete_search api/v2/discovered_hosts/show Host submit_discovered_hosts api/v2/discovered_hosts/facts api/v2/discovered_hosts/create Host auto_provision_discovered_hosts discovered_hosts/auto_provision discovered_hosts/auto_provision_all api/v2/discovered_hosts/auto_provision api/v2/discovered_hosts/auto_provision_all Host provision_discovered_hosts discovered_hosts/edit discovered_hosts/update api/v2/discovered_hosts/update Host edit_discovered_hosts discovered_hosts/update_multiple_location discovered_hosts/select_multiple_organization discovered_hosts/update_multiple_organization discovered_hosts/select_multiple_location discovered_hosts/refresh_facts discovered_hosts/reboot discovered_hosts/reboot_all api/v2/discovered_hosts/refresh_facts api/v2/discovered_hosts/reboot api/v2/discovered_hosts/reboot_all Host 
destroy_discovered_hosts discovered_hosts/destroy discovered_hosts/submit_multiple_destroy discovered_hosts/multiple_destroy api/v2/discovered_hosts/destroy Host view_discovery_rules discovery_rules/index discovery_rules/show discovery_rules/auto_complete_search api/v2/discovery_rules/index api/v2/discovery_rules/show DiscoveryRule create_discovery_rules discovery_rules/new discovery_rules/create api/v2/discovery_rules/create DiscoveryRule edit_discovery_rules discovery_rules/edit discovery_rules/update discovery_rules/enable discovery_rules/disable api/v2/discovery_rules/create api/v2/discovery_rules/update DiscoveryRule execute_discovery_rules discovery_rules/auto_provision discovery_rules/auto_provision_all api/v2/discovery_rules/auto_provision api/v2/discovery_rules/auto_provision_all DiscoveryRule destroy_discovery_rules discovery_rules/destroy api/v2/discovery_rules/destroy DiscoveryRule view_domains domains/index domains/show domains/auto_complete_search api/v2/domains/index api/v2/domains/show api/v2/parameters/index api/v2/parameters/show create_domains domains/new domains/create api/v2/domains/create edit_domains domains/edit domains/update api/v2/domains/update api/v2/parameters/create api/v2/parameters/update api/v2/parameters/destroy api/v2/parameters/reset destroy_domains domains/destroy api/v2/domains/destroy view_environments environments/index environments/show environments/auto_complete_search api/v2/environments/index api/v2/environments/show create_environments environments/new environments/create api/v2/environments/create edit_environments environments/edit environments/update api/v2/environments/update destroy_environments environments/destroy api/v2/environments/destroy import_environments environments/import_environments environments/obsolete_and_new api/v2/environments/import_puppetclasses api/v2/smart_proxies/import_puppetclasses view_external_usergroups external_usergroups/index external_usergroups/show api/v2/external_usergroups/index api/v2/external_usergroups/show create_external_usergroups external_usergroups/new external_usergroups/create api/v2/external_usergroups/new api/v2/external_usergroups/create edit_external_usergroups external_usergroups/edit external_usergroups/update external_usergroups/refresh api/v2/external_usergroups/update api/v2/external_usergroups/refresh destroy_external_usergroups external_usergroups/destroy api/v2/external_usergroups/destroy view_external_variables lookup_keys/index lookup_keys/show lookup_keys/auto_complete_search puppetclass_lookup_keys/index puppetclass_lookup_keys/show puppetclass_lookup_keys/auto_complete_search variable_lookup_keys/index variable_lookup_keys/show variable_lookup_keys/auto_complete_search lookup_values/index api/v2/smart_variables/index api/v2/smart_variables/show api/v2/smart_class_parameters/index api/v2/smart_class_parameters/show api/v2/override_values/index api/v2/override_values/show create_external_variables lookup_keys/new lookup_keys/create puppetclass_lookup_keys/new puppetclass_lookup_keys/create variable_lookup_keys/new variable_lookup_keys/create lookup_values/create api/v2/smart_variables/create api/v2/smart_class_parameters/create api/v2/override_values/create edit_external_variables lookup_keys/edit lookup_keys/update puppetclass_lookup_keys/edit puppetclass_lookup_keys/update variable_lookup_keys/edit variable_lookup_keys/update lookup_values/create lookup_values/update lookup_values/destroy api/v2/smart_variables/update api/v2/smart_class_parameters/update 
api/v2/override_values/create api/v2/override_values/update api/v2/override_values/destroy destroy_external_variables lookup_keys/destroy puppetclass_lookup_keys/destroy variable_lookup_keys/destroy lookup_values/destroy api/v2/smart_variables/destroy api/v2/smart_class_parameters/destroy api/v2/override_values/create api/v2/override_values/update api/v2/override_values/destroy view_facts facts/index facts/show fact_values/index fact_values/show fact_values/auto_complete_search api/v2/fact_values/index api/v2/fact_values/show upload_facts api/v2/hosts/facts view_filters filters/index filters/auto_complete_search api/v2/filters/index api/v2/filters/show create_filters filters/new filters/create api/v2/filters/create edit_filters filters/edit filters/update permissions/index api/v2/filters/update api/v2/permissions/index api/v2/permissions/show destroy_filters filters/destroy api/v2/filters/destroy view_arf_reports arf_reports/index arf_reports/show arf_reports/parse_html arf_reports/show_html arf_reports/parse_bzip arf_reports/auto_complete_search api/v2/compliance/arf_reports/index api/v2/compliance/arf_reports/show compliance_hosts/show destroy_arf_reports arf_reports/destroy arf_reports/delete_multiple arf_reports/submit_delete_multiple api/v2/compliance/arf_reports/destroy create_arf_reports api/v2/compliance/arf_reports/create view_policies policies/index policies/show policies/parse policies/auto_complete_search policy_dashboard/index compliance_dashboard/index api/v2/compliance/policies/index api/v2/compliance/policies/show api/v2/compliance/policies/content ForemanOpenscap::Policy edit_policies policies/edit policies/update policies/scap_content_selected api/v2/compliance/policies/update ForemanOpenscap::Policy create_policies policies/new policies/create api/v2/compliance/policies/create ForemanOpenscap::Policy destroy_policies policies/destroy api/v2/compliance/policies/destroy ForemanOpenscap::Policy assign_policies policies/select_multiple_hosts policies/update_multiple_hosts policies/disassociate_multiple_hosts policies/remove_policy_from_multiple_hosts ForemanOpenscap::Policy view_scap_contents scap_contents/index scap_contents/show scap_contents/auto_complete_search api/v2/compliance/scap_contents/index api/v2/compliance/scap_contents/show ForemanOpenscap::ScapContent edit_scap_contents scap_contents/edit scap_contents/update api/v2/compliance/scap_contents/update ForemanOpenscap::ScapContent create_scap_contents scap_contents/new scap_contents/create api/v2/compliance/scap_contents/create ForemanOpenscap::ScapContent destroy_scap_contents scap_contents/destroy api/v2/compliance/scap_contents/destroy ForemanOpenscap::ScapContent edit_hostgroups hostgroups/openscap_proxy_changed Host view_job_templates job_templates/index job_templates/show job_templates/revision job_templates/auto_complete_search job_templates/auto_complete_job_category job_templates/preview job_templates/export api/v2/job_templates/index api/v2/job_templates/show api/v2/job_templates/revision api/v2/job_templates/export api/v2/template_inputs/index api/v2/template_inputs/show api/v2/foreign_input_sets/index api/v2/foreign_input_sets/show JobTemplate create_job_templates job_templates/new job_templates/create job_templates/clone_template job_templates/import api/v2/job_templates/create api/v2/job_templates/clone
api/v2/job_templates/import JobTemplate edit_job_templates job_templates/edit job_templates/update api/v2/job_templates/update api/v2/template_inputs/create api/v2/template_inputs/update api/v2/template_inputs/destroy api/v2/foreign_input_sets/create api/v2/foreign_input_sets/update api/v2/foreign_input_sets/destroy edit_remote_execution_features remote_execution_features/index remote_execution_features/show remote_execution_features/update api/v2/remote_execution_features/index api/v2/remote_execution_features/show api/v2/remote_execution_features/update RemoteExecutionFeature destroy_job_templates job_templates/destroy api/v2/job_templates/destroy JobTemplate lock_job_templates job_templates/lock job_templates/unlock JobTemplate create_job_invocations job_invocations/new job_invocations/create job_invocations/refresh job_invocations/rerun job_invocations/preview_hosts api/v2/job_invocations/create JobInvocation view_job_invocations job_invocations/index job_invocations/show template_invocations/show api/v2/job_invocations/index api/v2/job_invocations/show api/v2/job_invocations/output JobInvocation execute_template_invocation TemplateInvocation filter_autocompletion_for_template_invocation template_invocations/auto_complete_search job_invocations/show template_invocations/index TemplateInvocation view_foreman_tasks foreman_tasks/tasks/auto_complete_search foreman_tasks/tasks/sub_tasks foreman_tasks/tasks/index foreman_tasks/tasks/show foreman_tasks/api/tasks/bulk_search foreman_tasks/api/tasks/show foreman_tasks/api/tasks/index foreman_tasks/api/tasks/summary ForemanTasks::Task edit_foreman_tasks foreman_tasks/tasks/resume foreman_tasks/tasks/unlock foreman_tasks/tasks/force_unlock foreman_tasks/tasks/cancel_step foreman_tasks/api/tasks/bulk_resume ForemanTasks::Task create_recurring_logics ForemanTasks::RecurringLogic view_recurring_logics foreman_tasks/recurring_logics/index foreman_tasks/recurring_logics/show foreman_tasks/api/recurring_logics/index foreman_tasks/api/recurring_logics/show ForemanTasks::RecurringLogic edit_recurring_logics foreman_tasks/recurring_logics/cancel foreman_tasks/api/recurring_logics/cancel ForemanTasks::RecurringLogic view_globals common_parameters/index common_parameters/show common_parameters/auto_complete_search api/v2/common_parameters/index api/v2/common_parameters/show create_globals common_parameters/new common_parameters/create api/v2/common_parameters/create edit_globals common_parameters/edit common_parameters/update api/v2/common_parameters/update destroy_globals common_parameters/destroy api/v2/common_parameters/destroy view_gpg_keys katello/gpg_keys/all katello/gpg_keys/index katello/gpg_keys/auto_complete_search katello/api/v2/gpg_keys/index katello/api/v2/gpg_keys/show Katello::GpgKey create_gpg_keys katello/api/v2/gpg_keys/create Katello::GpgKey edit_gpg_keys katello/api/v2/gpg_keys/update katello/api/v2/gpg_keys/content Katello::GpgKey destroy_gpg_keys katello/api/v2/gpg_keys/destroy Katello::GpgKey view_host_collections katello/api/v2/host_collections/index katello/api/v2/host_collections/show katello/host_collections/auto_complete_search Katello::HostCollection create_host_collections katello/api/v2/host_collections/create katello/api/v2/host_collections/copy
Katello::HostCollection edit_host_collections katello/api/v2/host_collections/update katello/api/v2/host_collections/add_systems katello/api/v2/host_collections/remove_systems Katello::HostCollection destroy_host_collections katello/api/v2/host_collections/destroy Katello::HostCollection edit_classes host_editing/edit_classes api/v2/host_classes/index api/v2/host_classes/create api/v2/host_classes/destroy create_params host_editing/create_params api/v2/parameters/create edit_params host_editing/edit_params api/v2/parameters/update destroy_params host_editing/destroy_params api/v2/parameters/destroy api/v2/parameters/reset view_hostgroups hostgroups/index hostgroups/show hostgroups/auto_complete_search api/v2/hostgroups/index api/v2/hostgroups/show create_hostgroups hostgroups/new hostgroups/create hostgroups/clone hostgroups/nest hostgroups/process_hostgroup hostgroups/architecture_selected hostgroups/domain_selected hostgroups/environment_selected hostgroups/medium_selected hostgroups/os_selected hostgroups/use_image_selected hostgroups/process_hostgroup hostgroups/puppetclass_parameters host/process_hostgroup puppetclasses/parameters api/v2/hostgroups/create api/v2/hostgroups/clone edit_hostgroups hostgroups/edit hostgroups/update hostgroups/architecture_selected hostgroups/process_hostgroup hostgroups/architecture_selected hostgroups/domain_selected hostgroups/environment_selected hostgroups/medium_selected hostgroups/os_selected hostgroups/use_image_selected hostgroups/process_hostgroup hostgroups/puppetclass_parameters host/process_hostgroup puppetclasses/parameters api/v2/hostgroups/update api/v2/parameters/create api/v2/parameters/update api/v2/parameters/destroy api/v2/parameters/reset api/v2/hostgroup_classes/index api/v2/hostgroup_classes/create api/v2/hostgroup_classes/destroy destroy_hostgroups hostgroups/destroy api/v2/hostgroups/destroy view_hosts hosts/index hosts/show hosts/errors hosts/active hosts/out_of_sync hosts/disabled hosts/pending hosts/vm hosts/externalNodes hosts/pxe_config hosts/storeconfig_klasses hosts/auto_complete_search hosts/bmc hosts/runtime hosts/resources hosts/templates hosts/overview hosts/nics dashboard/OutOfSync dashboard/errors dashboard/active unattended/host_template unattended/hostgroup_template api/v2/hosts/index api/v2/hosts/show api/v2/hosts/status/configuration api/v2/hosts/get_status api/v2/hosts/vm_compute_attributes api/v2/hosts/template api/v2/interfaces/index api/v2/interfaces/show locations/mismatches organizations/mismatches hosts/puppet_environment_for_content_view katello/api/v2/host_autocomplete/auto_complete_search katello/api/v2/host_errata/index katello/api/v2/host_errata/show katello/api/v2/host_errata/auto_complete_search katello/api/v2/host_subscriptions/index katello/api/v2/host_subscriptions/events katello/api/v2/host_subscriptions/product_content katello/api/v2/hosts/applicable_errata katello/api/v2/hosts/installable_errata katello/api/v2/hosts/bulk/available_incremental_updates katello/api/v2/host_packages/index create_hosts hosts/new hosts/create hosts/clone hosts/architecture_selected hosts/compute_resource_selected hosts/domain_selected hosts/environment_selected hosts/hostgroup_or_environment_selected hosts/medium_selected hosts/os_selected hosts/use_image_selected hosts/process_hostgroup hosts/process_taxonomy hosts/current_parameters hosts/puppetclass_parameters hosts/template_used hosts/interfaces compute_resources/cluster_selected compute_resources/template_selected compute_resources/provider_selected 
compute_resources/resource_pools puppetclasses/parameters subnets/freeip interfaces/new api/v2/hosts/create api/v2/interfaces/create api/v2/tasks/index edit_hosts hosts/openscap_proxy_changed hosts/edit hosts/update hosts/multiple_actions hosts/reset_multiple hosts/submit_multiple_enable hosts/select_multiple_hostgroup hosts/select_multiple_environment hosts/submit_multiple_disable hosts/multiple_parameters hosts/multiple_disable hosts/multiple_enable hosts/update_multiple_environment hosts/update_multiple_hostgroup hosts/update_multiple_parameters hosts/toggle_manage hosts/select_multiple_organization hosts/update_multiple_organization hosts/disassociate hosts/multiple_disassociate hosts/update_multiple_disassociate hosts/select_multiple_owner hosts/update_multiple_owner hosts/select_multiple_power_state hosts/update_multiple_power_state hosts/select_multiple_puppet_proxy hosts/update_multiple_puppet_proxy hosts/select_multiple_puppet_ca_proxy hosts/update_multiple_puppet_ca_proxy hosts/select_multiple_location hosts/update_multiple_location hosts/architecture_selected hosts/compute_resource_selected hosts/domain_selected hosts/environment_selected hosts/hostgroup_or_environment_selected hosts/medium_selected hosts/os_selected hosts/use_image_selected hosts/process_hostgroup hosts/process_taxonomy hosts/current_parameters hosts/puppetclass_parameters hosts/template_used hosts/interfaces compute_resources/associate compute_resources/[:cluster_selected, :template_selected, :provider_selected, :resource_pools] compute_resources_vms/associate puppetclasses/parameters subnets/freeip interfaces/new api/v2/hosts/update api/v2/hosts/disassociate api/v2/interfaces/create api/v2/interfaces/update api/v2/interfaces/destroy api/v2/compute_resources/associate api/v2/hosts/host_collections katello/api/v2/host_errata/apply katello/api/v2/host_packages/install katello/api/v2/host_packages/upgrade katello/api/v2/host_packages/upgrade_all katello/api/v2/host_packages/remove katello/api/v2/host_subscriptions/auto_attach katello/api/v2/host_subscriptions/add_subscriptions katello/api/v2/host_subscriptions/remove_subscriptions katello/api/v2/host_subscriptions/content_override katello/api/v2/hosts/bulk/add_host_collections katello/api/v2/hosts/bulk/remove_host_collections katello/api/v2/hosts/bulk/install_content katello/api/v2/hosts/bulk/update_content katello/api/v2/hosts/bulk/remove_content katello/api/v2/hosts/bulk/environment_content_view destroy_hosts hosts/destroy hosts/multiple_actions hosts/reset_multiple hosts/multiple_destroy hosts/submit_multiple_destroy api/v2/hosts/destroy api/v2/interfaces/destroy katello/api/v2/hosts/bulk/destroy build_hosts hosts/setBuild hosts/cancelBuild hosts/multiple_build hosts/submit_multiple_build hosts/review_before_build hosts/rebuild_config hosts/submit_rebuild_config tasks/show api/v2/tasks/index api/v2/hosts/rebuild_config power_hosts hosts/power api/v2/hosts/power console_hosts hosts/console ipmi_boot hosts/ipmi_boot api/v2/hosts/boot puppetrun_hosts hosts/puppetrun hosts/multiple_puppetrun hosts/update_multiple_puppetrun api/v2/hosts/puppetrun search_repository_image_search image_search/auto_complete_repository_name image_search/auto_complete_image_tag image_search/search_repository Docker/ImageSearch view_images images/index images/show images/auto_complete_search api/v2/images/index api/v2/images/show create_images images/new images/create api/v2/images/create edit_images images/edit images/update api/v2/images/update destroy_images images/destroy 
api/v2/images/destroy view_lifecycle_environments katello/api/v2/environments/index katello/api/v2/environments/show katello/api/v2/environments/paths katello/api/v2/environments/repositories katello/api/rhsm/candlepin_proxies/rhsm_index katello/environments/auto_complete_search Katello::KTEnvironment create_lifecycle_environments katello/api/v2/environments/create Katello::KTEnvironment edit_lifecycle_environments katello/api/v2/environments/update Katello::KTEnvironment destroy_lifecycle_environments katello/api/v2/environments/destroy Katello::KTEnvironment promote_or_remove_content_views_to_environments Katello::KTEnvironment view_locations locations/index locations/show locations/auto_complete_search api/v2/locations/index api/v2/locations/show create_locations locations/new locations/create locations/clone_taxonomy locations/step2 locations/nest api/v2/locations/create edit_locations locations/edit locations/update locations/import_mismatches locations/parent_taxonomy_selected api/v2/locations/update destroy_locations locations/destroy api/v2/locations/destroy assign_locations locations/assign_all_hosts locations/assign_hosts locations/assign_selected_hosts view_mail_notifications mail_notifications/index mail_notifications/auto_complete_search mail_notifications/show api/v2/mail_notifications/index api/v2/mail_notifications/show view_media media/index media/show media/auto_complete_search api/v2/media/index api/v2/media/show create_media media/new media/create api/v2/media/create edit_media media/edit media/update api/v2/media/update destroy_media media/destroy api/v2/media/destroy view_models models/index models/show models/auto_complete_search api/v2/models/index api/v2/models/show create_models models/new models/create api/v2/models/create edit_models models/edit models/update api/v2/models/update destroy_models models/destroy api/v2/models/destroy view_operatingsystems operatingsystems/index operatingsystems/show operatingsystems/bootfiles operatingsystems/auto_complete_search api/v2/operatingsystems/index api/v2/operatingsystems/show api/v2/operatingsystems/bootfiles api/v2/os_default_templates/index api/v2/os_default_templates/show create_operatingsystems operatingsystems/new operatingsystems/create api/v2/operatingsystems/create api/v2/os_default_templates/create edit_operatingsystems operatingsystems/edit operatingsystems/update api/v2/operatingsystems/update api/v2/parameters/create api/v2/parameters/update api/v2/parameters/destroy api/v2/parameters/reset api/v2/os_default_templates/create api/v2/os_default_templates/update api/v2/os_default_templates/destroy destroy_operatingsystems operatingsystems/destroy api/v2/operatingsystems/destroy api/v2/os_default_templates/create view_organizations organizations/index organizations/show organizations/auto_complete_search api/v2/organizations/index api/v2/organizations/show katello/api/v2/organizations/index katello/api/v2/organizations/show katello/api/v2/organizations/redhat_provider katello/api/v2/organizations/download_debug_certificate katello/api/v2/tasks/index create_organizations organizations/new organizations/create organizations/clone_taxonomy organizations/step2 organizations/nest api/v2/organizations/create katello/api/v2/organizations/create edit_organizations organizations/edit organizations/update organizations/import_mismatches organizations/parent_taxonomy_selected api/v2/organizations/update katello/api/v2/organizations/update katello/api/v2/organizations/autoattach_subscriptions destroy_organizations 
organizations/destroy api/v2/organizations/destroy katello/api/v2/organizations/destroy assign_organizations organizations/assign_all_hosts organizations/assign_hosts organizations/assign_selected_hosts view_ptables ptables/index ptables/show ptables/auto_complete_search ptables/revision ptables/preview api/v2/ptables/show api/v2/ptables/revision create_ptables ptables/new ptables/create ptables/clone_template api/v2/ptables/create api/v2/ptables/clone edit_ptables ptables/edit ptables/update api/v2/ptables/update destroy_ptables ptables/destroy api/v2/ptables/destroy lock_ptables ptables/lock ptables/unlock api/v2/ptables/lock api/v2/ptables/unlock view_plugins plugins/index api/v2/plugins/index view_products katello/products/auto_complete katello/products/auto_complete_search katello/api/v2/products/index katello/api/v2/products/show katello/api/v2/repositories/index katello/api/v2/repositories/show katello/api/v2/packages/index katello/api/v2/packages/show katello/api/v2/distributions/index katello/api/v2/distributions/show katello/api/v2/package_groups/index katello/api/v2/package_groups/show katello/api/v2/errata/index katello/api/v2/errata/show katello/api/v2/puppet_modules/index katello/api/v2/puppet_modules/show katello/errata/short_details katello/errata/auto_complete katello/packages/details katello/packages/auto_complete katello/puppet_modules/show katello/repositories/auto_complete_library katello/repositories/repository_types katello/content_search/index katello/content_search/products katello/content_search/repos katello/content_search/packages katello/content_search/errata katello/content_search/puppet_modules katello/content_search/packages_items katello/content_search/errata_items katello/content_search/puppet_modules_items katello/content_search/repo_packages katello/content_search/repo_errata katello/content_search/repo_puppet_modules katello/content_search/repo_compare_errata katello/content_search/repo_compare_packages katello/content_search/repo_compare_puppet_modules Katello::Product create_products katello/api/v2/products/create katello/api/v2/repositories/create Katello::Product edit_products katello/api/v2/products/update katello/api/v2/repositories/update katello/api/v2/repositories/remove_content katello/api/v2/repositories/import_uploads katello/api/v2/repositories/upload_content katello/api/v2/products_bulk_actions/update_sync_plans katello/api/v2/content_uploads/create katello/api/v2/content_uploads/update katello/api/v2/content_uploads/destroy katello/api/v2/organizations/repo_discover katello/api/v2/organizations/cancel_repo_discover Katello::Product destroy_products katello/api/v2/products/destroy katello/api/v2/repositories/destroy katello/api/v2/products_bulk_actions/destroy_products katello/api/v2/repositories_bulk_actions/destroy_repositories Katello::Product sync_products katello/api/v2/products/sync katello/api/v2/repositories/sync katello/api/v2/products_bulk_actions/sync_products katello/api/v2/repositories_bulk_actions/sync_repositories katello/api/v2/sync/index katello/api/v2/sync_plans/sync katello/sync_management/index katello/sync_management/sync_status katello/sync_management/product_status katello/sync_management/sync katello/sync_management/destroy Katello::Product export_products katello/api/v2/repositories/export Katello::Product view_provisioning_templates provisioning_templates/index provisioning_templates/show provisioning_templates/revision provisioning_templates/auto_complete_search provisioning_templates/preview 
api/v2/provisioning_templates/index api/v2/provisioning_templates/show api/v2/provisioning_templates/revision api/v2/template_combinations/index api/v2/template_combinations/show api/v2/template_kinds/index create_provisioning_templates provisioning_templates/new provisioning_templates/create provisioning_templates/clone_template api/v2/provisioning_templates/create api/v2/provisioning_templates/clone api/v2/template_combinations/create edit_provisioning_templates provisioning_templates/edit provisioning_templates/update api/v2/provisioning_templates/update api/v2/template_combinations/update destroy_provisioning_templates provisioning_templates/destroy api/v2/provisioning_templates/destroy api/v2/template_combinations/destroy deploy_provisioning_templates provisioning_templates/build_pxe_default api/v2/provisioning_templates/build_pxe_default lock_provisioning_templates provisioning_templates/lock provisioning_templates/unlock api/v2/provisioning_templates/lock api/v2/provisioning_templates/unlock user_logout users/logout my_account users/edit katello/api/v2/tasks/show api_status api/v2/home/status/ view_puppetclasses puppetclasses/index puppetclasses/show puppetclasses/auto_complete_search api/v2/puppetclasses/index api/v2/puppetclasses/show api/v2/smart_variables/index api/v2/smart_variables/show api/v2/smart_class_parameters/index api/v2/smart_class_parameters/show create_puppetclasses puppetclasses/new puppetclasses/create api/v2/puppetclasses/create edit_puppetclasses puppetclasses/edit puppetclasses/update puppetclasses/override api/v2/puppetclasses/update api/v2/smart_variables/create api/v2/smart_variables/update api/v2/smart_variables/destroy api/v2/smart_class_parameters/create api/v2/smart_class_parameters/update api/v2/smart_class_parameters/destroy destroy_puppetclasses puppetclasses/destroy api/v2/puppetclasses/destroy import_puppetclasses puppetclasses/import_environments puppetclasses/obsolete_and_new api/v2/environments/import_puppetclasses api/v2/smart_proxies/import_puppetclasses view_realms realms/index realms/show realms/auto_complete_search api/v2/realms/index api/v2/realms/show create_realms realms/new realms/create api/v2/realms/create edit_realms realms/edit realms/update api/v2/realms/update destroy_realms realms/destroy api/v2/realms/destroy view_search redhat_access/search/index view_cases redhat_access/cases/index redhat_access/cases/create attachments redhat_access/attachments/index redhat_access/attachments/create configuration redhat_access/configuration/index app_root redhat_access/redhat_access/index view_log_viewer redhat_access/logviewer/index logs redhat_access/logs/index rh_telemetry_api redhat_access/api/telemetry_api/proxy redhat_access/api/telemetry_api/connection_status rh_telemetry_view redhat_access/analytics_dashboard/index rh_telemetry_configurations redhat_access/telemetry_configurations/show redhat_access/telemetry_configurations/update view_roles roles/index roles/auto_complete_search api/v2/roles/index api/v2/roles/show create_roles roles/new roles/create roles/clone api/v2/roles/create edit_roles roles/edit roles/update api/v2/roles/update destroy_roles roles/destroy api/v2/roles/destroy access_settings home/settings view_smart_proxies smart_proxies/index smart_proxies/ping smart_proxies/auto_complete_search smart_proxies/version smart_proxies/show smart_proxies/plugin_version smart_proxies/tftp_server smart_proxies/puppet_environments smart_proxies/puppet_dashboard smart_proxies/log_pane smart_proxies/failed_modules
smart_proxies/errors_card smart_proxies/modules_card api/v2/smart_proxies/index api/v2/smart_proxies/show api/v2/smart_proxies/version api/v2/smart_proxies/log create_smart_proxies smart_proxies/new smart_proxies/create api/v2/smart_proxies/create edit_smart_proxies smart_proxies/edit smart_proxies/update smart_proxies/refresh smart_proxies/expire_logs api/v2/smart_proxies/update api/v2/smart_proxies/refresh destroy_smart_proxies smart_proxies/destroy api/v2/smart_proxies/destroy view_smart_proxies_autosign autosign/index autosign/show autosign/counts api/v2/autosign/index create_smart_proxies_autosign autosign/new autosign/create destroy_smart_proxies_autosign autosign/destroy view_smart_proxies_puppetca puppetca/index puppetca/counts puppetca/expiry edit_smart_proxies_puppetca puppetca/update destroy_smart_proxies_puppetca puppetca/destroy view_subnets subnets/index subnets/show subnets/auto_complete_search api/v2/subnets/index api/v2/subnets/show create_subnets subnets/new subnets/create api/v2/subnets/create edit_subnets subnets/edit subnets/update api/v2/subnets/update destroy_subnets subnets/destroy api/v2/subnets/destroy import_subnets subnets/import subnets/create_multiple view_subscriptions katello/api/v2/subscriptions/index katello/api/v2/subscriptions/show katello/api/v2/subscriptions/available katello/api/v2/subscriptions/manifest_history katello/api/v2/subscriptions/auto_complete_search katello/api/v2/repository_sets/index katello/api/v2/repository_sets/show katello/api/v2/repository_sets/available_repositories Organization attach_subscriptions katello/api/v2/subscriptions/create Organization unattach_subscriptions katello/api/v2/subscriptions/destroy Organization import_manifest katello/products/available_repositories katello/products/toggle_repository katello/providers/redhat_provider katello/providers/redhat_provider_tab katello/api/v2/subscriptions/upload katello/api/v2/subscriptions/refresh_manifest katello/api/v2/repository_sets/enable katello/api/v2/repository_sets/disable Organization delete_manifest katello/api/v2/subscriptions/delete_manifest Organization view_sync_plans katello/sync_plans/all katello/sync_plans/index katello/sync_plans/auto_complete_search katello/api/v2/sync_plans/index katello/api/v2/sync_plans/show katello/api/v2/sync_plans/add_products katello/api/v2/sync_plans/remove_products katello/api/v2/sync_plans/available_products katello/api/v2/products/index Katello::SyncPlan create_sync_plans katello/api/v2/sync_plans/create Katello::SyncPlan edit_sync_plans katello/api/v2/sync_plans/update Katello::SyncPlan destroy_sync_plans katello/api/v2/sync_plans/destroy Katello::SyncPlan my_organizations katello/api/rhsm/candlepin_proxies/list_owners view_usergroups usergroups/index usergroups/show usergroups/auto_complete_search api/v2/usergroups/index api/v2/usergroups/show create_usergroups usergroups/new usergroups/create api/v2/usergroups/create edit_usergroups usergroups/edit usergroups/update api/v2/usergroups/update destroy_usergroups usergroups/destroy api/v2/usergroups/destroy view_users users/index users/show users/auto_complete_search api/v2/users/index api/v2/users/show create_users users/new users/create users/auth_source_selected api/v2/users/create edit_users users/edit users/update users/auth_source_selected users/test_mail api/v2/users/update destroy_users users/destroy api/v2/users/destroy
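To see how a row of this matrix maps to an API call, consider the view_activation_keys permission, whose actions include katello/api/v2/activation_keys/index. A user whose role grants that permission can list activation keys through the API; without it, the same request is denied with an authorization error. The following request is an illustrative sketch only - the hostname, credentials, and organization ID are placeholders, not values from this guide:

$ curl --request GET --insecure --user admin:changeme \
  "https://satellite.example.com/katello/api/v2/activation_keys?organization_id=1" \
  | python -m json.tool

The create, edit, and destroy permissions in the same family govern the corresponding create (POST), update (PUT), and destroy (DELETE) actions on the same controller.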
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/api_guide/apipermsmatrix
Chapter 8. Network requirements
Chapter 8. Network requirements
OpenShift Data Foundation requires at least one network interface used for the cluster network to be capable of at least 10 gigabit network speeds. This section further covers different network considerations for planning deployments.
8.1. IPv6 support
Red Hat OpenShift Data Foundation version 4.12 introduced support for IPv6. IPv6 is supported in single stack only, and cannot be used simultaneously with IPv4. IPv6 is the default behavior in OpenShift Data Foundation when IPv6 is turned on in OpenShift Container Platform. Red Hat OpenShift Data Foundation version 4.14 introduces IPv6 auto detection and configuration. Clusters using IPv6 are automatically configured accordingly. OpenShift Container Platform dual stack with Red Hat OpenShift Data Foundation IPv4 is supported in version 4.13 and later. Dual stack with Red Hat OpenShift Data Foundation IPv6 is not supported.
8.2. Multi network plug-in (Multus) support
OpenShift Data Foundation supports the Multus multi-network plug-in on bare metal infrastructures to improve security and performance by isolating the different types of network traffic. By using Multus, one or more network interfaces on hosts can be reserved for the exclusive use of OpenShift Data Foundation. To use Multus, first run the Multus prerequisite validation tool. For instructions to use the tool, see OpenShift Data Foundation - Multus prerequisite validation tool . For more information about Multus networks, see Multiple networks .
8.2.1. Segregating storage traffic using Multus
By default, Red Hat OpenShift Data Foundation is configured to use the Red Hat OpenShift Software Defined Network (SDN). The default SDN carries the following types of traffic: pod-to-pod traffic; pod-to-storage traffic, known as public network traffic when the storage is OpenShift Data Foundation; and OpenShift Data Foundation internal replication and rebalancing traffic, known as cluster network traffic.
There are three ways to segregate OpenShift Data Foundation from the OpenShift default network:
Reserve a network interface on the host for the public network of OpenShift Data Foundation. Pod-to-storage and internal storage replication traffic coexist on a network that is isolated from pod-to-pod network traffic. Application pods have access to the maximum public network storage bandwidth when the OpenShift Data Foundation cluster is healthy. When the OpenShift Data Foundation cluster is recovering from failure, the application pods have reduced bandwidth due to ongoing replication and rebalancing traffic.
Reserve a network interface on the host for OpenShift Data Foundation's cluster network. Pod-to-pod and pod-to-storage traffic both continue to use OpenShift's default network. Pod-to-storage bandwidth is less affected by the health of the OpenShift Data Foundation cluster. Pod-to-pod and pod-to-storage OpenShift Data Foundation traffic might contend for network bandwidth in busy OpenShift clusters. The storage internal network often has an overabundance of unused bandwidth, reserved for use during failures.
Reserve two network interfaces on the host for OpenShift Data Foundation: one for the public network and one for the cluster network. Pod-to-pod, pod-to-storage, and storage internal traffic are all isolated, and none of the traffic types contend for resources. Service level agreements for all traffic types are easier to ensure.
During healthy runtime, more network bandwidth is reserved but unused across all three networks.
Dual network interface segregated configuration schematic example:
Triple network interface full segregated configuration schematic example:
8.2.2. When to use Multus
Use Multus for OpenShift Data Foundation when you need the following:
Improved latency - Multus with ODF always improves latency. Use host interfaces at near-host network speeds and bypass OpenShift's software-defined Pod network. You can also perform Linux per-interface tuning for each interface.
Improved bandwidth - Dedicated interfaces for OpenShift Data Foundation client data traffic and internal data traffic. These dedicated interfaces reserve full bandwidth.
Improved security - Multus isolates storage network traffic from application network traffic for added security. Bandwidth or performance might not be isolated when networks share an interface; however, you can use QoS or traffic shaping to prioritize bandwidth on shared interfaces.
8.2.3. Multus configuration
To use Multus, you must create network attachment definitions (NADs) before deploying the OpenShift Data Foundation cluster; the NADs are later attached to the cluster. For more information, see Creating network attachment definitions . To attach additional network interfaces to a pod, you must create configurations that define how the interfaces are attached. You specify each interface by using a NetworkAttachmentDefinition custom resource (CR). A Container Network Interface (CNI) configuration inside each of these CRs defines how that interface is created.
OpenShift Data Foundation supports two types of drivers, described below:
macvlan (recommended): Each connection gets a sub-interface of the parent interface with its own MAC address and is isolated from the host network. Uses less CPU and provides better throughput than Linux bridge or ipvlan . Almost always requires bridge mode. Offers near-host performance when the network interface card (NIC) supports virtual ports/virtual local area networks (VLANs) in hardware.
ipvlan: Each connection gets its own IP address and shares the same MAC address. L2 mode is analogous to macvlan bridge mode. L3 mode is analogous to a router existing on the parent interface; L3 is useful for Border Gateway Protocol (BGP), otherwise use macvlan for reduced CPU and better throughput. If the NIC does not support VLANs in hardware, performance might be better than macvlan .
OpenShift Data Foundation supports the following two types of IP address management (IPAM):
whereabouts: Uses OpenShift/Kubernetes leases to select unique IP addresses per Pod. Does not require a DHCP server to provide IPs for Pods.
DHCP: Does not require a range field. A network DHCP server can give out the same range to Multus Pods as well as any other hosts on the same network.
Caution: If there is a DHCP server, ensure the Multus-configured IPAM does not give out the same range, so that multiple MAC addresses on the network cannot have the same IP. A sample NetworkAttachmentDefinition combining these recommendations is sketched after the requirements below.
8.2.4. Requirements for Multus configuration
Prerequisites
The interface used for the public network must have the same interface name on each OpenShift storage and worker node, and the interfaces must all be connected to the same underlying network. The interface used for the cluster network must have the same interface name on each OpenShift storage node, and the interfaces must all be connected to the same underlying network.
Cluster network interfaces do not have to be present on the OpenShift worker nodes. Each network interface used for the public or cluster network must be capable of at least 10 gigabit network speeds. Each network requires a separate virtual local area network (VLAN) or subnet. See Creating Multus networks for the necessary steps to configure a Multus-based configuration on bare metal.
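The following is a minimal sketch of a NetworkAttachmentDefinition for the OpenShift Data Foundation public network, using the macvlan driver in bridge mode with whereabouts IPAM as recommended above. The resource name, namespace, parent interface (ens192), and address range are assumptions for illustration and must be adapted to your environment; see Creating Multus networks for the authoritative procedure.

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: public-net            # example name; not mandated by this guide
  namespace: openshift-storage
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "ens192",
      "mode": "bridge",
      "ipam": {
        "type": "whereabouts",
        "range": "192.168.20.0/24"
      }
    }'

A second definition with its own name, parent interface, and address range can be created in the same way for the cluster network.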
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/planning_your_deployment/network-requirements_rhodf
Technology preview
Technology preview The Streams for Apache Kafka Console is a technology preview. Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete; therefore, Red Hat does not recommend implementing any Technology Preview features in production environments. This Technology Preview feature provides early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process. For more information about the support scope, see Technology Preview Features Support Scope .
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/using_the_streams_for_apache_kafka_console/technology_preview
5.6. Controlling Traffic
5.6. Controlling Traffic
5.6.1. Predefined Services
Services can be added and removed using the graphical firewall-config tool, firewall-cmd , and firewall-offline-cmd . Alternatively, you can edit the XML files in the /etc/firewalld/services/ directory. If a service has not been added or changed by the user, then no corresponding XML file is found in /etc/firewalld/services/ . The files in the /usr/lib/firewalld/services/ directory can be used as templates if you want to add or change a service.
5.6.2. Disabling All Traffic in Case of Emergency using CLI
In an emergency situation, such as a system attack, it is possible to disable all network traffic and cut off the attacker. To immediately disable networking traffic, switch panic mode on: Switching off panic mode reverts the firewall to its permanent settings. To switch panic mode off: To see whether panic mode is switched on or off, use:
5.6.3. Controlling Traffic with Predefined Services using CLI
The most straightforward method to control traffic is to add a predefined service to firewalld . This opens all necessary ports and modifies other settings according to the service definition file . Check that the service is not already allowed: List all predefined services: Add the service to the allowed services: Make the new settings persistent:
5.6.4. Controlling Traffic with Predefined Services using GUI
To enable or disable a predefined or custom service, start the firewall-config tool and select the network zone whose services are to be configured. Select the Services tab and select the check box for each type of service you want to trust. Clear the check box to block a service. To edit a service, start the firewall-config tool and select Permanent from the menu labeled Configuration . Additional icons and menu buttons appear at the bottom of the Services window. Select the service you want to configure. The Ports , Protocols , and Source Port tabs enable adding, changing, and removing of ports, protocols, and source ports for the selected service. The Modules tab is for configuring Netfilter helper modules. The Destination tab enables limiting traffic to a particular destination address and Internet Protocol ( IPv4 or IPv6 ). Note: It is not possible to alter service settings in Runtime mode.
5.6.5. Adding New Services
Services can be added and removed using the graphical firewall-config tool, firewall-cmd , and firewall-offline-cmd . Alternatively, you can edit the XML files in /etc/firewalld/services/ . If a service has not been added or changed by the user, then no corresponding XML file is found in /etc/firewalld/services/ . The files in /usr/lib/firewalld/services/ can be used as templates if you want to add or change a service (an illustrative service file is sketched after the command listing below). To add a new service in a terminal, use firewall-cmd , or firewall-offline-cmd if firewalld is not active. Enter the following command to add a new and empty service: To add a new service using a local file, use the following command: You can change the service name with the additional --name= service-name option. As soon as service settings are changed, an updated copy of the service is placed into /etc/firewalld/services/ . As root , you can enter the following command to copy a service manually: firewalld loads files from /usr/lib/firewalld/services first. If files are placed in /etc/firewalld/services and they are valid, then they override the matching files from /usr/lib/firewalld/services .
The overridden files in /usr/lib/firewalld/services are used as soon as the matching files in /etc/firewalld/services have been removed, or if firewalld has been asked to load the defaults of the services. This applies to the permanent environment only; a reload is needed to apply these fallbacks in the runtime environment as well.
5.6.6. Controlling Ports using CLI
Ports are logical constructs that enable an operating system to receive and distinguish network traffic and forward it to the appropriate system services. They are usually represented by a daemon that listens on the port, that is, it waits for any traffic coming to this port. Normally, system services listen on standard ports that are reserved for them. The httpd daemon, for example, listens on port 80. However, system administrators can configure daemons to listen on different ports to enhance security or for other reasons.
Opening a Port
Through open ports, the system is accessible from the outside, which represents a security risk. Generally, keep ports closed and only open them if they are required for certain services. To get a list of open ports in the current zone: List all allowed ports: Add a port to the allowed ports to open it for incoming traffic: Make the new settings persistent: The port types are either tcp , udp , sctp , or dccp . The type must match the type of network communication.
Closing a Port
When an open port is no longer needed, close that port in firewalld . It is highly recommended to close all unnecessary ports as soon as they are no longer used, because leaving a port open represents a security risk. To close a port, remove it from the list of allowed ports: List all allowed ports: Remove the port from the allowed ports to close it for the incoming traffic: Make the new settings persistent:
5.6.7. Opening Ports using GUI
To permit traffic through the firewall to a certain port, start the firewall-config tool and select the network zone whose settings you want to change. Select the Ports tab and click the Add button on the right-hand side. The Port and Protocol window opens. Enter the port number or range of ports to permit. Select tcp or udp from the list.
5.6.8. Controlling Traffic with Protocols using GUI
To permit traffic through the firewall using a certain protocol, start the firewall-config tool and select the network zone whose settings you want to change. Select the Protocols tab and click the Add button on the right-hand side. The Protocol window opens. Either select a protocol from the list or select the Other Protocol check box and enter the protocol in the field.
5.6.9. Opening Source Ports using GUI
To permit traffic through the firewall from a certain source port, start the firewall-config tool and select the network zone whose settings you want to change. Select the Source Port tab and click the Add button on the right-hand side. The Source Port window opens. Enter the port number or range of ports to permit. Select tcp or udp from the list.
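As a worked illustration of the CLI workflows above (the service name and port number are examples only, not values mandated by this guide), allowing the https service and then opening and closing TCP port 8080 in the current zone might look like the following session:

~]# firewall-cmd --list-services
ssh dhcpv6-client
~]# firewall-cmd --add-service=https
success
~]# firewall-cmd --runtime-to-permanent
success
~]# firewall-cmd --add-port=8080/tcp
success
~]# firewall-cmd --list-ports
8080/tcp
~]# firewall-cmd --remove-port=8080/tcp
success
~]# firewall-cmd --runtime-to-permanent
success

Each command changes the runtime configuration immediately; it is the firewall-cmd --runtime-to-permanent step that preserves the change across reloads and reboots.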
[ "~]# firewall-cmd --panic-on", "~]# firewall-cmd --panic-off", "~]# firewall-cmd --query-panic", "~]# firewall-cmd --list-services ssh dhcpv6-client", "~]# firewall-cmd --get-services RH-Satellite-6 amanda-client amanda-k5-client bacula bacula-client bitcoin bitcoin-rpc bitcoin-testnet bitcoin-testnet-rpc ceph ceph-mon cfengine condor-collector ctdb dhcp dhcpv6 dhcpv6-client dns docker-registry [output truncated]", "~]# firewall-cmd --add-service= <service-name>", "~]# firewall-cmd --runtime-to-permanent", "~]USD firewall-cmd --new-service= service-name", "~]USD firewall-cmd --new-service-from-file= service-name .xml", "~]# cp /usr/lib/firewalld/services/ service-name .xml /etc/firewalld/services/ service-name .xml", "~]# firewall-cmd --list-ports", "~]# firewall-cmd --add-port= port-number / port-type", "~]# firewall-cmd --runtime-to-permanent", "~]# firewall-cmd --list-ports [WARNING] ==== This command will only give you a list of ports that have been opened as ports. You will not be able to see any open ports that have been opened as a service. Therefore, you should consider using the --list-all option instead of --list-ports. ====", "~]# firewall-cmd --remove-port= port-number / port-type", "~]# firewall-cmd --runtime-to-permanent" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/sec-controlling_traffic
Chapter 11. Configuring OIDC for Red Hat Quay
Chapter 11. Configuring OIDC for Red Hat Quay
Configuring OpenID Connect (OIDC) for Red Hat Quay can provide several benefits to your deployment. For example, OIDC allows users to authenticate to Red Hat Quay using their existing credentials from an OIDC provider, such as Red Hat Single Sign-On , Google, GitHub, Microsoft, or others. Other benefits of OIDC include centralized user management, enhanced security, and single sign-on (SSO). Overall, OIDC configuration can simplify user authentication and management, enhance security, and provide a seamless user experience for Red Hat Quay users.
The following procedures show you how to configure Microsoft Entra ID on a standalone deployment of Red Hat Quay, and how to configure Red Hat Single Sign-On on an Operator-based deployment of Red Hat Quay. These procedures are interchangeable depending on your deployment type. Note: By following these procedures, you will be able to add any OIDC provider to Red Hat Quay, regardless of which identity provider you choose to use.
11.1. Configuring Microsoft Entra ID OIDC on a standalone deployment of Red Hat Quay
By integrating Microsoft Entra ID authentication with Red Hat Quay, your organization can take advantage of the centralized user management and security features offered by Microsoft Entra ID. Some features include the ability to manage user access to Red Hat Quay repositories based on their Microsoft Entra ID roles and permissions, and the ability to enable multi-factor authentication and other security features provided by Microsoft Entra ID. Azure Active Directory (Microsoft Entra ID) authentication for Red Hat Quay allows users to authenticate and access Red Hat Quay using their Microsoft Entra ID credentials. Use the following procedure to configure Microsoft Entra ID by updating the Red Hat Quay config.yaml file directly.
Procedure
Using the following procedure, you can add any OIDC provider to Red Hat Quay, regardless of which identity provider is being added. If your system has a firewall in use, or a proxy enabled, you must whitelist all Azure API endpoints for each OAuth application that is created. Otherwise, the following error is returned: x509: certificate signed by unknown authority .
Use the following reference and update your config.yaml file with your desired OIDC provider's credentials:
AUTHENTICATION_TYPE: OIDC
# ...
AZURE_LOGIN_CONFIG: 1
  CLIENT_ID: <client_id> 2
  CLIENT_SECRET: <client_secret> 3
  OIDC_SERVER: <oidc_server_address> 4
  SERVICE_NAME: Microsoft Entra ID 5
  VERIFIED_EMAIL_CLAIM_NAME: <verified_email> 6
# ...
1 The parent key that holds the OIDC configuration settings. In this example, the parent key used is AZURE_LOGIN_CONFIG ; however, the string AZURE can be replaced with any arbitrary string based on your specific needs, for example ABC123 . However, the following strings are not accepted: GOOGLE , GITHUB . These strings are reserved for their respective identity platforms and require a specific config.yaml entry contingent upon which platform you are using.
2 The client ID of the application that is being registered with the identity provider.
3 The client secret of the application that is being registered with the identity provider.
4 The address of the OIDC server that is being used for authentication. In this example, you must use sts.windows.net as the issuer identifier. Using https://login.microsoftonline.com results in the following error: Could not create provider for AzureAD.
Error: oidc: issuer did not match the issuer returned by provider, expected "https://login.microsoftonline.com/73f2e714-xxxx-xxxx-xxxx-dffe1df8a5d5" got "https://sts.windows.net/73f2e714-xxxx-xxxx-xxxx-dffe1df8a5d5/" . 5 The name of the service that is being authenticated. 6 The name of the claim that is used to verify the email address of the user. Proper configuration of Microsoft Entra ID results in three redirects with the following format: https://QUAY_HOSTNAME/oauth2/<name_of_service>/callback https://QUAY_HOSTNAME/oauth2/<name_of_service>/callback/attach https://QUAY_HOSTNAME/oauth2/<name_of_service>/callback/cli Restart your Red Hat Quay deployment. 11.2. Configuring Red Hat Single Sign-On for Red Hat Quay Based on the Keycloak project, Red Hat Single Sign-On (RH-SSO) is an open source identity and access management (IAM) solution provided by Red Hat. RH-SSO allows organizations to manage user identities, secure applications, and enforce access control policies across their systems and applications. It also provides a unified authentication and authorization framework, which allows users to log in one time and gain access to multiple applications and resources without needing to re-authenticate. For more information, see Red Hat Single Sign-On . By configuring Red Hat Single Sign-On on Red Hat Quay, you can create a seamless authentication integration between Red Hat Quay and other application platforms like OpenShift Container Platform. 11.2.1. Configuring the Red Hat Single Sign-On Operator for use with the Red Hat Quay Operator Use the following procedure to configure Red Hat Single Sign-On for the Red Hat Quay Operator on OpenShift Container Platform. Prerequisites You have set up the Red Hat Single Sign-On Operator. For more information, see Red Hat Single Sign-On Operator . You have configured SSL/TLS for your Red Hat Quay on OpenShift Container Platform deployment and for Red Hat Single Sign-On. You have generated a single Certificate Authority (CA) and uploaded it to your Red Hat Single Sign-On Operator and to your Red Hat Quay configuration. Procedure Navigate to the Red Hat Single Sign-On Admin Console . On the OpenShift Container Platform Web Console , navigate to Network Route . Select the Red Hat Single Sign-On project from the drop-down list. Find the Red Hat Single Sign-On Admin Console in the Routes table. Select the Realm that you will use to configure Red Hat Quay. Click Clients under the Configure section of the navigation panel, and then click the Create button to add a new OIDC client for Red Hat Quay. Enter the following information. Client ID: quay-enterprise Client Protocol: openid-connect Root URL: https://<quay_endpoint>/ Click Save . This results in a redirect to the Clients setting panel. Navigate to Access Type and select Confidential . Navigate to Valid Redirect URIs . You must provide three redirect URIs. The value should be the fully qualified domain name of the Red Hat Quay registry appended with /oauth2/redhatsso/callback . For example: https://<quay_endpoint>/oauth2/redhatsso/callback https://<quay_endpoint>/oauth2/redhatsso/callback/attach https://<quay_endpoint>/oauth2/redhatsso/callback/cli Click Save and navigate to the new Credentials setting. Copy the value of the Secret. 11.2.1.1. Configuring the Red Hat Quay Operator to use Red Hat Single Sign-On Use the following procedure to configure Red Hat Single Sign-On with the Red Hat Quay Operator. Prerequisites You have set up the Red Hat Single Sign-On Operator. 
For more information, see Red Hat Single Sign-On Operator . You have configured SSL/TLS for your Red Hat Quay on OpenShift Container Platform deployment and for Red Hat Single Sign-On. You have generated a single Certificate Authority (CA) and uploaded it to your Red Hat Single Sign-On Operator and to your Red Hat Quay configuration. Procedure Edit your Red Hat Quay config.yaml file by navigating to Operators Installed Operators Red Hat Quay Quay Registry Config Bundle Secret . Then, click Actions Edit Secret . Alternatively, you can update the config.yaml file locally. Add the following information to your Red Hat Quay on OpenShift Container Platform config.yaml file: # ... RHSSO_LOGIN_CONFIG: 1 CLIENT_ID: <client_id> 2 CLIENT_SECRET: <client_secret> 3 OIDC_SERVER: <oidc_server_url> 4 SERVICE_NAME: <service_name> 5 SERVICE_ICON: <service_icon> 6 VERIFIED_EMAIL_CLAIM_NAME: <example_email_address> 7 PREFERRED_USERNAME_CLAIM_NAME: <preferred_username> 8 LOGIN_SCOPES: 9 - 'openid' # ... 1 The parent key that holds the OIDC configuration settings. In this example, the parent key used is RHSSO_LOGIN_CONFIG ; however, the string RHSSO can be replaced with any arbitrary string based on your specific needs, for example ABC123 . However, the following strings are not accepted: GOOGLE , GITHUB . These strings are reserved for their respective identity platforms and require a specific config.yaml entry contingent upon which platform you are using. 2 The client ID of the application that is being registered with the identity provider, for example, quay-enterprise . 3 The Client Secret from the Credentials tab of the quay-enterprise OIDC client settings. 4 The fully qualified domain name (FQDN) of the Red Hat Single Sign-On instance, appended with /auth/realms/ and the Realm name. You must include the forward slash at the end, for example, https://sso-redhat.example.com/auth/realms/<keycloak_realm_name>/ . 5 The name that is displayed on the Red Hat Quay login page, for example, Red Hat Single Sign-On . 6 Changes the icon on the login screen. For example, /static/img/RedHat.svg . 7 The name of the claim that is used to verify the email address of the user. 8 The name of the claim that is used to provide the preferred username of the user. 9 The scopes to send to the OIDC provider when performing the login flow, for example, openid . Restart your Red Hat Quay on OpenShift Container Platform deployment with Red Hat Single Sign-On enabled. 11.3. Team synchronization for Red Hat Quay OIDC deployments Administrators can leverage an OpenID Connect (OIDC) identity provider that supports group or team synchronization to apply repository permissions to sets of users in Red Hat Quay. This allows administrators to avoid having to manually create and sync group definitions between Red Hat Quay and the OIDC provider. 11.3.1. Enabling synchronization for Red Hat Quay OIDC deployments Use the following procedure to enable team synchronization when your Red Hat Quay deployment uses an OIDC authenticator. Important The following procedure does not use a specific OIDC provider. Instead, it provides a general outline of how best to approach team synchronization between an OIDC provider and Red Hat Quay. Any OIDC provider can be used to enable team synchronization; however, setup might vary depending on your provider. Procedure Update your config.yaml file with the following information: AUTHENTICATION_TYPE: OIDC # ... 
OIDC_LOGIN_CONFIG: CLIENT_ID: 1 CLIENT_SECRET: 2 OIDC_SERVER: 3 SERVICE_NAME: 4 PREFERRED_GROUP_CLAIM_NAME: 5 LOGIN_SCOPES: [ 'openid', '<example_scope>' ] 6 OIDC_DISABLE_USER_ENDPOINT: false 7 # ... FEATURE_TEAM_SYNCING: true 8 FEATURE_NONSUPERUSER_TEAM_SYNCING_SETUP: true 9 FEATURE_UI_V2: true # ... 1 Required. The registered OIDC client ID for this Red Hat Quay instance. 2 Required. The registered OIDC client secret for this Red Hat Quay instance. 3 Required. The address of the OIDC server that is being used for authentication. This URL should be such that a GET request to <OIDC_SERVER>/.well-known/openid-configuration returns the provider's configuration information. This configuration information is essential for the relying party (RP) to interact securely with the OpenID Connect provider and obtain necessary details for authentication and authorization processes. 4 Required. The name of the service that is being authenticated. 5 Required. The key name within the OIDC token payload that holds information about the user's group memberships. This field allows the authentication system to extract group membership information from the OIDC token so that it can be used with Red Hat Quay. 6 Required. Adds additional scopes that Red Hat Quay uses to communicate with the OIDC provider. Must include 'openid' . Additional scopes are optional. 7 Whether to allow or disable the /userinfo endpoint. If using Microsoft Entra ID, set this field to true . Defaults to false . 8 Required. Whether to allow for team membership to be synced from a backing group in the authentication engine. 9 Optional. If enabled, non-superusers can set up team synchronization. Restart your Red Hat Quay registry. 11.3.2. Setting up your Red Hat Quay deployment for team synchronization Log in to your Red Hat Quay registry via your OIDC provider. On the Red Hat Quay v2 UI dashboard, click Create Organization . Enter an Organization name, for example, test-org . Click the name of the Organization. In the navigation pane, click Teams and membership . Click Create new team and enter a name, for example, testteam . On the Create team pop-up: Optional. Add this team to a repository. Add a team member, for example, user1 , by typing in the user's account name. Add a robot account to this team. This page provides the option to create a robot account. Click Next . On the Review and Finish page, review the information that you have provided and click Review and Finish . To enable team synchronization for your Red Hat Quay OIDC deployment, click Enable Directory Sync on the Teams and membership page. You are prompted to enter the group Object ID if your OIDC authenticator is Microsoft Entra ID, or the group name if using a different provider. Note the message in the popup: Warning Please note that once team syncing is enabled, the membership of users who are already part of the team will be revoked. OIDC group will be the single source of truth. This is a non-reversible action. Team's user membership from within Quay will be read-only. Click Enable Sync . You are returned to the Teams and membership page. Note that users of this team are removed and are re-added upon logging back in. At this stage, only the robot account is still part of the team. A banner at the top of the page confirms that the team is synced: This team is synchronized with a group in OIDC and its user membership is therefore read-only. By clicking the Directory Synchronization Config accordion, the OIDC group that your deployment is synchronized with appears. 
Log out of your Red Hat Quay registry and continue on to the verification steps. Verification Use the following verification procedure to ensure that user1 appears as a member of the team. Log back in to your Red Hat Quay registry. Click Organizations test-org testteam Teams and memberships . user1 now appears as a team member for this team. Verification Use the following procedure to remove user1 from a group via your OIDC provider, and subsequently remove them from the team on Red Hat Quay. Navigate to your OIDC provider's administration console. Navigate to the Users page of your OIDC provider. The name of this page varies depending on your provider. Click the name of the user associated with Red Hat Quay, for example, user1 . Remove the user from the group in the configured identity provider. Remove, or unassign, the access permissions from the user. Log in to your Red Hat Quay registry. Click Organizations test-org testteam Teams and memberships . user1 has been removed from this team.
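Before restarting your registry after any of the preceding configurations, you can sanity-check the OIDC_SERVER value by fetching the provider's discovery document, as described in the OIDC_SERVER field note above. The following is a minimal sketch, assuming a hypothetical RH-SSO host sso.example.com and a realm named quay-realm; substitute the OIDC_SERVER value from your own config.yaml, and note that jq is optional:

# Minimal sketch: verify the OIDC discovery endpoint is reachable.
# The host and realm below are hypothetical placeholders.
OIDC_SERVER="https://sso.example.com/auth/realms/quay-realm/"

# A successful response containing an "issuer" field indicates the value is usable.
curl -sf "${OIDC_SERVER}.well-known/openid-configuration" | jq '.issuer'

If the command fails, recheck the trailing slash on OIDC_SERVER and any firewall or proxy rules between the Red Hat Quay host and the provider.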
[ "AUTHENTICATION_TYPE: OIDC AZURE_LOGIN_CONFIG: 1 CLIENT_ID: <client_id> 2 CLIENT_SECRET: <client_secret> 3 OIDC_SERVER: <oidc_server_address_> 4 SERVICE_NAME: Microsoft Entra ID 5 VERIFIED_EMAIL_CLAIM_NAME: <verified_email> 6", "RHSSO_LOGIN_CONFIG: 1 CLIENT_ID: <client_id> 2 CLIENT_SECRET: <client_secret> 3 OIDC_SERVER: <oidc_server_url> 4 SERVICE_NAME: <service_name> 5 SERVICE_ICON: <service_icon> 6 VERIFIED_EMAIL_CLAIM_NAME: <example_email_address> 7 PREFERRED_USERNAME_CLAIM_NAME: <preferred_username> 8 LOGIN_SCOPES: 9 - 'openid'", "AUTHENTICATION_TYPE: OIDC OIDC_LOGIN_CONFIG: CLIENT_ID: 1 CLIENT_SECRET: 2 OIDC_SERVER: 3 SERVICE_NAME: 4 PREFERRED_GROUP_CLAIM_NAME: 5 LOGIN_SCOPES: [ 'openid', '<example_scope>' ] 6 OIDC_DISABLE_USER_ENDPOINT: false 7 FEATURE_TEAM_SYNCING: true 8 FEATURE_NONSUPERUSER_TEAM_SYNCING_SETUP: true 9 FEATURE_UI_V2: true", "This team is synchronized with a group in OIDC and its user membership is therefore read-only." ]
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/manage_red_hat_quay/configuring-oidc-authentication
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation. The Jira issue will be created in the Red Hat OpenStack Platform Jira project, where you can track the progress of your feedback. Ensure that you are logged in to Jira. If you do not have a Jira account, create an account to submit feedback. Click the following link to open the Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/scaling_deployments_with_compute_cells/proc_providing-feedback-on-red-hat-documentation
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/deploying_openshift_data_foundation_on_any_platform/making-open-source-more-inclusive
Chapter 4. Installing Red Hat Ceph Storage using the Cockpit web interface
Chapter 4. Installing Red Hat Ceph Storage using the Cockpit web interface This chapter describes how to use the Cockpit web-based interface to install a Red Hat Ceph Storage cluster and other components, such as Metadata Servers, the Ceph client, or the Ceph Object Gateway. The process consists of installing the Cockpit Ceph Installer, logging into Cockpit, and configuring and starting the cluster install using different pages within the installer. Note The Cockpit Ceph Installer uses Ansible and the Ansible playbooks provided by the ceph-ansible RPM to perform the actual install. It is still possible to use these playbooks to install Ceph without Cockpit. That process is referred to in this chapter as a direct Ansible install , or using the Ansible playbooks directly . Important The Cockpit Ceph installer does not currently support IPv6 networking. If you require IPv6 networking, install Ceph using the Ansible playbooks directly . Note The dashboard web interface, used for administration and monitoring of Ceph, is installed by default by the Ansible playbooks in the ceph-ansible RPM, which Cockpit uses on the back end. Therefore, whether you use Ansible playbooks directly, or use Cockpit to install Ceph, the dashboard web interface will be installed as well. 4.1. Prerequisites Complete the general prerequisites required for direct Ansible Red Hat Ceph Storage installs. A recent version of Firefox or Chrome. If using multiple networks to segment intra-cluster traffic, client-to-cluster traffic, RADOS Gateway traffic, or iSCSI traffic, ensure the relevant networks are already configured on the hosts. For more information, see network considerations in the Hardware Guide and the section in this chapter on completing the Network page of the Cockpit Ceph Installer. Ensure the default port for the Cockpit web-based interface, 9090 , is accessible. 4.2. Installation requirements One node to act as the Ansible administration node. One node to provide the performance metrics and alerting platform. This may be colocated with the Ansible administration node. One or more nodes to form the Ceph cluster. The installer supports an all-in-one installation called Development/POC . In this mode all Ceph services can run from the same node, and data replication defaults to disk rather than host level protection. 4.3. Install and configure the Cockpit Ceph Installer Before you can use the Cockpit Ceph Installer to install a Red Hat Ceph Storage cluster, you must install the Cockpit Ceph Installer on the Ansible administration node. Prerequisites Root-level access to the Ansible administration node. The ansible user account for use with the Ansible application. Procedure Verify Cockpit is installed. Example: If you see similar output to the example above, skip to the step Verify Cockpit is running . If the output is package cockpit is not installed , continue to the step Install Cockpit . Optional: Install Cockpit. For Red Hat Enterprise Linux 8: For Red Hat Enterprise Linux 7: Verify Cockpit is running. If you see Active: active (listening) in the output, skip to the step Install the Cockpit plugin for Red Hat Ceph Storage . If instead you see Active: inactive (dead) , continue to the step Enable Cockpit . Optional: Enable Cockpit. Use the systemctl command to enable Cockpit: You will see a line like the following: Verify Cockpit is running: You will see a line like the following: Install the Cockpit Ceph Installer for Red Hat Ceph Storage. 
For Red Hat Enterprise Linux 8: For Red Hat Enterprise Linux 7: As the Ansible user, log in to the container catalog using sudo: Note By default, the Cockpit Ceph Installer uses the root user to install Ceph. To use the Ansible user created as a part of the prerequisites to install Ceph, run the rest of the commands in this procedure with sudo as the Ansible user. Red Hat Enterprise Linux 7 Example Red Hat Enterprise Linux 8 Example Verify registry.redhat.io is in the container registry search path. Open the /etc/containers/registries.conf file for editing: If registry.redhat.io is not included in the file, add it: As the Ansible user, start the ansible-runner-service using sudo. Example The last line of output includes the URL to the Cockpit Ceph Installer. In the example above, the URL is https://jb-ceph4-admin:9090/cockpit-ceph-installer . Take note of the URL printed in your environment. 4.4. Copy the Cockpit Ceph Installer SSH key to all nodes in the cluster The Cockpit Ceph Installer uses SSH to connect to and configure the nodes in the cluster. In order for it to do this automatically, the installer generates an SSH key pair so it can access the nodes without being prompted for a password. The SSH public key must be transferred to all nodes in the cluster. Prerequisites An Ansible user with sudo access has been created. The Cockpit Ceph Installer is installed and configured . Procedure Log in to the Ansible administration node as the Ansible user. Example: Copy the SSH public key to the first node: Example: Repeat this step for all nodes in the cluster. 4.5. Log in to Cockpit You can view the Cockpit Ceph Installer web interface by logging into Cockpit. Prerequisites The Cockpit Ceph Installer is installed and configured . You have the URL printed as a part of configuring the Cockpit Ceph Installer. Procedure Open the URL in a web browser. Enter the Ansible user name and its password. Click the radio button for Reuse my password for privileged tasks . Click Log In . Review the welcome page to understand how the installer works and the overall flow of the installation process. Click the Environment button at the bottom right corner of the web page after you have reviewed the information in the welcome page. 4.6. Complete the Environment page of the Cockpit Ceph Installer The Environment page allows you to configure overall aspects of the cluster, like what installation source to use and how to use Hard Disk Drives (HDDs) and Solid State Drives (SSDs) for storage. Prerequisites The Cockpit Ceph Installer is installed and configured . You have the URL printed as a part of configuring the Cockpit Ceph Installer. You have created a registry service account . Note In the dialogs to follow, there are tooltips to the right of some of the settings. To view them, hover the mouse cursor over the icon that looks like an i with a circle around it. Procedure Select the Installation Source . Choose Red Hat to use repositories from Red Hat Subscription Manager, or ISO to use a CD image downloaded from the Red Hat Customer Portal. If you choose Red Hat , Target Version will be set to RHCS 4 without any other options. If you choose ISO , Target Version will be set to the ISO image file. Important If you choose ISO, the image file must be in the /usr/share/ansible-runner-service/iso directory and its SELinux context must be set to container_file_t . Important The Community and Distribution options for Installation Source are not supported. Select the Cluster Type . 
The Production selection prohibits the install from proceeding if certain resource requirements like CPU number and memory size are not met. To allow the cluster installation to proceed even if the resource requirements are not met, select Development/POC . Important Do not use Development/POC mode to install a Ceph cluster that will be used in production. Set the Service Account Login and Service Account Token . If you do not have a Red Hat Registry Service Account, create one using the Registry Service Account webpage . Set Configure Firewall to ON to apply rules to firewalld to open ports for Ceph services. Use the OFF setting if you are not using firewalld . Currently, the Cockpit Ceph Installer only supports IPv4. If you require IPv6 support, discontinue use of the Cockpit Ceph Installer and proceed with installing Ceph using the Ansible scripts directly . Set OSD Type to BlueStore or FileStore . Important BlueStore is the default OSD type. Previously, Ceph used FileStore as the object store. This format is deprecated for new Red Hat Ceph Storage 4.0 installs because BlueStore offers more features and improved performance. It is still possible to use FileStore, but using it requires a support exception. For more information on BlueStore, see Ceph BlueStore in the Architecture Guide . Set Flash Configuration to Journal/Logs or OSD data . If you have Solid State Drives (SSDs), whether they use NVMe or a traditional SATA/SAS interface, you can choose to use them just for write journaling and logs while the actual data goes on Hard Disk Drives (HDDs), or you can use the SSDs for journaling, logs, and data, and not use HDDs for any Ceph OSD functions. Set Encryption to None or Encrypted . This refers to at-rest encryption of storage devices using the LUKS1 format. Set Installation type to Container or RPM . Traditionally, Red Hat Package Manager (RPM) was used to install software on Red Hat Enterprise Linux. Now, you can install Ceph using RPM or containers. Installing Ceph using containers can provide improved hardware utilization since services can be isolated and collocated. Review all the Environment settings and click the Hosts button at the bottom right corner of the webpage. 4.7. Complete the Hosts page of the Cockpit Ceph Installer The Hosts page allows you to inform the Cockpit Ceph Installer what hosts to install Ceph on, and what roles each host will be used for. As you add the hosts, the installer will check them for SSH and DNS connectivity. Prerequisites The Environment page of the Cockpit Ceph Installer has been completed. The Cockpit Ceph Installer SSH key has been copied to all nodes in the cluster . Procedure Click the Add Host(s) button. Enter the hostname for a Ceph OSD node, check the box for OSD , and click the Add button. The first Ceph OSD node is added. For production clusters, repeat this step until you have added at least three Ceph OSD nodes. Optional: Use a host name pattern to define a range of nodes. For example, to add jb-ceph4-osd2 and jb-ceph4-osd3 at the same time, enter jb-ceph4-osd[2-3] . Both jb-ceph4-osd2 and jb-ceph4-osd3 are added. Repeat the above steps for the other nodes in your cluster. For production clusters, add at least three Ceph Monitor nodes. In the dialog, the role is listed as MON . Add a node with the Metrics role. The Metrics role installs Grafana and Prometheus to provide real-time insights into the performance of the Ceph cluster. These metrics are presented in the Ceph Dashboard, which allows you to monitor and manage the cluster. 
The installation of the dashboard, Grafana, and Prometheus is required. You can colocate the metrics functions on the Ansible Administration node. If you do, ensure the system resources of the node are greater than what is required for a standalone metrics node . Optional: Add a node with the MDS role. The MDS role installs the Ceph Metadata Server (MDS). Metadata Server daemons are necessary for deploying a Ceph File System. Optional: Add a node with the RGW role. The RGW role installs the Ceph Object Gateway, also known as the RADOS gateway, which is an object storage interface built on top of the librados API to provide applications with a RESTful gateway to Ceph storage clusters. It supports the Amazon S3 and OpenStack Swift APIs. Optional: Add a node with the iSCSI role. The iSCSI role installs an iSCSI gateway so you can share Ceph Block Devices over iSCSI. To use iSCSI with Ceph, you must install the iSCSI gateway on at least two nodes for multipath I/O. Optional: Colocate more than one service on the same node by selecting multiple roles when adding the node. For more information on colocating daemons, see Colocation of containerized Ceph daemons in the Installation Guide . Optional: Modify the roles assigned to a node by checking or unchecking roles in the table. Optional: To delete a node, on the far right side of the row of the node you want to delete, click the kebab icon and then click Delete . Click the Validate button at the bottom right corner of the page after you have added all the nodes in your cluster and set all the required roles. Note For production clusters, the Cockpit Ceph installer will not proceed unless you have three or five monitors. In these examples, Cluster Type is set to Development/POC so the install can proceed with only one monitor. 4.8. Complete the Validate page of the Cockpit Ceph Installer The Validate page allows you to probe the nodes you provided on the Hosts page to verify they meet the hardware requirements for the roles you intend to use them for. Prerequisites The Hosts page of the Cockpit Ceph Installer has been completed. Procedure Click the Probe Hosts button. To continue, you must select at least three hosts which have an OK Status . Optional: If warnings or errors were generated for hosts, click the arrow to the left of the check mark for the host to view the issues. Important If you set Cluster Type to Production , any errors generated will cause Status to be NOTOK and you will not be able to select them for installation. Read the next step for information on how to resolve errors. Important If you set Cluster Type to Development/POC , any errors generated will be listed as warnings so Status is always OK . This allows you to select the hosts and install Ceph on them regardless of whether the hosts meet the requirements or suggestions. You can still resolve warnings if you want to. Read the next step for information on how to resolve warnings. Optional: To resolve errors and warnings use one or more of the following methods. The easiest way to resolve errors or warnings is to disable certain roles completely or to disable a role on one host and enable it on another host which has the required resources. 
Experiment with enabling or disabling roles until you find a combination where, if you are installing a Development/POC cluster, you are comfortable proceeding with any remaining warnings, or if you are installing a Production cluster, at least three hosts have all the resources required for the roles assigned to them and you are comfortable proceeding with any remaining warnings. You can also use a new host which meets the requirements for the roles required. First go back to the Hosts page and delete the hosts with issues. Then, add the new hosts . If you want to upgrade the hardware on a host or modify it in some other way so it will meet the requirements or suggestions, first make the desired changes to the host, and then click Probe Hosts again. If you have to reinstall the operating system, you will have to copy the SSH key again. Select the hosts to install Red Hat Ceph Storage on by checking the box next to the host. Important If installing a production cluster, you must resolve any errors before you can select them for installation. Click the Network button at the bottom right corner of the page to review and configure networking for the cluster. 4.9. Complete the Network page of the Cockpit Ceph Installer The Network page allows you to isolate certain cluster communication types to specific networks. This requires multiple different networks configured across the hosts in the cluster. Important The Network page uses information gathered from the probes done on the Validate page to display the networks your hosts have access to. Currently, if you have already proceeded to the Network page, you cannot add new networks to hosts, go back to the Validate page, reprobe the hosts, and proceed to the Network page again and use the new networks. They will not be displayed for selection. To use networks added to the hosts after already going to the Network page, you must refresh the web page completely and restart the install from the beginning. Important For production clusters you must segregate intra-cluster traffic from client-to-cluster traffic on separate NICs. In addition to segregating cluster traffic types, there are other networking considerations to take into account when setting up a Ceph cluster. For more information, see Network considerations in the Hardware Guide . Prerequisites The Validate page of the Cockpit Ceph Installer has been completed. Procedure Take note of the network types you can configure on the Network page. Each type has its own column. Columns for Cluster Network and Public Network are always displayed. If you are installing hosts with the RADOS Gateway role, the S3 Network column will be displayed. If you are installing hosts with the iSCSI role, the iSCSI Network column will be displayed. In the example below, columns for Cluster Network , Public Network , and S3 Network are shown. Take note of the networks you can select for each network type. Only the networks which are available on all hosts that make up a particular network type are shown. In the example below, there are three networks which are available on all hosts in the cluster. Because all three networks are available on every set of hosts which make up a network type, each network type lists the same three networks. The three networks available are 192.168.122.0/24 , 192.168.123.0/24 , and 192.168.124.0/24 . Take note of the speed each network operates at. This is the speed of the NICs used for the particular network. In the example below, 192.168.123.0/24 and 192.168.124.0/24 are at 1,000 Mbps. 
The Cockpit Ceph Installer could not determine the speed for the 192.168.122.0/24 network. Select the networks you want to use for each network type. For production clusters, you must select separate networks for Cluster Network and Public Network . For development/POC clusters, you can select the same network for both types, or if you only have one network configured on all hosts, only that network will be displayed and you will not be able to select other networks. The 192.168.122.0/24 network will be used for the Public Network , the 192.168.123.0/24 network will be used for the Cluster Network , and the 192.168.124.0/24 network will be used for the S3 Network . Click the Review button at the bottom right corner of the page to review the entire cluster configuration before installation. 4.10. Review the installation configuration The Review page allows you to view all the details of the Ceph cluster installation configuration that you set on the previous pages, and details about the hosts, some of which were not included in those pages. Prerequisites The Network page of the Cockpit Ceph Installer has been completed. Procedure View the review page. Verify that the information from each page appears as you expect on the Review page. A summary of information from the Environment page is at 1 , followed by the Hosts page at 2 , the Validate page at 3 , the Network page at 4 , and details about the hosts, including some additional details which were not included in the previous pages, are at 5 . Click the Deploy button at the bottom right corner of the page to go to the Deploy page where you can finalize and start the actual installation process. 4.11. Deploy the Ceph cluster The Deploy page allows you to save the installation settings in their native Ansible format, review or modify them if required, start the install, monitor its progress, and view the status of the cluster after the install finishes successfully. Prerequisites Installation configuration settings on the Review page have been verified. Procedure Click the Save button at the bottom right corner of the page to save the installation settings to the Ansible playbooks that will be used by Ansible to perform the actual install. Optional: View or further customize the settings in the Ansible playbooks located on the Ansible administration node. The playbooks are located in /usr/share/ceph-ansible . For more information about the Ansible playbooks and how to use them to customize the install, see Installing a Red Hat Ceph Storage cluster . Secure the default user names and passwords for Grafana and dashboard. Starting with Red Hat Ceph Storage 4.1, you must uncomment or set dashboard_admin_password and grafana_admin_password in /usr/share/ceph-ansible/group_vars/all.yml . Set secure passwords for each. Also set custom user names for dashboard_admin_user and grafana_admin_user ; a sketch of these settings appears at the end of this chapter. Click the Deploy button at the bottom right corner of the page to start the install. Observe the installation progress while it is running. The information at 1 shows whether the install is running or not, the start time, and elapsed time. The information at 2 shows a summary of the Ansible tasks that have been attempted. The information at 3 shows which roles have been installed or are installing. Green represents a role where all hosts that were assigned that role have had that role installed on them. Blue represents a role where hosts that have that role assigned to them are still being installed. At 4 you can view details about the current task or view failed tasks. 
Use the Filter by menu to switch between current task and failed tasks. The role names come from the Ansible inventory file. The equivalency is: mons are Monitors, mgrs are Managers, note the Manager role is installed alongside the Monitor role, osds are Object Storage Devices, mdss are Metadata Servers, rgws are RADOS Gateways, metrics are Grafana and Prometheus services for dashboard metrics. Not shown in the example screenshot: iscsigws are iSCSI Gateways. After the installation finishes, click the Complete button at the bottom right corner of the page. This opens a window which displays the output of the command ceph status , as well as dashboard access information. Compare cluster status information in the example below with the cluster status information on your cluster. The example shows a healthy cluster, with all OSDs up and in, and all services active. PGs are in the active+clean state. If some aspects of your cluster are not the same, refer to the Troubleshooting Guide for information on how to resolve the issues. At the bottom of the Ceph Cluster Status window, the dashboard access information is displayed, including the URL, user name, and password. Take note of this information. Use the information from the previous step along with the Dashboard Guide to access the dashboard . The dashboard provides a web interface so you can administer and monitor the Red Hat Ceph Storage cluster. For more information, see the Dashboard Guide . Optional: View the cockpit-ceph-installer.log file. This file records a log of the selections made and any associated warnings the probe process generated. It is located in the home directory of the user that ran the installer script, ansible-runner-service.sh .
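As a reference for the step above that secures the dashboard and Grafana credentials, the following is a minimal sketch of the relevant settings in /usr/share/ceph-ansible/group_vars/all.yml; the user names and password placeholders are hypothetical, so choose your own secure values:

# Minimal sketch of the credential settings in
# /usr/share/ceph-ansible/group_vars/all.yml; the values shown are
# hypothetical placeholders.
dashboard_admin_user: dashboardadmin
dashboard_admin_password: <secure_dashboard_password>
grafana_admin_user: grafanaadmin
grafana_admin_password: <secure_grafana_password>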
[ "rpm -q cockpit", "[admin@jb-ceph4-admin ~]USD rpm -q cockpit cockpit-196.3-1.el8.x86_64", "dnf install cockpit", "yum install cockpit", "systemctl status cockpit.socket", "systemctl enable --now cockpit.socket", "Created symlink /etc/systemd/system/sockets.target.wants/cockpit.socket /usr/lib/systemd/system/cockpit.socket.", "systemctl status cockpit.socket", "Active: active (listening) since Tue 2020-01-07 18:49:07 EST; 7min ago", "dnf install cockpit-ceph-installer", "yum install cockpit-ceph-installer", "sudo docker login -u CUSTOMER_PORTAL_USERNAME https://registry.redhat.io", "[admin@jb-ceph4-admin ~]USD sudo docker login -u myusername https://registry.redhat.io Password: Login Succeeded!", "sudo podman login -u CUSTOMER_PORTAL_USERNAME https://registry.redhat.io", "[admin@jb-ceph4-admin ~]USD sudo podman login -u myusername https://registry.redhat.io Password: Login Succeeded!", "[registries.search] registries = [ 'registry.access.redhat.com', 'registry.fedoraproject.org', 'registry.centos.org', 'docker.io']", "[registries.search] registries = ['registry.redhat.io', 'registry.access.redhat.com', 'registry.fedoraproject.org', 'registry.centos.org', 'docker.io']", "sudo ansible-runner-service.sh -s", "[admin@jb-ceph4-admin ~]USD sudo ansible-runner-service.sh -s Checking environment is ready Checking/creating directories Checking SSL certificate configuration Generating RSA private key, 4096 bit long modulus (2 primes) ..................................................................................................................................................................................................................................++++ ......................................................++++ e is 65537 (0x010001) Generating RSA private key, 4096 bit long modulus (2 primes) ........................................++++ ..............................................................................................................................................................................++++ e is 65537 (0x010001) writing RSA key Signature ok subject=C = US, ST = North Carolina, L = Raleigh, O = Red Hat, OU = RunnerServer, CN = jb-ceph4-admin Getting CA Private Key Generating RSA private key, 4096 bit long modulus (2 primes) .....................................................................................................++++ ..++++ e is 65537 (0x010001) writing RSA key Signature ok subject=C = US, ST = North Carolina, L = Raleigh, O = Red Hat, OU = RunnerClient, CN = jb-ceph4-admin Getting CA Private Key Setting ownership of the certs to your user account(admin) Setting target user for ansible connections to admin Applying SELINUX container_file_t context to '/etc/ansible-runner-service' Applying SELINUX container_file_t context to '/usr/share/ceph-ansible' Ansible API (runner-service) container set to rhceph/ansible-runner-rhel8:latest Fetching Ansible API container (runner-service). 
Please wait Trying to pull registry.redhat.io/rhceph/ansible-runner-rhel8:latest...Getting image source signatures Copying blob c585fd5093c6 done Copying blob 217d30c36265 done Copying blob e61d8721e62e done Copying config b96067ea93 done Writing manifest to image destination Storing signatures b96067ea93c8d6769eaea86854617c63c61ea10c4ff01ecf71d488d5727cb577 Starting Ansible API container (runner-service) Started runner-service container Waiting for Ansible API container (runner-service) to respond The Ansible API container (runner-service) is available and responding to requests Login to the cockpit UI at https://jb-ceph4-admin:9090/cockpit-ceph-installer to start the install", "ssh ANSIBLE_USER @ HOST_NAME", "ssh admin@jb-ceph4-admin", "sudo ssh-copy-id -f -i /usr/share/ansible-runner-service/env/ssh_key.pub _ANSIBLE_USER_@_HOST_NAME_", "sudo ssh-copy-id -f -i /usr/share/ansible-runner-service/env/ssh_key.pub admin@jb-ceph4-mon /bin/ssh-copy-id: INFO: Source of key(s) to be installed: \"/usr/share/ansible-runner-service/env/ssh_key.pub\" [email protected]'s password: Number of key(s) added: 1 Now try logging into the machine, with: \"ssh 'admin@jb-ceph4-mon'\" and check to make sure that only the key(s) you wanted were added." ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/installation_guide/installing-red-hat-ceph-storage-using-the-cockpit-web-interface
Chapter 5. Access Control Lists
Chapter 5. Access Control Lists Files and directories have permission sets for the owner of the file, the group associated with the file, and all other users for the system. However, these permission sets have limitations. For example, different permissions cannot be configured for different users. Thus, Access Control Lists (ACLs) were implemented. The Red Hat Enterprise Linux kernel provides ACL support for the ext3 file system and NFS-exported file systems. ACLs are also recognized on ext3 file systems accessed via Samba. Along with support in the kernel, the acl package is required to implement ACLs. It contains the utilities used to add, modify, remove, and retrieve ACL information. The cp and mv commands copy or move any ACLs associated with files and directories. 5.1. Mounting File Systems Before using ACLs for a file or directory, the partition for the file or directory must be mounted with ACL support. If it is a local ext3 file system, it can be mounted with the following command: mount -t ext3 -o acl device-name partition For example: mount -t ext3 -o acl /dev/VolGroup00/LogVol02 /work Alternatively, if the partition is listed in the /etc/fstab file, the entry for the partition can include the acl option: If an ext3 file system is accessed via Samba and ACLs have been enabled for it, the ACLs are recognized because Samba has been compiled with the --with-acl-support option. No special flags are required when accessing or mounting a Samba share. 5.1.1. NFS By default, if the file system being exported by an NFS server supports ACLs and the NFS client can read ACLs, ACLs are utilized by the client system. To disable ACLs on NFS shares when configuring the server, include the no_acl option in the /etc/exports file. To disable ACLs on an NFS share when mounting it on a client, mount it with the no_acl option via the command line or the /etc/fstab file. 5.2. Setting Access ACLs There are two types of ACLs: access ACLs and default ACLs . An access ACL is the access control list for a specific file or directory. A default ACL can only be associated with a directory; if a file within the directory does not have an access ACL, it uses the rules of the default ACL for the directory. Default ACLs are optional. ACLs can be configured: Per user Per group Via the effective rights mask For users not in the user group for the file The setfacl utility sets ACLs for files and directories. Use the -m option to add or modify the ACL of a file or directory: Rules ( rules ) must be specified in the following formats. Multiple rules can be specified in the same command if they are separated by commas. u: uid : perms Sets the access ACL for a user. The user name or UID may be specified. The user may be any valid user on the system. g: gid : perms Sets the access ACL for a group. The group name or GID may be specified. The group may be any valid group on the system. m: perms Sets the effective rights mask. The mask is the union of all permissions of the owning group and all of the user and group entries. o: perms Sets the access ACL for users other than the ones in the group for the file. Permissions ( perms ) must be a combination of the characters r , w , and x for read, write, and execute. If a file or directory already has an ACL, and the setfacl command is used, the additional rules are added to the existing ACL or the existing rule is modified. Example 5.1. 
Give read and write permissions For example, to give read and write permissions to user andrius: To remove all the permissions for a user, group, or others, use the -x option and do not specify any permissions: Example 5.2. Remove all permissions For example, to remove all permissions from the user with UID 500: 5.3. Setting Default ACLs To set a default ACL, add d: before the rule and specify a directory instead of a file name. Example 5.3. Setting default ACLs For example, to set the default ACL for the /share/ directory to read and execute for users not in the user group (an access ACL for an individual file can override it): 5.4. Retrieving ACLs To determine the existing ACLs for a file or directory, use the getfacl command. In the example below, the getfacl command is used to determine the existing ACLs for a file. Example 5.4. Retrieving ACLs The above command returns the following output: If a directory with a default ACL is specified, the default ACL is also displayed as illustrated below. For example, getfacl home/sales/ will display similar output: 5.5. Archiving File Systems With ACLs By default, the dump command now preserves ACLs during a backup operation. When archiving a file or file system with tar , use the --acls option to preserve ACLs. Similarly, when using cp to copy files with ACLs, include the --preserve=mode option to ensure that ACLs are copied across too. In addition, the -a option (equivalent to -dR --preserve=all ) of cp also preserves ACLs during a backup along with other information such as timestamps, SELinux contexts, and the like. For more information about dump , tar , or cp , refer to their respective man pages. The star utility is similar to the tar utility in that it can be used to generate archives of files; however, some of its options are different. Refer to Table 5.1, "Command Line Options for star " for a listing of more commonly used options. For all available options, refer to man star . The star package is required to use this utility. Table 5.1. Command Line Options for star Option Description -c Creates an archive file. -n Do not extract the files; use in conjunction with -x to show what extracting the files does. -r Replaces files in the archive. The files are written to the end of the archive file, replacing any files with the same path and file name. -t Displays the contents of the archive file. -u Updates the archive file. The files are written to the end of the archive if they do not exist in the archive, or if the files are newer than the files of the same name in the archive. This option only works if the archive is a file or an unblocked tape that may backspace. -x Extracts the files from the archive. If used with -U and a file in the archive is older than the corresponding file on the file system, the file is not extracted. -help Displays the most important options. -xhelp Displays the least important options. -/ Do not strip leading slashes from file names when extracting the files from an archive. By default, they are stripped when files are extracted. -acl When creating or extracting, archives or restores any ACLs associated with the files and directories. 5.6. Compatibility with Older Systems If an ACL has been set on any file on a given file system, that file system has the ext_attr attribute. This attribute can be seen using the following command: A file system that has acquired the ext_attr attribute can be mounted with older kernels, but those kernels do not enforce any ACLs which have been set. 
Versions of the e2fsck utility included in version 1.22 and higher of the e2fsprogs package (including the versions in Red Hat Enterprise Linux 2.1 and 4) can check a file system with the ext_attr attribute. Older versions refuse to check it. 5.7. ACL References Refer to the following man pages for more information. man acl - Description of ACLs man getfacl - Discusses how to get file access control lists man setfacl - Explains how to set file access control lists man star - Explains more about the star utility and its many options
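To illustrate how a default ACL from Section 5.3 propagates to newly created files, the following is a minimal shell sketch, assuming a hypothetical, already existing /share directory on a file system mounted with the acl option:

# Minimal sketch: default ACLs are inherited by newly created files.
# /share is a hypothetical directory on an ACL-enabled file system.
setfacl -m d:o:rx /share          # set a default ACL on the directory
touch /share/report.txt           # the new file inherits from the default ACL
getfacl /share/report.txt         # shows 'other::r--' (the inherited rule,
                                  # masked by the file creation mode)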
[ "LABEL=/work /work ext3 acl 1 2", "setfacl -m rules files", "setfacl -m u:andrius:rw /project/somefile", "setfacl -x rules files", "setfacl -x u:500 /project/somefile", "setfacl -m d:o:rx /share", "getfacl home/john/picture.png", "file: home/john/picture.png owner: john group: john user::rw- group::r-- other::r--", "file: home/sales/ owner: john group: john user::rw- user:barryg:r-- group::r-- mask::r-- other::r-- default:user::rwx default:user:john:rwx default:group::r-x default:mask::rwx default:other::r-x", "tune2fs -l filesystem-device" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system_administrators_guide/ch-Access_Control_Lists
Chapter 1. Red Hat OpenStack Platform high availability overview and planning
Chapter 1. Red Hat OpenStack Platform high availability overview and planning Red Hat OpenStack Platform (RHOSP) high availability (HA) is a collection of services that orchestrate failover and recovery for your deployment. When you plan your HA deployment, ensure that you review the considerations for different aspects of the environment, such as hardware assignments and network configuration. 1.1. Red Hat OpenStack Platform high availability services Red Hat OpenStack Platform (RHOSP) employs several technologies to provide the services required to implement high availability (HA). These services include Galera, RabbitMQ, Redis, HAProxy, individual services that Pacemaker manages, and Systemd and plain container services that Podman manages. 1.1.1. Service types Core container Core container services are Galera, RabbitMQ, Redis, and HAProxy. These services run on all Controller nodes and require specific management and constraints for the start, stop, and restart actions. You use Pacemaker to launch, manage, and troubleshoot core container services. Note RHOSP uses the MariaDB Galera Cluster to manage database replication. Active-passive Active-passive services run on one Controller node at a time, and include services such as openstack-cinder-volume . To move an active-passive service, you must use Pacemaker to ensure that the correct stop-start sequence is followed. Systemd and plain container Systemd and plain container services are independent services that can withstand a service interruption. Therefore, if you restart a high availability service such as Galera, you do not need to manually restart any other service, such as nova-api . You can use systemd or Podman to directly manage systemd and plain container services. When orchestrating your HA deployment, director uses templates and Puppet modules to ensure that all services are configured and launched correctly. In addition, when troubleshooting HA issues, you must interact with services in the HA framework using the podman command or the systemctl command. 1.1.2. Service modes HA services can run in one of the following modes: Active-active Pacemaker runs the same service on multiple Controller nodes, and uses HAProxy to distribute traffic across the nodes or to a specific Controller with a single IP address. In some cases, HAProxy distributes traffic to active-active services with Round Robin scheduling. You can add more Controller nodes to improve performance. Important Active-active mode is supported only in distributed compute node (DCN) architecture at Edge sites. Active-passive Services that are unable to run in active-active mode must run in active-passive mode. In this mode, only one instance of the service is active at a time. For example, HAProxy uses stick-table options to direct incoming Galera database connection requests to a single back-end service. This helps prevent too many simultaneous connections to the same data from multiple Galera nodes. 1.2. Planning high availability hardware assignments When you plan hardware assignments, consider the number of nodes that you want to run in your deployment, as well as the number of virtual machine (VM) instances that you plan to run on Compute nodes. Controller nodes Most non-storage services run on Controller nodes. All services are replicated across the three nodes and are configured as active-active or active-passive services. A high availability (HA) environment requires a minimum of three nodes. 
Red Hat Ceph Storage nodes Storage services run on these nodes and provide pools of Red Hat Ceph Storage areas to the Compute nodes. A minimum of three nodes are required. Compute nodes Virtual machine (VM) instances run on Compute nodes. You can deploy as many Compute nodes as you need to meet your capacity requirements, as well as migration and reboot operations. You must connect Compute nodes to the storage network and to the project network to ensure that VMs can access storage nodes, VMs on other Compute nodes, and public networks. STONITH You must configure a STONITH device for each node that is a part of the Pacemaker cluster in a highly available overcloud. Deploying a highly available overcloud without STONITH is not supported. For more information on STONITH and Pacemaker, see Fencing in a Red Hat High Availability Cluster and Support Policies for RHEL High Availability Clusters . 1.3. Planning high availability networking When you plan the virtual and physical networks, consider the provisioning network switch configuration and the external network switch configuration. In addition to the network configuration, you must deploy the following components: Provisioning network switch This switch must be able to connect the undercloud to all the physical computers in the overcloud. The NIC on each overcloud node that is connected to this switch must be able to PXE boot from the undercloud. The portfast parameter must be enabled. Controller/External network switch This switch must be configured to perform VLAN tagging for the other VLANs in the deployment. Allow only VLAN 100 traffic to external networks. Networking hardware and keystone endpoint To prevent a Controller node network card or network switch failure disrupting overcloud services availability, ensure that the keystone admin endpoint is located on a network that uses bonded network cards or networking hardware redundancy. If you move the keystone endpoint to a different network, such as internal_api , ensure that the undercloud can reach the VLAN or subnet. For more information, see the Red Hat Knowledgebase solution How to migrate Keystone Admin Endpoint to internal_api network . 1.4. Accessing the high availability environment To investigate high availability (HA) nodes, use the stack user to log in to the overcloud nodes and run the openstack server list command to view the status and details of the nodes. Prerequisites High availability is deployed and running. Procedure In a running HA environment, log in to the undercloud as the stack user. Identify the IP addresses of your overcloud nodes: Log in to one of the overcloud nodes: Replace <node_IP> with the IP address of the node that you want to log in to. 1.5. Additional resources Chapter 2, Example deployment: High availability cluster with Compute and Ceph
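Once you are logged in to a Controller node, you can inspect the Pacemaker-managed core container services described earlier in this chapter. The following is a minimal sketch; the IP address is a hypothetical value taken from the openstack server list output shown below:

# Minimal sketch: check the Pacemaker cluster from a Controller node.
# 10.200.0.11 is a hypothetical Controller IP from 'openstack server list'.
ssh [email protected]

# On the Controller node, list the cluster status, including the Galera,
# RabbitMQ, Redis, and HAProxy resources that Pacemaker manages:
sudo pcs status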
[ "source ~/stackrc (undercloud) USD openstack server list +-------+------------------------+---+----------------------+---+ | ID | Name |...| Networks |...| +-------+------------------------+---+----------------------+---+ | d1... | overcloud-controller-0 |...| ctlplane=*10.200.0.11* |...|", "(undercloud) USD ssh heat-admin@<node_IP>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/high_availability_deployment_and_usage/assembly_ha-overview-planning_rhosp
Installing on AWS
Installing on AWS OpenShift Container Platform 4.14 Installing OpenShift Container Platform on Amazon Web Services Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_aws/index
Chapter 4. External storage services
Chapter 4. External storage services Red Hat OpenShift Data Foundation can use IBM FlashSystems or make services from an external Red Hat Ceph Storage cluster available for consumption through OpenShift Container Platform clusters running on the following platforms: VMware vSphere Bare metal Red Hat OpenStack platform (Technology Preview) IBM Power IBM Z The OpenShift Data Foundation operators create and manage services to satisfy Persistent Volume (PV) and Object Bucket Claims (OBCs) against the external services. The external cluster can serve block, file, and object storage classes for applications that run on OpenShift Container Platform. The operators do not deploy or manage the external clusters.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/planning_your_deployment/external-storage-services_rhodf
Chapter 11. Configuring alert notifications
Chapter 11. Configuring alert notifications In OpenShift Container Platform, an alert is fired when the conditions defined in an alerting rule are true. An alert provides a notification that a set of circumstances is present within a cluster. Firing alerts can be viewed in the Alerting UI in the OpenShift Container Platform web console by default. After an installation, you can configure OpenShift Container Platform to send alert notifications to external systems. 11.1. Sending notifications to external systems In OpenShift Container Platform 4.17, firing alerts can be viewed in the Alerting UI. Alerts are not configured by default to be sent to any notification systems. You can configure OpenShift Container Platform to send alerts to the following receiver types: PagerDuty Webhook Email Slack Microsoft Teams Routing alerts to receivers enables you to send timely notifications to the appropriate teams when failures occur. For example, critical alerts require immediate attention and are typically paged to an individual or a critical response team. Alerts that provide non-critical warning notifications might instead be routed to a ticketing system for non-immediate review. Checking that alerting is operational by using the watchdog alert OpenShift Container Platform monitoring includes a watchdog alert that fires continuously. Alertmanager repeatedly sends watchdog alert notifications to configured notification providers. The provider is usually configured to notify an administrator when it stops receiving the watchdog alert. This mechanism helps you quickly identify any communication issues between Alertmanager and the notification provider; a minimal routing sketch follows. 11.2. Additional resources About OpenShift Container Platform monitoring Configuring alert notifications
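To ground the receiver discussion, here is a minimal sketch that routes the continuously firing Watchdog alert to a webhook receiver, so the receiving system can page when deliveries stop (a dead man's switch). The webhook URL is a placeholder, and the fragment must be merged by hand into the alertmanager.yaml extracted from the cluster; the extract and replace commands follow the documented secret-based workflow, but verify them against your platform version.

# Extract the current Alertmanager configuration from the openshift-monitoring namespace.
oc -n openshift-monitoring get secret alertmanager-main \
  --template='{{ index .data "alertmanager.yaml" }}' | base64 -d > alertmanager.yaml

# Fragment to merge into alertmanager.yaml (written separately because YAML
# forbids duplicate top-level keys; do not blindly append it).
cat <<'EOF' > watchdog-fragment.yaml
route:
  routes:
    - matchers:
        - "alertname = Watchdog"
      receiver: watchdog-webhook
receivers:
  - name: watchdog-webhook
    webhook_configs:
      - url: https://deadman.example.com/ping  # placeholder endpoint
EOF

# After merging, upload the edited configuration back into the cluster.
oc -n openshift-monitoring create secret generic alertmanager-main \
  --from-file=alertmanager.yaml --dry-run=client -o yaml \
  | oc -n openshift-monitoring replace secret --filename=-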
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/postinstallation_configuration/configuring-alert-notifications
Installing on bare metal
Installing on bare metal OpenShift Container Platform 4.16 Installing OpenShift Container Platform on bare metal Red Hat OpenShift Documentation Team
[ "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s", "interfaces: - name: enp2s0 1 type: ethernet 2 state: up 3 ipv4: enabled: false 4 ipv6: enabled: false - name: br-ex type: ovs-bridge state: up ipv4: enabled: false dhcp: false ipv6: enabled: false dhcp: false bridge: port: - name: enp2s0 5 - name: br-ex - name: br-ex type: ovs-interface state: up copy-mac-from: enp2s0 ipv4: enabled: true dhcp: true ipv6: enabled: false dhcp: false", "cat <nmstate_configuration>.yaml | base64 1", "apiVersion: 
machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 10-br-ex-worker 2 spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<base64_encoded_nmstate_configuration> 3 mode: 0644 overwrite: true path: /etc/nmstate/openshift/cluster.yml", "oc edit mc <machineconfig_custom_resource_name>", "oc apply -f ./extraworker-secret.yaml", "apiVersion: metal3.io/v1alpha1 kind: BareMetalHost spec: preprovisioningNetworkDataName: ostest-extraworker-0-network-config-secret", "oc project openshift-machine-api", "oc get machinesets", "oc scale machineset <machineset_name> --replicas=<n> 1", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - name: worker platform: {} replicas: 0", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "sha512sum <installation_directory>/bootstrap.ign", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep '\\.iso[^.]'", "\"location\": \"<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.16-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",", "sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2", "sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b", "Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'", "\"<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.16-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" \"<url>/art/storage/releases/rhcos-4.16-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.16-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"", "DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda 
coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3", "kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot", "menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }", "Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied", "sudo coreos-installer install --copy-network --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>", "openshift-install create manifests --dir <installation_directory>", "variant: openshift version: 4.16.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir <installation_directory> 1", ". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partlabel 'data*' /dev/disk/by-id/scsi-<serial_number>", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 6 /dev/disk/by-id/scsi-<serial_number>", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/disk/by-id/scsi-<serial_number>", "coreos.inst.save_partlabel=data*", "coreos.inst.save_partindex=5-", "coreos.inst.save_partindex=6", "coreos-installer install --console=tty0 \\ 1 --console=ttyS0,<options> \\ 2 --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>", "coreos-installer iso customize rhcos-<version>-live.x86_64.iso --dest-ignition bootstrap.ign \\ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> 2", "coreos-installer iso reset rhcos-<version>-live.x86_64.iso", "coreos-installer iso customize rhcos-<version>-live.x86_64.iso --dest-ignition <path> \\ 1 --dest-console tty0 \\ 2 --dest-console ttyS0,<options> \\ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> 4", "coreos-installer iso reset rhcos-<version>-live.x86_64.iso", "coreos-installer iso customize rhcos-<version>-live.x86_64.iso --ignition-ca cert.pem", "[connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto", "[connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond", "[connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond", "coreos-installer iso customize rhcos-<version>-live.x86_64.iso --network-keyfile bond0.nmconnection --network-keyfile bond0-proxy-em1.nmconnection --network-keyfile 
bond0-proxy-em2.nmconnection", "coreos-installer iso customize --pre-install mount-iscsi.sh \\ 1 --post-install unmount-iscsi.sh \\ 2 --dest-device /dev/disk/by-path/<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \\ 3 --dest-ignition config.ign \\ 4 --dest-karg-append rd.iscsi.initiator=<initiator_iqn> \\ 5 --dest-karg-append netroot=<target_iqn> \\ 6 -o custom.iso rhcos-<version>-live.x86_64.iso", "coreos-installer iso customize --pre-install mount-iscsi.sh \\ 1 --post-install unmount-iscsi.sh \\ 2 --dest-device /dev/mapper/mpatha \\ 3 --dest-ignition config.ign \\ 4 --dest-karg-append rd.iscsi.firmware=1 \\ 5 --dest-karg-append rd.multipath=default \\ 6 -o custom.iso rhcos-<version>-live.x86_64.iso", "coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --dest-ignition bootstrap.ign \\ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> \\ 2 -o rhcos-<version>-custom-initramfs.x86_64.img 3", "coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --dest-ignition <path> \\ 1 --dest-console tty0 \\ 2 --dest-console ttyS0,<options> \\ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> \\ 4 -o rhcos-<version>-custom-initramfs.x86_64.img 5", "coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --ignition-ca cert.pem -o rhcos-<version>-custom-initramfs.x86_64.img", "[connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto", "[connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond", "[connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond", "coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --network-keyfile bond0.nmconnection --network-keyfile bond0-proxy-em1.nmconnection --network-keyfile bond0-proxy-em2.nmconnection -o rhcos-<version>-custom-initramfs.x86_64.img", "coreos-installer pxe customize --pre-install mount-iscsi.sh \\ 1 --post-install unmount-iscsi.sh \\ 2 --dest-device /dev/disk/by-path/<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \\ 3 --dest-ignition config.ign \\ 4 --dest-karg-append rd.iscsi.initiator=<initiator_iqn> \\ 5 --dest-karg-append netroot=<target_iqn> \\ 6 -o custom.img rhcos-<version>-live-initramfs.x86_64.img", "coreos-installer pxe customize --pre-install mount-iscsi.sh \\ 1 --post-install unmount-iscsi.sh \\ 2 --dest-device /dev/mapper/mpatha \\ 3 --dest-ignition config.ign \\ 4 --dest-karg-append rd.iscsi.firmware=1 \\ 5 --dest-karg-append rd.multipath=default \\ 6 -o custom.img rhcos-<version>-live-initramfs.x86_64.img", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp", "bond=bond0:em1,em2:mode=active-backup 
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp", "bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "team=team0:em1,em2 ip=team0:dhcp", "mpathconf --enable && systemctl start multipathd.service", "coreos-installer install /dev/mapper/mpatha \\ 1 --ignition-url=http://host/worker.ign --append-karg rd.multipath=default --append-karg root=/dev/disk/by-label/dm-mpath-root --append-karg rw", "coreos-installer install /dev/disk/by-id/wwn-<wwn_ID> \\ 1 --ignition-url=http://host/worker.ign --append-karg rd.multipath=default --append-karg root=/dev/disk/by-label/dm-mpath-root --append-karg rw", "oc debug node/ip-10-0-141-105.ec2.internal", "Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline rd.multipath=default root=/dev/disk/by-label/dm-mpath-root sh-4.2# exit", "variant: openshift version: 4.16.0 systemd: units: - name: mpath-configure.service enabled: true contents: | [Unit] Description=Configure Multipath on Secondary Disk ConditionFirstBoot=true ConditionPathExists=!/etc/multipath.conf Before=multipathd.service 1 DefaultDependencies=no [Service] Type=oneshot ExecStart=/usr/sbin/mpathconf --enable 2 [Install] WantedBy=multi-user.target - name: mpath-var-lib-container.service enabled: true contents: | [Unit] Description=Set Up Multipath On /var/lib/containers ConditionFirstBoot=true 3 Requires=dev-mapper-mpatha.device After=dev-mapper-mpatha.device After=ostree-remount.service Before=kubelet.service DefaultDependencies=no [Service] 4 Type=oneshot ExecStart=/usr/sbin/mkfs.xfs -L containers -m reflink=1 /dev/mapper/mpatha ExecStart=/usr/bin/mkdir -p /var/lib/containers [Install] WantedBy=multi-user.target - name: var-lib-containers.mount enabled: true contents: | [Unit] Description=Mount /var/lib/containers After=mpath-var-lib-containers.service Before=kubelet.service 5 [Mount] 6 What=/dev/disk/by-label/dm-mpath-containers Where=/var/lib/containers Type=xfs [Install] WantedBy=multi-user.target", "butane --pretty --strict multipath-config.bu > multipath-config.ign", "iscsiadm --mode discovery --type sendtargets --portal <IP_address> \\ 1 --login", "coreos-installer install /dev/disk/by-path/ip-<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \\ 1 --append-karg rd.iscsi.initiator=<initiator_iqn> \\ 2 --append.karg netroot=<target_iqn> \\ 3 --console ttyS0,115200n8 --ignition-file <path_to_file>", "iscsiadm --mode node --logoutall=all", "iscsiadm --mode discovery --type sendtargets --portal <IP_address> \\ 1 --login", "mpathconf --enable && systemctl start multipathd.service", "coreos-installer install /dev/mapper/mpatha \\ 1 --append-karg rd.iscsi.firmware=1 \\ 2 --append-karg rd.multipath=default \\ 3 --console ttyS0 --ignition-file <path_to_file>", "iscsiadm --mode node --logout=all", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.29.4 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4", "oc get csr", "NAME AGE REQUESTOR 
CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False 36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m operator-lifecycle-manager-packageserver 4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resources found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.16 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n 
openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False 36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m operator-lifecycle-manager-packageserver 4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. 
IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s", "interfaces: - name: enp2s0 1 type: ethernet 2 state: up 3 ipv4: enabled: false 4 ipv6: enabled: false - name: br-ex type: ovs-bridge state: up ipv4: enabled: false dhcp: false ipv6: enabled: false dhcp: false bridge: port: - name: enp2s0 5 - name: br-ex - name: br-ex type: ovs-interface state: up copy-mac-from: enp2s0 ipv4: enabled: true dhcp: true ipv6: enabled: false dhcp: false", "cat <nmstate_configuration>.yaml | base64 1", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 10-br-ex-worker 2 spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<base64_encoded_nmstate_configuration> 3 mode: 0644 overwrite: true path: /etc/nmstate/openshift/cluster.yml", "oc edit mc <machineconfig_custom_resource_name>", "oc apply -f ./extraworker-secret.yaml", "apiVersion: metal3.io/v1alpha1 kind: BareMetalHost spec: preprovisioningNetworkDataName: ostest-extraworker-0-network-config-secret", "oc project openshift-machine-api", "oc get machinesets", "oc scale machineset <machineset_name> --replicas=<n> 1", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 
604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16", "./openshift-install create manifests --dir <installation_directory> 1", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: mode: Full", "rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "sha512sum <installation_directory>/bootstrap.ign", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep '\\.iso[^.]'", "\"location\": \"<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.16-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",", "sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2", "sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b", "Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'", "\"<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.16-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" \"<url>/art/storage/releases/rhcos-4.16-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.16-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"", "DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda 
coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3", "kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot", "menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }", "Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied", "sudo coreos-installer install --copy-network --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>", "openshift-install create manifests --dir <installation_directory>", "variant: openshift version: 4.16.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir <installation_directory> 1", ". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partlabel 'data*' /dev/disk/by-id/scsi-<serial_number>", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 6 /dev/disk/by-id/scsi-<serial_number>", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/disk/by-id/scsi-<serial_number>", "coreos.inst.save_partlabel=data*", "coreos.inst.save_partindex=5-", "coreos.inst.save_partindex=6", "coreos-installer install --console=tty0 \\ 1 --console=ttyS0,<options> \\ 2 --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>", "coreos-installer iso customize rhcos-<version>-live.x86_64.iso --dest-ignition bootstrap.ign \\ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> 2", "coreos-installer iso reset rhcos-<version>-live.x86_64.iso", "coreos-installer iso customize rhcos-<version>-live.x86_64.iso --dest-ignition <path> \\ 1 --dest-console tty0 \\ 2 --dest-console ttyS0,<options> \\ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> 4", "coreos-installer iso reset rhcos-<version>-live.x86_64.iso", "coreos-installer iso customize rhcos-<version>-live.x86_64.iso --ignition-ca cert.pem", "[connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto", "[connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond", "[connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond", "coreos-installer iso customize rhcos-<version>-live.x86_64.iso --network-keyfile bond0.nmconnection --network-keyfile bond0-proxy-em1.nmconnection --network-keyfile 
bond0-proxy-em2.nmconnection", "coreos-installer iso customize --pre-install mount-iscsi.sh \\ 1 --post-install unmount-iscsi.sh \\ 2 --dest-device /dev/disk/by-path/<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \\ 3 --dest-ignition config.ign \\ 4 --dest-karg-append rd.iscsi.initiator=<initiator_iqn> \\ 5 --dest-karg-append netroot=<target_iqn> \\ 6 -o custom.iso rhcos-<version>-live.x86_64.iso", "coreos-installer iso customize --pre-install mount-iscsi.sh \\ 1 --post-install unmount-iscsi.sh \\ 2 --dest-device /dev/mapper/mpatha \\ 3 --dest-ignition config.ign \\ 4 --dest-karg-append rd.iscsi.firmware=1 \\ 5 --dest-karg-append rd.multipath=default \\ 6 -o custom.iso rhcos-<version>-live.x86_64.iso", "coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --dest-ignition bootstrap.ign \\ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> \\ 2 -o rhcos-<version>-custom-initramfs.x86_64.img 3", "coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --dest-ignition <path> \\ 1 --dest-console tty0 \\ 2 --dest-console ttyS0,<options> \\ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> \\ 4 -o rhcos-<version>-custom-initramfs.x86_64.img 5", "coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --ignition-ca cert.pem -o rhcos-<version>-custom-initramfs.x86_64.img", "[connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto", "[connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond", "[connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond", "coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --network-keyfile bond0.nmconnection --network-keyfile bond0-proxy-em1.nmconnection --network-keyfile bond0-proxy-em2.nmconnection -o rhcos-<version>-custom-initramfs.x86_64.img", "coreos-installer pxe customize --pre-install mount-iscsi.sh \\ 1 --post-install unmount-iscsi.sh \\ 2 --dest-device /dev/disk/by-path/<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \\ 3 --dest-ignition config.ign \\ 4 --dest-karg-append rd.iscsi.initiator=<initiator_iqn> \\ 5 --dest-karg-append netroot=<target_iqn> \\ 6 -o custom.img rhcos-<version>-live-initramfs.x86_64.img", "coreos-installer pxe customize --pre-install mount-iscsi.sh \\ 1 --post-install unmount-iscsi.sh \\ 2 --dest-device /dev/mapper/mpatha \\ 3 --dest-ignition config.ign \\ 4 --dest-karg-append rd.iscsi.firmware=1 \\ 5 --dest-karg-append rd.multipath=default \\ 6 -o custom.img rhcos-<version>-live-initramfs.x86_64.img", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp", "bond=bond0:em1,em2:mode=active-backup 
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp", "bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "team=team0:em1,em2 ip=team0:dhcp", "mpathconf --enable && systemctl start multipathd.service", "coreos-installer install /dev/mapper/mpatha \\ 1 --ignition-url=http://host/worker.ign --append-karg rd.multipath=default --append-karg root=/dev/disk/by-label/dm-mpath-root --append-karg rw", "coreos-installer install /dev/disk/by-id/wwn-<wwn_ID> \\ 1 --ignition-url=http://host/worker.ign --append-karg rd.multipath=default --append-karg root=/dev/disk/by-label/dm-mpath-root --append-karg rw", "oc debug node/ip-10-0-141-105.ec2.internal", "Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline rd.multipath=default root=/dev/disk/by-label/dm-mpath-root sh-4.2# exit", "variant: openshift version: 4.16.0 systemd: units: - name: mpath-configure.service enabled: true contents: | [Unit] Description=Configure Multipath on Secondary Disk ConditionFirstBoot=true ConditionPathExists=!/etc/multipath.conf Before=multipathd.service 1 DefaultDependencies=no [Service] Type=oneshot ExecStart=/usr/sbin/mpathconf --enable 2 [Install] WantedBy=multi-user.target - name: mpath-var-lib-container.service enabled: true contents: | [Unit] Description=Set Up Multipath On /var/lib/containers ConditionFirstBoot=true 3 Requires=dev-mapper-mpatha.device After=dev-mapper-mpatha.device After=ostree-remount.service Before=kubelet.service DefaultDependencies=no [Service] 4 Type=oneshot ExecStart=/usr/sbin/mkfs.xfs -L containers -m reflink=1 /dev/mapper/mpatha ExecStart=/usr/bin/mkdir -p /var/lib/containers [Install] WantedBy=multi-user.target - name: var-lib-containers.mount enabled: true contents: | [Unit] Description=Mount /var/lib/containers After=mpath-var-lib-containers.service Before=kubelet.service 5 [Mount] 6 What=/dev/disk/by-label/dm-mpath-containers Where=/var/lib/containers Type=xfs [Install] WantedBy=multi-user.target", "butane --pretty --strict multipath-config.bu > multipath-config.ign", "iscsiadm --mode discovery --type sendtargets --portal <IP_address> \\ 1 --login", "coreos-installer install /dev/disk/by-path/ip-<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \\ 1 --append-karg rd.iscsi.initiator=<initiator_iqn> \\ 2 --append.karg netroot=<target_iqn> \\ 3 --console ttyS0,115200n8 --ignition-file <path_to_file>", "iscsiadm --mode node --logoutall=all", "iscsiadm --mode discovery --type sendtargets --portal <IP_address> \\ 1 --login", "mpathconf --enable && systemctl start multipathd.service", "coreos-installer install /dev/mapper/mpatha \\ 1 --append-karg rd.iscsi.firmware=1 \\ 2 --append-karg rd.multipath=default \\ 3 --console ttyS0 --ignition-file <path_to_file>", "iscsiadm --mode node --logout=all", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.29.4 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4", "oc get csr", "NAME AGE REQUESTOR 
CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False 36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m operator-lifecycle-manager-packageserver 4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m 
kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False 36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m operator-lifecycle-manager-packageserver 4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 
8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s", "interfaces: - name: enp2s0 1 type: ethernet 2 state: up 3 ipv4: enabled: false 4 ipv6: enabled: false - name: br-ex type: ovs-bridge state: up ipv4: enabled: false dhcp: false ipv6: enabled: false dhcp: false bridge: port: - name: enp2s0 5 - name: br-ex - name: br-ex type: ovs-interface state: up copy-mac-from: enp2s0 ipv4: enabled: true dhcp: true ipv6: enabled: false dhcp: false", "cat <nmstate_configuration>.yaml | base64 1", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 10-br-ex-worker 2 spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<base64_encoded_nmstate_configuration> 3 mode: 0644 overwrite: true path: /etc/nmstate/openshift/cluster.yml", "oc edit mc <machineconfig_custom_resource_name>", "oc apply -f ./extraworker-secret.yaml", "apiVersion: metal3.io/v1alpha1 kind: BareMetalHost spec: preprovisioningNetworkDataName: ostest-extraworker-0-network-config-secret", "oc project openshift-machine-api", "oc get machinesets", "oc scale machineset <machineset_name> --replicas=<n> 1", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 
604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - name: worker platform: {} replicas: 0", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "variant: openshift version: 4.16.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony", "butane 99-worker-chrony.bu -o 99-worker-chrony.yaml", "oc apply -f ./99-worker-chrony.yaml", "sha512sum <installation_directory>/bootstrap.ign", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep '\\.iso[^.]'", "\"location\": \"<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.16-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",", "sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2", "sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b", "Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'", "\"<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.16-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.16-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" \"<url>/art/storage/releases/rhcos-4.16-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.16-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.16-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" 
\"<url>/art/storage/releases/rhcos-4.16/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"", "DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3", "kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot", "menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }", "Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied", "sudo coreos-installer install --copy-network --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>", "openshift-install create manifests --dir <installation_directory>", "variant: openshift version: 4.16.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partlabel 'data*' /dev/disk/by-id/scsi-<serial_number>", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 6 /dev/disk/by-id/scsi-<serial_number>", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/disk/by-id/scsi-<serial_number>", "coreos.inst.save_partlabel=data*", "coreos.inst.save_partindex=5-", "coreos.inst.save_partindex=6", "coreos-installer install --console=tty0 \\ 1 --console=ttyS0,<options> \\ 2 --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>", "coreos-installer iso customize rhcos-<version>-live.x86_64.iso --dest-ignition bootstrap.ign \\ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> 2", "coreos-installer iso reset rhcos-<version>-live.x86_64.iso", "coreos-installer iso customize rhcos-<version>-live.x86_64.iso --dest-ignition <path> \\ 1 --dest-console tty0 \\ 2 --dest-console ttyS0,<options> \\ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> 4", "coreos-installer iso reset rhcos-<version>-live.x86_64.iso", "coreos-installer iso customize rhcos-<version>-live.x86_64.iso --ignition-ca cert.pem", "[connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto", "[connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond", "[connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond", "coreos-installer iso customize rhcos-<version>-live.x86_64.iso --network-keyfile bond0.nmconnection --network-keyfile bond0-proxy-em1.nmconnection --network-keyfile bond0-proxy-em2.nmconnection", "coreos-installer iso customize --pre-install mount-iscsi.sh \\ 1 --post-install unmount-iscsi.sh \\ 2 --dest-device /dev/disk/by-path/<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \\ 3 --dest-ignition config.ign \\ 4 --dest-karg-append rd.iscsi.initiator=<initiator_iqn> \\ 5 --dest-karg-append netroot=<target_iqn> \\ 6 -o custom.iso rhcos-<version>-live.x86_64.iso", "coreos-installer iso customize --pre-install mount-iscsi.sh \\ 1 --post-install unmount-iscsi.sh \\ 2 --dest-device /dev/mapper/mpatha \\ 3 --dest-ignition config.ign \\ 4 --dest-karg-append rd.iscsi.firmware=1 \\ 5 --dest-karg-append rd.multipath=default \\ 6 -o custom.iso rhcos-<version>-live.x86_64.iso", "coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --dest-ignition bootstrap.ign \\ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> \\ 2 -o rhcos-<version>-custom-initramfs.x86_64.img 3", "coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --dest-ignition <path> \\ 1 --dest-console tty0 \\ 2 --dest-console ttyS0,<options> \\ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> \\ 4 -o rhcos-<version>-custom-initramfs.x86_64.img 5", "coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --ignition-ca cert.pem -o rhcos-<version>-custom-initramfs.x86_64.img", "[connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto", "[connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond", "[connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond", 
"coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --network-keyfile bond0.nmconnection --network-keyfile bond0-proxy-em1.nmconnection --network-keyfile bond0-proxy-em2.nmconnection -o rhcos-<version>-custom-initramfs.x86_64.img", "coreos-installer pxe customize --pre-install mount-iscsi.sh \\ 1 --post-install unmount-iscsi.sh \\ 2 --dest-device /dev/disk/by-path/<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \\ 3 --dest-ignition config.ign \\ 4 --dest-karg-append rd.iscsi.initiator=<initiator_iqn> \\ 5 --dest-karg-append netroot=<target_iqn> \\ 6 -o custom.img rhcos-<version>-live-initramfs.x86_64.img", "coreos-installer pxe customize --pre-install mount-iscsi.sh \\ 1 --post-install unmount-iscsi.sh \\ 2 --dest-device /dev/mapper/mpatha \\ 3 --dest-ignition config.ign \\ 4 --dest-karg-append rd.iscsi.firmware=1 \\ 5 --dest-karg-append rd.multipath=default \\ 6 -o custom.img rhcos-<version>-live-initramfs.x86_64.img", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp", "bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp", "bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "team=team0:em1,em2 ip=team0:dhcp", "mpathconf --enable && systemctl start multipathd.service", "coreos-installer install /dev/mapper/mpatha \\ 1 --ignition-url=http://host/worker.ign --append-karg rd.multipath=default --append-karg root=/dev/disk/by-label/dm-mpath-root --append-karg rw", "coreos-installer install /dev/disk/by-id/wwn-<wwn_ID> \\ 1 --ignition-url=http://host/worker.ign --append-karg rd.multipath=default --append-karg root=/dev/disk/by-label/dm-mpath-root --append-karg rw", "oc debug node/ip-10-0-141-105.ec2.internal", "Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline rd.multipath=default root=/dev/disk/by-label/dm-mpath-root sh-4.2# exit", "variant: openshift version: 4.16.0 systemd: units: - name: mpath-configure.service enabled: true contents: | [Unit] Description=Configure Multipath on Secondary Disk ConditionFirstBoot=true ConditionPathExists=!/etc/multipath.conf Before=multipathd.service 1 DefaultDependencies=no [Service] Type=oneshot ExecStart=/usr/sbin/mpathconf --enable 2 [Install] WantedBy=multi-user.target - name: mpath-var-lib-container.service enabled: true contents: | [Unit] Description=Set Up Multipath On /var/lib/containers ConditionFirstBoot=true 3 Requires=dev-mapper-mpatha.device After=dev-mapper-mpatha.device After=ostree-remount.service Before=kubelet.service DefaultDependencies=no [Service] 4 Type=oneshot ExecStart=/usr/sbin/mkfs.xfs -L 
containers -m reflink=1 /dev/mapper/mpatha ExecStart=/usr/bin/mkdir -p /var/lib/containers [Install] WantedBy=multi-user.target - name: var-lib-containers.mount enabled: true contents: | [Unit] Description=Mount /var/lib/containers After=mpath-var-lib-container.service Before=kubelet.service 5 [Mount] 6 What=/dev/disk/by-label/dm-mpath-containers Where=/var/lib/containers Type=xfs [Install] WantedBy=multi-user.target", "butane --pretty --strict multipath-config.bu > multipath-config.ign", "iscsiadm --mode discovery --type sendtargets --portal <IP_address> \\ 1 --login", "coreos-installer install /dev/disk/by-path/ip-<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \\ 1 --append-karg rd.iscsi.initiator=<initiator_iqn> \\ 2 --append-karg netroot=<target_iqn> \\ 3 --console ttyS0,115200n8 --ignition-file <path_to_file>", "iscsiadm --mode node --logoutall=all", "iscsiadm --mode discovery --type sendtargets --portal <IP_address> \\ 1 --login", "mpathconf --enable && systemctl start multipathd.service", "coreos-installer install /dev/mapper/mpatha \\ 1 --append-karg rd.iscsi.firmware=1 \\ 2 --append-karg rd.multipath=default \\ 3 --console ttyS0 --ignition-file <path_to_file>", "iscsiadm --mode node --logoutall=all", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.29.4 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False
36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m operator-lifecycle-manager-packageserver 4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resources found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.16 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False 36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m operator-lifecycle-manager-packageserver 
4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1", "apiVersion: metal3.io/v1alpha1 kind: Provisioning metadata: name: provisioning-configuration spec: provisioningNetwork: \"Disabled\" watchAllNamespaces: false", "oc create -f provisioning.yaml", "provisioning.metal3.io/provisioning-configuration created", "oc get pods -n openshift-machine-api", "NAME READY STATUS RESTARTS AGE cluster-autoscaler-operator-678c476f4c-jjdn5 2/2 Running 0 5d21h cluster-baremetal-operator-6866f7b976-gmvgh 2/2 Running 0 5d21h control-plane-machine-set-operator-7d8566696c-bh4jz 1/1 Running 0 5d21h ironic-proxy-64bdw 1/1 Running 0 5d21h ironic-proxy-rbggf 1/1 Running 0 5d21h ironic-proxy-vj54c 1/1 Running 0 5d21h machine-api-controllers-544d6849d5-tgj9l 7/7 Running 1 (5d21h ago) 5d21h machine-api-operator-5c4ff4b86d-6fjmq 2/2 Running 0 5d21h metal3-6d98f84cc8-zn2mx 5/5 Running 0 5d21h metal3-image-customization-59d745768d-bhrp7 1/1 Running 0 5d21h", "--- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-network-config-secret 1 namespace: openshift-machine-api type: Opaque stringData: nmstate: | 2 interfaces: 3 - name: <nic1_name> 4 type: ethernet state: up ipv4: address: - ip: <ip_address> 5 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 6 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 7 next-hop-interface: <next_hop_nic1_name> 8 --- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-bmc-secret namespace: openshift-machine-api type: Opaque data: username: <base64_of_uid> 9 password: <base64_of_pwd> --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-worker-<num> namespace: openshift-machine-api spec: online: true bootMACAddress: <nic1_mac_address> 10 bmc: address: <protocol>://<bmc_url> 11 credentialsName: openshift-worker-<num>-bmc-secret disableCertificateVerification: false customDeploy: method: install_coreos userData: name: worker-user-data-managed namespace: openshift-machine-api rootDeviceHints: deviceName: <root_device_hint> 12 preprovisioningNetworkDataName: openshift-worker-<num>-network-config-secret", "--- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-network-config-secret namespace: openshift-machine-api # interfaces: - name: <nic_name> type: ethernet state: up ipv4: enabled: false ipv6: enabled: false", "--- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-bmc-secret 1 namespace: openshift-machine-api type: Opaque data: username: <base64_of_uid> 2 password: <base64_of_pwd> --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-worker-<num> namespace: openshift-machine-api spec: online: true bootMACAddress: <nic1_mac_address> 3 bmc: address: <protocol>://<bmc_url> 4 credentialsName: openshift-worker-<num>-bmc-secret disableCertificateVerification: false customDeploy: method:
install_coreos userData: name: worker-user-data-managed namespace: openshift-machine-api rootDeviceHints: deviceName: <root_device_hint> 5", "oc create -f bmh.yaml", "secret/openshift-worker-<num>-network-config-secret created secret/openshift-worker-<num>-bmc-secret created baremetalhost.metal3.io/openshift-worker-<num> created", "oc -n openshift-machine-api get bmh openshift-worker-<num>", "NAME STATE CONSUMER ONLINE ERROR openshift-worker-<num> provisioned true", "oc get csr", "NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION csr-gfm9f 33s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-o perator:node-bootstrapper <none> Pending", "oc adm certificate approve <csr_name>", "certificatesigningrequest.certificates.k8s.io/<csr_name> approved", "oc get nodes", "NAME STATUS ROLES AGE VERSION app1 Ready worker 47s v1.24.0+dc5a2fd controller1 Ready master,worker 2d22h v1.24.0+dc5a2fd", "--- apiVersion: v1 kind: Secret metadata: name: controller1-bmc namespace: openshift-machine-api type: Opaque data: username: <base64_of_uid> password: <base64_of_pwd> --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: controller1 namespace: openshift-machine-api spec: bmc: address: <protocol>://<bmc_url> 1 credentialsName: \"controller1-bmc\" bootMACAddress: <nic1_mac_address> customDeploy: method: install_coreos externallyProvisioned: true 2 online: true userData: name: controller-user-data-managed namespace: openshift-machine-api", "oc create -f controller.yaml", "secret/controller1-bmc created baremetalhost.metal3.io/controller1 created", "oc get bmh -A", "NAMESPACE NAME STATE CONSUMER ONLINE ERROR AGE openshift-machine-api controller1 externally provisioned true 13s", "oc adm drain app1 --force --ignore-daemonsets=true", "node/app1 cordoned WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-node-tuning-operator/tuned-tvthg, openshift-dns/dns- default-9q6rz, openshift-dns/node-resolver-zvt42, openshift-image-registry/node-ca-mzxth, openshift-ingress-cana ry/ingress-canary-qq5lf, openshift-machine-config-operator/machine-config-daemon-v79dm, openshift-monitoring/nod e-exporter-2vn59, openshift-multus/multus-additional-cni-plugins-wssvj, openshift-multus/multus-fn8tg, openshift -multus/network-metrics-daemon-5qv55, openshift-network-diagnostics/network-check-target-jqxn2, openshift-ovn-ku bernetes/ovnkube-node-rsvqg evicting pod openshift-operator-lifecycle-manager/collect-profiles-27766965-258vp evicting pod openshift-operator-lifecycle-manager/collect-profiles-27766950-kg5mk evicting pod openshift-operator-lifecycle-manager/collect-profiles-27766935-stf4s pod/collect-profiles-27766965-258vp evicted pod/collect-profiles-27766950-kg5mk evicted pod/collect-profiles-27766935-stf4s evicted node/app1 drained", "oc edit bmh -n openshift-machine-api <host_name>", "customDeploy: method: install_coreos", "oc get bmh -A", "NAMESPACE NAME STATE CONSUMER ONLINE ERROR AGE openshift-machine-api controller1 externally provisioned true 58m openshift-machine-api worker1 deprovisioning true 57m", "oc delete bmh -n openshift-machine-api <bmh_name>", "oc delete node <node_name>", "oc get nodes", "NAME STATUS ROLES AGE VERSION controller1 Ready master,worker 2d23h v1.24.0+dc5a2fd", "apiVersion:", "baseDomain:", "metadata:", "metadata: name:", "platform:", "pullSecret:", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: 
clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112", "networking:", "networking: networkType:", "networking: clusterNetwork:", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64", "networking: clusterNetwork: cidr:", "networking: clusterNetwork: hostPrefix:", "networking: serviceNetwork:", "networking: serviceNetwork: - 172.30.0.0/16 - fd02::/112", "networking: machineNetwork:", "networking: machineNetwork: - cidr: 10.0.0.0/16", "networking: machineNetwork: cidr:", "additionalTrustBundle:", "capabilities:", "capabilities: baselineCapabilitySet:", "capabilities: additionalEnabledCapabilities:", "cpuPartitioningMode:", "compute:", "compute: architecture:", "compute: hyperthreading:", "compute: name:", "compute: platform:", "compute: replicas:", "featureSet:", "controlPlane:", "controlPlane: architecture:", "controlPlane: hyperthreading:", "controlPlane: name:", "controlPlane: platform:", "controlPlane: replicas:", "credentialsMode:", "fips:", "imageContentSources:", "imageContentSources: source:", "imageContentSources: mirrors:", "publish:", "sshKey:" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/installing_on_bare_metal/index
B.3. autofs
B.3. autofs B.3.1. RHBA-2011:0403 - autofs bug fix update An updated autofs package that fixes one bug is now available for Red Hat Enterprise Linux 6. The autofs utility controls the operation of the automount daemon. The automount daemon automatically mounts file systems when you use them, and unmounts them when they are not busy. Bug Fix BZ# 689754 Prior to this update, an attempt to restart the autofs service while a mounted file system was in use caused the service to stop responding upon its startup. This was due to inappropriate locking during the recursive reconstruction of mount trees of pre-existing mounted multi-mount map entries. With this update, the underlying source code has been adapted to avoid the deadlock during the mount tree reconstruction, so that autofs now starts as expected. Additionally, this update prevents autofs from occasionally terminating with a segmentation fault upon a map entry lookup. All users of autofs are advised to upgrade to this updated package, which fixes this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/autofs
Chapter 5. Installation configuration parameters for Azure Stack Hub
Chapter 5. Installation configuration parameters for Azure Stack Hub Before you deploy an OpenShift Container Platform cluster on Azure Stack Hub, you provide a customized install-config.yaml installation configuration file that describes the details for your environment. 5.1. Available installation configuration parameters for Azure Stack Hub The following tables specify the required, optional, and Azure Stack Hub-specific installation configuration parameters that you can set as part of the installation process. Note After installation, you cannot modify these parameters in the install-config.yaml file. 5.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 5.1. Required parameters Parameter Description Values The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object Get a pull secret from Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 5.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 5.2. Network parameters Parameter Description Values The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. The Red Hat OpenShift Networking network plugin to install. OVNKubernetes . OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . The subnet prefix length to assign to each individual node. 
For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . The IP address block for services. The default value is 172.30.0.0/16 . The OVN-Kubernetes network plugin supports only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power(R) Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power(R) Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. Configures the IPv4 join subnet that is used internally by ovn-kubernetes . This subnet must not overlap with any other subnet that OpenShift Container Platform is using, including the node network. The size of the subnet must be larger than the number of nodes. You cannot change the value after installation. An IP network block in CIDR notation. The default value is 100.64.0.0/16 . 5.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 5.3. Optional parameters Parameter Description Values A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 , and vCurrent . The default value is vCurrent . String Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores.
Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use compute . The name of the machine pool. worker Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . The configuration for the machines that comprise the control plane. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use controlPlane . The name of the machine pool. master Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of control plane machines to provision. Supported values are 3 , or 1 when deploying single-node OpenShift. The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. Mint , Passthrough , Manual or an empty string ( "" ). Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 
Note If you are using Azure File storage, you cannot enable FIPS mode. false or true Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String Specify one or more repositories that may also contain the same images. Array of strings How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . 5.1.4. Additional Azure Stack Hub configuration parameters Additional Azure configuration parameters are described in the following table: Table 5.4. Additional Azure Stack Hub parameters Parameter Description Values The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . Defines the type of disk. standard_LRS or premium_LRS . The default is premium_LRS . Defines the azure instance type for compute machines. String The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 1024 . Defines the type of disk. premium_LRS . Defines the azure instance type for control plane machines. String The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . Defines the type of disk. standard_LRS or premium_LRS . The default is premium_LRS . The Azure instance type for control plane and compute machines. The Azure instance type. The URL of the Azure Resource Manager endpoint that your Azure Stack Hub operator provides. String The name of the resource group that contains the DNS zone for your base domain. String, for example production_cluster . The name of your Azure Stack Hub local region. String The name of an already existing resource group to install your cluster to. This resource group must be empty and only used for this specific cluster; the cluster components assume ownership of all resources in the resource group. If you limit the service principal scope of the installation program to this resource group, you must ensure all other resources used by the installation program in your environment have the necessary permissions, such as the public DNS zone and virtual network. Destroying the cluster by using the installation program deletes this resource group. String, for example existing_resource_group . The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing. LoadBalancer or UserDefinedRouting . The default is LoadBalancer . The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. 
AzureStackCloud The URL of a storage blob in the Azure Stack environment that contains an RHCOS VHD. String, for example, https://vhdsa.blob.example.example.com/vhd/rhcos-410.84.202112040202-0-azurestack.x86_64.vhd
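To show how these parameters fit together, the following is a minimal install-config.yaml sketch for Azure Stack Hub; the domain, cluster name, ARM endpoint URL, resource group, region, and credential values are placeholders, not working values:
apiVersion: v1
baseDomain: example.com
metadata:
  name: ocp4
compute:
- hyperthreading: Enabled
  name: worker
  replicas: 3
controlPlane:
  hyperthreading: Enabled
  name: master
  replicas: 3
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  networkType: OVNKubernetes
  serviceNetwork:
  - 172.30.0.0/16
platform:
  azure:
    armEndpoint: https://management.local.azurestack.external  # placeholder; use the endpoint your Azure Stack Hub operator provides
    baseDomainResourceGroupName: production_cluster
    cloudName: AzureStackCloud
    outboundType: LoadBalancer
    region: local
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...
Each field above corresponds to a parameter described in the preceding tables; parameters not listed fall back to the documented defaults.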
[ "apiVersion:", "baseDomain:", "metadata:", "metadata: name:", "platform:", "pullSecret:", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking:", "networking: networkType:", "networking: clusterNetwork:", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: clusterNetwork: cidr:", "networking: clusterNetwork: hostPrefix:", "networking: serviceNetwork:", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork:", "networking: machineNetwork: - cidr: 10.0.0.0/16", "networking: machineNetwork: cidr:", "networking: ovnKubernetesConfig: ipv4: internalJoinSubnet:", "additionalTrustBundle:", "capabilities:", "capabilities: baselineCapabilitySet:", "capabilities: additionalEnabledCapabilities:", "cpuPartitioningMode:", "compute:", "compute: architecture:", "compute: hyperthreading:", "compute: name:", "compute: platform:", "compute: replicas:", "featureSet:", "controlPlane:", "controlPlane: architecture:", "controlPlane: hyperthreading:", "controlPlane: name:", "controlPlane: platform:", "controlPlane: replicas:", "credentialsMode:", "fips:", "imageContentSources:", "imageContentSources: source:", "imageContentSources: mirrors:", "publish:", "sshKey:", "compute: platform: azure: osDisk: diskSizeGB:", "compute: platform: azure: osDisk: diskType:", "compute: platform: azure: type:", "controlPlane: platform: azure: osDisk: diskSizeGB:", "controlPlane: platform: azure: osDisk: diskType:", "controlPlane: platform: azure: type:", "platform: azure: defaultMachinePlatform: osDisk: diskSizeGB:", "platform: azure: defaultMachinePlatform: osDisk: diskType:", "platform: azure: defaultMachinePlatform: type:", "platform: azure: armEndpoint:", "platform: azure: baseDomainResourceGroupName:", "platform: azure: region:", "platform: azure: resourceGroupName:", "platform: azure: outboundType:", "platform: azure: cloudName:", "clusterOSImage:" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_azure_stack_hub/installation-config-parameters-ash
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code and documentation. We are beginning with these four terms: master, slave, blacklist, and whitelist. Due to the enormity of this endeavor, these changes will be gradually implemented over upcoming releases. For more details on making our language more inclusive, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_software_certification_workflow_guide/con-conscious-language-message
Chapter 5. Configuring Network Connection Settings
Chapter 5. Configuring Network Connection Settings This chapter describes various configurations of the network connection settings and shows how to configure them by using NetworkManager . 5.1. Configuring 802.3 Link Settings You can configure the 802.3 link settings of an Ethernet connection by modifying the following configuration parameters: 802-3-ethernet.auto-negotiate 802-3-ethernet.speed 802-3-ethernet.duplex You can configure the 802.3 link settings in three main modes: Ignore link negotiation Enforce auto-negotiation activation Manually set the speed and duplex link settings Ignoring link negotiation In this case, NetworkManager ignores link configuration for an ethernet connection, keeping the configuration already present on the device. To ignore link negotiation, set the following parameters: Important If the auto-negotiate parameter is set to no , but the speed and duplex values are not set, that does not mean that auto-negotiation is disabled. Enforcing auto-negotiation activation In this case, NetworkManager enforces auto-negotiation on a device. To enforce auto-negotiation activation, set the following options: Manually setting the link speed and duplex In this case, you can manually configure the speed and duplex settings on the link. To manually set the speed and duplex link settings, set the aforementioned parameters as follows: Important Make sure to set both the speed and the duplex values, otherwise NetworkManager does not update the link configuration. As a system administrator, you can configure 802.3 link settings using one of the following options: the nmcli tool the nm-connection-editor utility Configuring 802.3 Link Settings with the nmcli Tool Procedure Create a new ethernet connection for the enp1s0 device. Set the 802.3 link setting to a configuration of your choice. For details, see Section 5.1, "Configuring 802.3 Link Settings" . For example, to manually set the speed option to 100 Mbit/s and duplex to full : Configuring 802.3 Link Settings with nm-connection-editor Procedure Enter nm-connection-editor in a terminal. Select the ethernet connection you want to edit and click the gear wheel icon to move to the editing dialog. See Section 3.4.3, "Common Configuration Options Using nm-connection-editor" for more information. Select the link negotiation of your choice. Ignore : link configuration is skipped (default). Automatic : link auto-negotiation is enforced on the device. Manual : the Speed and Duplex options can be specified to enforce the link negotiation. Figure 5.1. Configure 802.3 link settings using nm-connection-editor
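Building on the nmcli procedure above, the other link modes can be applied to an existing connection with nmcli connection modify. The following is a minimal sketch that assumes the MyEthernet connection created in the example; it enforces auto-negotiation by setting the speed to 0 and clearing the duplex value, then re-activates the connection:
# Enforce auto-negotiation on the existing connection
nmcli connection modify MyEthernet 802-3-ethernet.auto-negotiate yes 802-3-ethernet.speed 0 802-3-ethernet.duplex ""
# Re-activate the connection so the new link settings take effect
nmcli connection up MyEthernet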
[ "802-3-ethernet.auto-negotiate = no 802-3-ethernet.speed = 0 802-3-ethernet.duplex = NULL", "802-3-ethernet.auto-negotiate = yes 802-3-ethernet.speed = 0 802-3-ethernet.duplex = NULL", "802-3-ethernet.auto-negotiate = no 802-3-ethernet.speed = [speed in Mbit/s] 802-3-ethernet.duplex = [half |full]", "nmcli connection add con-name MyEthernet type ethernet ifname enp1s0 802-3-ethernet.auto-negotiate no 802-3-ethernet.speed 100 802-3-ethernet.duplex full" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/ch-configuring_network_connection_settings
40.2.2. Setting Events to Monitor
40.2.2. Setting Events to Monitor Most processors contain counters , which are used by OProfile to monitor specific events. As shown in Table 40.2, "OProfile Processors and Counters" , the number of counters available depends on the processor. Table 40.2. OProfile Processors and Counters Processor cpu_type Number of Counters Pentium Pro i386/ppro 2 Pentium II i386/pii 2 Pentium III i386/piii 2 Pentium 4 (non-hyper-threaded) i386/p4 8 Pentium 4 (hyper-threaded) i386/p4-ht 4 Athlon i386/athlon 4 AMD64 x86-64/hammer 4 Itanium ia64/itanium 4 Itanium 2 ia64/itanium2 4 TIMER_INT timer 1 IBM eServer iSeries and pSeries timer 1 ppc64/power4 8 ppc64/power5 6 ppc64/970 8 IBM eServer S/390 and S/390x timer 1 IBM eServer zSeries timer 1 Use Table 40.2, "OProfile Processors and Counters" to verify that the correct processor type was detected and to determine the number of events that can be monitored simultaneously. timer is used as the processor type if the processor does not have supported performance monitoring hardware. If timer is used, events cannot be set for any processor because the hardware does not have support for hardware performance counters. Instead, the timer interrupt is used for profiling. If timer is not used as the processor type, the events monitored can be changed, and counter 0 for the processor is set to a time-based event by default. If more than one counter exists on the processor, the counters other than counter 0 are not set to an event by default. The default events monitored are shown in Table 40.3, "Default Events" . Table 40.3. Default Events Processor Default Event for Counter Description Pentium Pro, Pentium II, Pentium III, Athlon, AMD64 CPU_CLK_UNHALTED The processor's clock is not halted Pentium 4 (HT and non-HT) GLOBAL_POWER_EVENTS The time during which the processor is not stopped Itanium 2 CPU_CYCLES CPU Cycles TIMER_INT (none) Sample for each timer interrupt ppc64/power4 CYCLES Processor Cycles ppc64/power5 CYCLES Processor Cycles ppc64/970 CYCLES Processor Cycles The number of events that can be monitored at one time is determined by the number of counters for the processor. However, it is not a one-to-one correlation; on some processors, certain events must be mapped to specific counters. To determine the number of counters available, execute the following command: The events available vary depending on the processor type. To determine the events available for profiling, execute the following command as root (the list is specific to the system's processor type): The events for each counter can be configured via the command line or with a graphical interface. For more information on the graphical interface, refer to Section 40.8, "Graphical Interface" . If the counter cannot be set to a specific event, an error message is displayed. To set the event for each configurable counter via the command line, use opcontrol : Replace <event-name> with the exact name of the event from op_help , and replace <sample-rate> with the number of events between samples. 40.2.2.1. Sampling Rate By default, a time-based event set is selected. It creates a sample every 100,000 clock cycles per processor. If the timer interrupt is used, the timer is set to whatever the jiffy rate is and is not user-settable. If the cpu_type is not timer , each event can have a sampling rate set for it. The sampling rate is the number of events between each sample snapshot. 
When setting the event for the counter, a sample rate can also be specified: Replace <sample-rate> with the number of events to wait before sampling again. The smaller the count, the more frequent the samples. For events that do not happen frequently, a lower count may be needed to capture the event instances. Warning Be extremely careful when setting sampling rates. Sampling too frequently can overload the system, causing the system to appear as if it is frozen or causing the system to actually freeze.
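As a concrete instance of this syntax (a sketch; it assumes a processor whose default counter event is CPU_CLK_UNHALTED, as listed in Table 40.3), the default time-based event can be set explicitly at the default rate of one sample per 100,000 clock cycles:
# Sample counter 0 on every 100,000 occurrences of CPU_CLK_UNHALTED
opcontrol --event=CPU_CLK_UNHALTED:100000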
[ "cat /dev/oprofile/cpu_type", "op_help", "opcontrol --event= <event-name> : <sample-rate>", "opcontrol --event= <event-name> : <sample-rate>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Configuring_OProfile-Setting_Events_to_Monitor
18.5. Limitations of ACIs
18.5. Limitations of ACIs When you set ACIs, the following restrictions apply: If your directory database is distributed over multiple servers, the following restrictions apply to the keywords you can use in ACIs: ACIs depending on group entries using the groupdn keyword must be located on the same server as the group entry. If the group is dynamic, all members of the group must have an entry on the server. Member entries of static groups can be located on the remote server. ACIs depending on role definitions using the roledn keyword must be located on the same server as the role definition entry. Every entry that is intended to have the role must also be located on the same server. However, you can match values stored in the target entry with values stored in the entry of the bind user by, for example, using the userattr keyword. In this case, access is evaluated normally even if the bind user does not have an entry on the server that stores the ACI. For further details, see Section 2.3.3, "Database Links and Access Control Evaluation" . You cannot use virtual attributes, such as Class of Service (CoS) attributes, in the following ACI keywords: targetfilter targattrfilters userattr For details, see Chapter 8, Organizing and Grouping Entries . Access control rules are evaluated only on the local server. For example, if you specify the host name of a server in LDAP URLs in ACI keywords, the URL will be ignored.
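To illustrate the userattr exception described above, the following is a sketch of an ACI; the attribute name, rights, and ACL label are illustrative and not taken from this guide. It grants access when the manager attribute of the target entry contains the DN of the bind user, so it can be evaluated normally even if the bind user's entry is stored on another server:
aci: (targetattr = "*")(version 3.0; acl "Manager access"; allow (read, search, compare) userattr = "manager#USERDN";)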
null
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/limitations_of_acis
14.2.7. Editing a Logical Volume
14.2.7. Editing a Logical Volume The LVM utility allows you to select a logical volume in the volume group, modify its name and size, and specify file system options. In this example, the logical volume named 'Backups' was extended onto the remaining space for the volume group. Clicking on the Edit Properties button will display the 'Edit Logical Volume' pop-up window from which you can edit the properties of the logical volume. On this window, you can also mount the volume after making the changes and have it mounted when the system is rebooted. You should indicate the mount point. If the mount point you specify does not exist, a pop-up window will be displayed prompting you to create it. The 'Edit Logical Volume' window is illustrated below. Figure 14.17. Edit logical volume If you wish to mount the volume, select the 'Mount' checkbox, indicating the preferred mount point. To mount the volume when the system is rebooted, select the 'Mount when rebooted' checkbox. In this example, the new volume will be mounted in /mnt/backups . This is illustrated in the figure below. Figure 14.18. Edit logical volume - specifying mount options The figure below illustrates the logical and physical view of the volume group after the logical volume was extended to the unused space. In this example, the logical volume named 'Backups' spans across two hard disks. A volume can be striped across two or more physical devices using LVM. Figure 14.19. Edit logical volume
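Outside the GUI, the same result can be sketched with the LVM command-line tools; the volume group name VolGroup00 and the ext4 file system below are assumptions, not values taken from the example above:

# lvextend -l +100%FREE /dev/VolGroup00/Backups
# resize2fs /dev/VolGroup00/Backups
# mkdir -p /mnt/backups
# mount /dev/VolGroup00/Backups /mnt/backups
# echo '/dev/VolGroup00/Backups /mnt/backups ext4 defaults 0 0' >> /etc/fstab

The lvextend call grows the logical volume into the remaining free space, resize2fs grows the file system to match, and the fstab entry corresponds to the 'Mount when rebooted' checkbox.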
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/s1-system-config-lvm-editing-lv
probe::nfsd.lookup
probe::nfsd.lookup Name probe::nfsd.lookup - NFS server opening or searching for a file for a client Synopsis nfsd.lookup Values filename the file name client_ip the IP address of the client fh the file handle of the parent directory (the first part is the length of the file handle) filelen the length of the file name
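A minimal SystemTap one-liner using this probe point might look like the following sketch; it prints two of the values documented above each time the NFS server performs a lookup:

# stap -e 'probe nfsd.lookup { printf("lookup: %s (name length %d)\n", filename, filelen) }'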
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-nfsd-lookup
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because this is an enormous undertaking, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/querying_data_grid_caches/making-open-source-more-inclusive_datagrid
Chapter 1. Introduction to the Ceph File System
Chapter 1. Introduction to the Ceph File System As a storage administrator, you can gain an understanding of the features, system components, and limitations to manage a Ceph File System (CephFS) environment. 1.1. Ceph File System features and enhancements The Ceph File System (CephFS) is a file system compatible with POSIX standards that is built on top of Ceph's distributed object store, called RADOS (Reliable Autonomic Distributed Object Storage). CephFS provides file access to a Red Hat Ceph Storage cluster, and uses POSIX semantics wherever possible. For example, in contrast to many other common network file systems like NFS, CephFS maintains strong cache coherency across clients. The goal is for processes using the file system to behave the same when they are on different hosts as when they are on the same host. However, in some cases, CephFS diverges from the strict POSIX semantics. The Ceph File System has the following features and enhancements: Scalability The Ceph File System is highly scalable due to horizontal scaling of metadata servers and direct client reads and writes with individual OSD nodes. Shared File System The Ceph File System is a shared file system so multiple clients can work on the same file system at once. High Availability The Ceph File System provides a cluster of Ceph Metadata Servers (MDS). One is active and others are in standby mode. If the active MDS terminates unexpectedly, one of the standby MDS becomes active. As a result, client mounts continue working through a server failure. This behavior makes the Ceph File System highly available. In addition, you can configure multiple active metadata servers. Configurable File and Directory Layouts The Ceph File System allows users to configure file and directory layouts to use multiple pools, pool namespaces, and file striping modes across objects. POSIX Access Control Lists (ACL) The Ceph File System supports POSIX Access Control Lists (ACLs). ACLs are enabled by default when Ceph File Systems are mounted as kernel clients with kernel version kernel-3.10.0-327.18.2.el7 or newer. To use ACLs with Ceph File Systems mounted as FUSE clients, you must enable them. Client Quotas The Ceph File System supports setting quotas on any directory in a system. The quota can restrict the number of bytes or the number of files stored beneath that point in the directory hierarchy. CephFS client quotas are enabled by default. Resizing The Ceph File System size is only bound by the capacity of the OSDs servicing its data pool. To increase the capacity, add more OSDs to the CephFS data pool. To decrease the capacity, use either client quotas or pool quotas. Snapshots The Ceph File System supports read-only snapshots but not writable clones. POSIX file system operations The Ceph File System supports standard and consistent POSIX file system operations including the following access patterns: Buffered write operations via the Linux page cache. Cached read operations via the Linux page cache. Direct I/O asynchronous or synchronous read/write operations, bypassing the page cache. Memory mapped I/O. Additional Resources See the Installing Metadata servers section in the Installation Guide to install Ceph Metadata servers. See the Deploying Ceph File Systems section in the File System Guide to create Ceph File Systems.
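As an example of the client quota feature described above, quotas are commonly set through the ceph.quota virtual extended attributes on a directory; the mount point and limits below are placeholders:

# setfattr -n ceph.quota.max_bytes -v 100000000 /mnt/cephfs/projects
# setfattr -n ceph.quota.max_files -v 10000 /mnt/cephfs/projects
# getfattr -n ceph.quota.max_bytes /mnt/cephfs/projects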
1.2. Ceph File System components The Ceph File System has two primary components: Clients The CephFS clients perform I/O operations on behalf of applications using CephFS, such as ceph-fuse for FUSE clients and kcephfs for kernel clients. CephFS clients send metadata requests to an active Metadata Server. In return, the CephFS client learns of the file metadata, and can begin safely caching both metadata and file data. Metadata Servers (MDS) The MDS does the following: Provides metadata to CephFS clients. Manages metadata related to files stored on the Ceph File System. Coordinates access to the shared Red Hat Ceph Storage cluster. Caches hot metadata to reduce requests to the backing metadata pool store. Manages the CephFS clients' caches to maintain cache coherence. Replicates hot metadata between active MDS. Coalesces metadata mutations to a compact journal with regular flushes to the backing metadata pool. CephFS requires at least one Metadata Server daemon ( ceph-mds ) to run. The diagram below shows the component layers of the Ceph File System. The bottom layer represents the underlying core storage cluster components: Ceph OSDs ( ceph-osd ) where the Ceph File System data and metadata are stored. Ceph Metadata Servers ( ceph-mds ) that manage Ceph File System metadata. Ceph Monitors ( ceph-mon ) that manage the master copy of the cluster map. The Ceph Storage protocol layer represents the Ceph native librados library for interacting with the core storage cluster. The CephFS library layer includes the CephFS libcephfs library that works on top of librados and represents the Ceph File System. The top layer represents two types of Ceph clients that can access the Ceph File Systems. The diagram below shows more details on how the Ceph File System components interact with each other. Additional Resources See the Installing Metadata servers section in the Red Hat Ceph Storage Installation Guide to install Ceph Metadata servers. See the Deploying Ceph File Systems section in the Red Hat Ceph Storage File System Guide to create Ceph File Systems. 1.3. Ceph File System and SELinux Starting with Red Hat Enterprise Linux 8.3 and Red Hat Ceph Storage 4.2, support for using Security-Enhanced Linux (SELinux) on Ceph File Systems (CephFS) environments is available. You can now set any SELinux file type with CephFS, along with assigning a particular SELinux type on individual files. This support applies to the Ceph File System Metadata Server (MDS), the CephFS File System in User Space (FUSE) clients, and the CephFS kernel clients. Additional Resources See the Using SELinux Guide on Red Hat Enterprise Linux 8 for more information on SELinux. 1.4. Ceph File System limitations and the POSIX standards Creation of multiple Ceph File Systems on one Red Hat Ceph Storage cluster is disabled by default. An attempt to create an additional Ceph File System fails with the following error message: Important While technically possible, Red Hat does not support having multiple Ceph File Systems on one Red Hat Ceph Storage cluster. Doing so can cause the MDS or CephFS client nodes to terminate unexpectedly. The Ceph File System diverges from the strict POSIX semantics in the following ways: If a client's attempt to write a file fails, the write operations are not necessarily atomic. That is, the client might call the write() system call on a file opened with the O_SYNC flag with an 8MB buffer and then terminate unexpectedly, and the write operation might be only partially applied.
Almost all file systems, even local file systems, have this behavior. When write operations occur simultaneously, a write operation that exceeds object boundaries is not necessarily atomic. For example, writer A writes "aa|aa" and writer B writes "bb|bb" simultaneously, where "|" is the object boundary, and "aa|bb" is written rather than the proper "aa|aa" or "bb|bb" . POSIX includes the telldir() and seekdir() system calls that allow you to obtain the current directory offset and seek back to it. Because CephFS can fragment directories at any time, it is difficult to return a stable integer offset for a directory. As such, calling the seekdir() system call to a non-zero offset might often work but is not guaranteed to do so. Calling seekdir() to offset 0 will always work. This is equivalent to the rewinddir() system call. Sparse files propagate incorrectly to the st_blocks field of the stat() system call. CephFS does not explicitly track which parts of a file are allocated or written to; instead, the st_blocks field is always populated with the quotient of the file size divided by the block size. This behavior causes utilities, such as du , to overestimate used space. When the mmap() system call maps a file into memory on multiple hosts, write operations are not coherently propagated to caches of other hosts. That is, if a page is cached on host A, and then updated on host B, the page on host A is not coherently invalidated. CephFS clients present a hidden .snap directory that is used to access, create, delete, and rename snapshots. Although this directory is excluded from the readdir() system call, any process that tries to create a file or directory with the same name returns an error. The name of this hidden directory can be changed at mount time with the -o snapdirname=.<new_name> option or by using the client_snapdir configuration option. Additional Resources See the Installing Metadata servers section in the Red Hat Ceph Storage Installation Guide to install Ceph Metadata servers. See the Deploying Ceph File Systems section in the Red Hat Ceph Storage File System Guide to create Ceph File Systems. 1.5. Additional Resources See the Installing Metadata Servers section in the Red Hat Ceph Storage Installation Guide for more details. If you want to use NFS Ganesha as an interface to the Ceph File System with Red Hat OpenStack Platform, see the CephFS with NFS-Ganesha deployment section in the Deploying the Shared File Systems service with CephFS through NFS guide for instructions on how to deploy such an environment.
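As a small illustration of the snapshot directory behavior described in Section 1.4, the hidden .snap directory can be renamed when mounting a kernel client; the monitor address and client name below are placeholders:

# mount -t ceph 192.0.2.1:6789:/ /mnt/cephfs -o name=admin,snapdirname=.mysnapshots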
[ "Error EINVAL: Creation of multiple filesystems is disabled." ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/file_system_guide/introduction-to-the-ceph-file-system
Chapter 1. Provisioning APIs
Chapter 1. Provisioning APIs 1.1. BMCEventSubscription [metal3.io/v1alpha1] Description BMCEventSubscription is the Schema for the fast eventing API Type object 1.2. BareMetalHost [metal3.io/v1alpha1] Description BareMetalHost is the Schema for the baremetalhosts API Type object 1.3. DataImage [metal3.io/v1alpha1] Description DataImage is the Schema for the dataimages API. Type object 1.4. FirmwareSchema [metal3.io/v1alpha1] Description FirmwareSchema is the Schema for the firmwareschemas API. Type object 1.5. HardwareData [metal3.io/v1alpha1] Description HardwareData is the Schema for the hardwaredata API. Type object 1.6. HostFirmwareComponents [metal3.io/v1alpha1] Description HostFirmwareComponents is the Schema for the hostfirmwarecomponents API. Type object 1.7. HostFirmwareSettings [metal3.io/v1alpha1] Description HostFirmwareSettings is the Schema for the hostfirmwaresettings API. Type object 1.8. Metal3Remediation [infrastructure.cluster.x-k8s.io/v1beta1] Description Metal3Remediation is the Schema for the metal3remediations API. Type object 1.9. Metal3RemediationTemplate [infrastructure.cluster.x-k8s.io/v1beta1] Description Metal3RemediationTemplate is the Schema for the metal3remediationtemplates API. Type object 1.10. PreprovisioningImage [metal3.io/v1alpha1] Description PreprovisioningImage is the Schema for the preprovisioningimages API. Type object 1.11. Provisioning [metal3.io/v1alpha1] Description Provisioning contains configuration used by the Provisioning service (Ironic) to provision baremetal hosts. Provisioning is created by the OpenShift installer using admin or user provided information about the provisioning network and the NIC on the server that can be used to PXE boot it. This CR is a singleton, created by the installer and currently only consumed by the cluster-baremetal-operator to bring up and update containers in a metal3 cluster. Type object
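These resources can be inspected like any other cluster API objects once the Bare Metal Operator is running. In the sketch below, openshift-machine-api is the namespace conventionally used on installer-provisioned clusters, and provisioning-configuration is the name the installer conventionally gives the singleton Provisioning resource; both may differ in your environment:

$ oc get baremetalhosts -n openshift-machine-api
$ oc get provisioning provisioning-configuration -o yaml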
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/provisioning_apis/provisioning-apis
Chapter 1. Preparing to install on Nutanix
Chapter 1. Preparing to install on Nutanix Before you install an OpenShift Container Platform cluster, be sure that your Nutanix environment meets the following requirements. 1.1. Nutanix version requirements You must install the OpenShift Container Platform cluster to a Nutanix environment that meets the following requirements. Table 1.1. Version requirements for Nutanix virtual environments
Component | Required version
Nutanix AOS | 6.5.2.7 or later
Prism Central | pc.2022.6 or later
1.2. Environment requirements Before you install an OpenShift Container Platform cluster, review the following Nutanix AOS environment requirements. 1.2.1. Required account privileges The installation program requires access to a Nutanix account with the necessary permissions to deploy the cluster and to maintain the daily operation of it. The following options are available to you: You can use a local Prism Central user account with administrative privileges. Using a local account is the quickest way to grant access to an account with the required permissions. If your organization's security policies require that you use a more restrictive set of permissions, use the permissions that are listed in the following table to create a custom Cloud Native role in Prism Central. You can then assign the role to a user account that is a member of a Prism Central authentication directory. Consider the following when managing this user account: When assigning entities to the role, ensure that the user can access only the Prism Element and subnet that are required to deploy the virtual machines. Ensure that the user is a member of the project to which it needs to assign virtual machines. For more information, see the Nutanix documentation about creating a Custom Cloud Native role , assigning a role , and adding a user to a project . Example 1.1. Required permissions for creating a Custom Cloud Native role
Nutanix Object | When required | Required permissions in Nutanix API | Description
Categories | Always | Create_Category_Mapping, Create_Or_Update_Name_Category, Create_Or_Update_Value_Category, Delete_Category_Mapping, Delete_Name_Category, Delete_Value_Category, View_Category_Mapping, View_Name_Category, View_Value_Category | Create, read, and delete categories that are assigned to the OpenShift Container Platform machines.
Images | Always | Create_Image, Delete_Image, View_Image | Create, read, and delete the operating system images used for the OpenShift Container Platform machines.
Virtual Machines | Always | Create_Virtual_Machine, Delete_Virtual_Machine, View_Virtual_Machine | Create, read, and delete the OpenShift Container Platform machines.
Clusters | Always | View_Cluster | View the Prism Element clusters that host the OpenShift Container Platform machines.
Subnets | Always | View_Subnet | View the subnets that host the OpenShift Container Platform machines.
Projects | If you will associate a project with compute machines, control plane machines, or all machines. | View_Project | View the projects defined in Prism Central and allow a project to be assigned to the OpenShift Container Platform machines.
1.2.2. Cluster limits Available resources vary between clusters. The number of possible clusters within a Nutanix environment is limited primarily by available storage space and any limitations associated with the resources that the cluster creates and the resources that you require to deploy the cluster, such as IP addresses and networks. 1.2.3. Cluster resources A minimum of 800 GB of storage is required to use a standard cluster.
When you deploy an OpenShift Container Platform cluster that uses installer-provisioned infrastructure, the installation program must be able to create several resources in your Nutanix instance. Although these resources use 856 GB of storage, the bootstrap node is destroyed as part of the installation process. A standard OpenShift Container Platform installation creates the following resources: 1 label Virtual machines: 1 disk image 1 temporary bootstrap node 3 control plane nodes 3 compute machines 1.2.4. Networking requirements You must use either AHV IP Address Management (IPAM) or Dynamic Host Configuration Protocol (DHCP) for the network and ensure that it is configured to provide persistent IP addresses to the cluster machines. Additionally, create the following networking resources before you install the OpenShift Container Platform cluster: IP addresses DNS records Note It is recommended that each OpenShift Container Platform node in the cluster have access to a Network Time Protocol (NTP) server that is discoverable via DHCP. Installation is possible without an NTP server. However, an NTP server prevents errors typically associated with asynchronous server clocks. 1.2.4.1. Required IP Addresses An installer-provisioned installation requires two static virtual IP (VIP) addresses: A VIP address for the API is required. This address is used to access the cluster API. A VIP address for ingress is required. This address is used for cluster ingress traffic. You specify these IP addresses when you install the OpenShift Container Platform cluster. 1.2.4.2. DNS records You must create DNS records for two static IP addresses in the appropriate DNS server for the Nutanix instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. If you use your own DNS or DHCP server, you must also create records for each node, including the bootstrap, control plane, and compute nodes. A complete DNS record takes the form <component>.<cluster_name>.<base_domain>. (the trailing dot is part of the record). Table 1.2. Required DNS records
Component | Record | Description
API VIP | api.<cluster_name>.<base_domain>. | This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster.
Ingress VIP | *.apps.<cluster_name>.<base_domain>. | A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster.
1.3. Configuring the Cloud Credential Operator utility The Cloud Credential Operator (CCO) manages cloud provider credentials as Kubernetes custom resource definitions (CRDs). To install a cluster on Nutanix, you must set the CCO to manual mode as part of the installation process. To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ).
Procedure Set a variable for the OpenShift Container Platform release image by running the following command: $ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: $ CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' $RELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the $RELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: $ oc image extract $CCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: $ chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: $ ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. Additional resources Preparing to update a cluster with manually maintained credentials
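Expressed as hypothetical BIND-style zone entries, the two records required by Table 1.2 in Section 1.2.4.2 could look like the following, with mycluster, example.com, and the addresses standing in for your own cluster name, base domain, and VIPs:

api.mycluster.example.com.    IN A 192.0.2.10
*.apps.mycluster.example.com. IN A 192.0.2.11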
[ "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command." ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_nutanix/preparing-to-install-on-nutanix
Chapter 3. Binding [v1]
Chapter 3. Binding [v1] Description Binding ties one object to another; for example, a pod is bound to a node by a scheduler. Deprecated in 1.7; use the bindings subresource of pods instead. Type object Required target 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata target object ObjectReference contains enough information to let you inspect or modify the referred object. 3.1.1. .target Description ObjectReference contains enough information to let you inspect or modify the referred object. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 3.2. API endpoints The following API endpoints are available: /api/v1/namespaces/{namespace}/bindings POST : create a Binding /api/v1/namespaces/{namespace}/pods/{name}/binding POST : create binding of a Pod 3.2.1. /api/v1/namespaces/{namespace}/bindings Table 3.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields.
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create a Binding Table 3.2. Body parameters Parameter Type Description body Binding schema Table 3.3. HTTP responses HTTP code Response body 200 - OK Binding schema 201 - Created Binding schema 202 - Accepted Binding schema 401 - Unauthorized Empty 3.2.2. /api/v1/namespaces/{namespace}/pods/{name}/binding Table 3.4. Global path parameters Parameter Type Description name string name of the Binding Table 3.5. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create binding of a Pod Table 3.6. Body parameters Parameter Type Description body Binding schema Table 3.7. HTTP responses HTTP code Response body 200 - OK Binding schema 201 - Created Binding schema 202 - Accepted Binding schema 401 - Unauthorized Empty
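A request body for the first endpoint above is an ordinary Binding manifest. In this sketch the pod and node names are hypothetical; metadata.name must match the pod being bound, and the client POSTs the object to /api/v1/namespaces/<namespace>/bindings:

$ oc create -f - <<EOF
apiVersion: v1
kind: Binding
metadata:
  name: my-pod
target:
  apiVersion: v1
  kind: Node
  name: worker-01
EOF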
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/metadata_apis/binding-v1
Chapter 7. Ensuring system integrity with Keylime
Chapter 7. Ensuring system integrity with Keylime With Keylime, you can continuously monitor the integrity of remote systems and verify the state of systems at boot. You can also send encrypted files to the monitored systems, and specify automated actions triggered whenever a monitored system fails the integrity test. 7.1. How Keylime works You can configure Keylime agents to perform one or more of the following actions: Runtime integrity monitoring Keylime runtime integrity monitoring continuously monitors the system on which the agent is deployed and measures the integrity of the files included in the allowlist and not included in the excludelist. Measured boot Keylime measured boot verifies the system state at boot. Keylime's concept of trust is based on the Trusted Platform Module (TPM) technology. A TPM is a hardware, firmware, or virtual component with integrated cryptographic keys. By polling TPM quotes and comparing the hashes of objects, Keylime provides initial and runtime monitoring of remote systems. Important Keylime running in a virtual machine or using a virtual TPM depends upon the integrity of the underlying host. Ensure you trust the host environment before relying upon Keylime measurements in a virtual environment. Keylime consists of three main components: Verifier Initially and continuously verifies the integrity of the systems that run the agent. You can deploy the verifier from a package, as a container, or by using the keylime_server RHEL system role. Registrar Contains a database of all agents and it hosts the public keys of the TPM vendors. You can deploy the registrar from a package, as a container, or by using the keylime_server RHEL system role. Agent Deployed to remote systems measured by the verifier. In addition, Keylime uses the keylime_tenant utility for many functions, including provisioning the agents on the target systems. Figure 7.1. Connections between Keylime components through configurations Keylime ensures the integrity of the monitored systems in a chain of trust by using keys and certificates exchanged between the components and the tenant. For a secure foundation of this chain, use a certificate authority (CA) that you can trust. Note If the agent receives no key and certificate, it generates a key and a self-signed certificate with no involvement from the CA. Figure 7.2. Connections between Keylime components through certificates and keys 7.2. Deploying Keylime verifier from a package The verifier is the most important component in Keylime. It performs initial and periodic checks of system integrity and supports bootstrapping a cryptographic key securely with the agent. The verifier uses mutual TLS encryption for its control interface. Important To maintain the chain of trust, keep the system that runs the verifier secure and under your control. You can install the verifier on a separate system or on the same system as the Keylime registrar, depending on your requirements. Running the verifier and registrar on separate systems provides better performance. Note To keep the configuration files organized within the drop-in directories, use file names with a two-digit number prefix, for example /etc/keylime/verifier.conf.d/00-verifier-ip.conf . The configuration processing reads the files inside the drop-in directory in lexicographic order and sets each option to the last value it reads. Prerequisites You have root permissions and network connection to the system or systems on which you want to install Keylime components.
You have valid keys and certificates from your certificate authority. Optional: You have access to the databases where Keylime saves data from the verifier. You can use any of the following database management systems: SQLite (default) PostgreSQL MySQL MariaDB Procedure Install the Keylime verifier: Define the IP address and port of the verifier by creating a new .conf file in the /etc/keylime/verifier.conf.d/ directory, for example, /etc/keylime/verifier.conf.d/00-verifier-ip.conf , with the following content: Replace <verifier_IP_address> with the verifier's IP address. Alternatively, use ip = * or ip = 0.0.0.0 to bind the verifier to all available IP addresses. Optionally, you can also change the verifier's port from the default value 8881 by using the port option. Optional: Configure the verifier's database for the list of agents. The default configuration uses an SQLite database in the verifier's /var/lib/keylime/cv_data.sqlite/ directory. You can define a different database by creating a new .conf file in the /etc/keylime/verifier.conf.d/ directory, for example, /etc/keylime/verifier.conf.d/00-db-url.conf , with the following content: Replace <protocol>://<name>:<password>@<ip_address_or_hostname>/<properties> with the URL of the database, for example, postgresql://verifier:[email protected]/verifierdb . Ensure that the credentials you use provide the permissions for Keylime to create the database structure. Add certificates and keys to the verifier. You can either let Keylime generate them, or use existing keys and certificates: With the default tls_dir = generate option, Keylime generates new certificates for the verifier, registrar, and tenant in the /var/lib/keylime/cv_ca/ directory. To load existing keys and certificates in the configuration, define their location in the verifier configuration. The certificates must be accessible by the keylime user, under which the Keylime services are running. Create a new .conf file in the /etc/keylime/verifier.conf.d/ directory, for example, /etc/keylime/verifier.conf.d/00-keys-and-certs.conf , with the following content: Note Use absolute paths to define key and certificate locations. Alternatively, relative paths are resolved from the directory defined in the tls_dir option. Open the port in the firewall: If you use a different port, replace 8881 with the port number defined in the .conf file. Start the verifier service: Note In the default configuration, start the keylime_verifier before starting the keylime_registrar service because the verifier creates the CA and certificates for the other Keylime components. This order is not necessary when you use custom certificates. Verification Check that the keylime_verifier service is active and running: steps Section 7.4, "Deploying Keylime registrar from a package" .
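The drop-in files referenced in this procedure are INI-style fragments. A minimal sketch, assuming a verifier at the placeholder address 192.0.2.10, the default port, and the option names used by current Keylime releases, might be:

# cat > /etc/keylime/verifier.conf.d/00-verifier-ip.conf <<EOF
[verifier]
ip = 192.0.2.10
port = 8881
EOF
# firewall-cmd --add-port=8881/tcp
# systemctl enable --now keylime_verifier
# systemctl status keylime_verifier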
7.3. Deploying Keylime verifier as a container The Keylime verifier performs initial and periodic checks of system integrity and supports bootstrapping a cryptographic key securely with the agent. You can configure the Keylime verifier as a container instead of the RPM method, without any binaries or packages on the host. The container deployment provides better isolation, modularity, and reproducibility of Keylime components. After you start the container, the Keylime verifier is deployed with default configuration files. You can customize the configuration by using one or more of the following methods: Mounting the host's directories that contain the configuration files to the container. This is available in all versions of RHEL 9. Modifying the environment variables directly on the container. This is available in RHEL 9.3 and later versions. Modifying the environment variables overrides the values from the configuration files. Prerequisites The podman package and its dependencies are installed on the system. Optional: You have access to a database where Keylime saves data from the verifier. You can use any of the following database management systems: SQLite (default) PostgreSQL MySQL MariaDB You have valid keys and certificates from your certificate authority. Procedure Optional: Install the keylime-verifier package to access the configuration files. You can configure the container without this package, but it might be easier to modify the configuration files provided with the package. Bind the verifier to all available IP addresses by creating a new .conf file in the /etc/keylime/verifier.conf.d/ directory, for example, /etc/keylime/verifier.conf.d/00-verifier-ip.conf , with the following content: Optionally, you can also change the verifier's port from the default value 8881 by using the port option. Optional: Configure the verifier's database for the list of agents. The default configuration uses an SQLite database in the verifier's /var/lib/keylime/cv_data.sqlite/ directory. You can define a different database by creating a new .conf file in the /etc/keylime/verifier.conf.d/ directory, for example, /etc/keylime/verifier.conf.d/00-db-url.conf , with the following content: Replace <protocol>://<name>:<password>@<ip_address_or_hostname>/<properties> with the URL of the database, for example, postgresql://verifier:[email protected]/verifierdb . Ensure that the credentials you use have the permissions for Keylime to create the database structure. Add certificates and keys to the verifier. You can either let Keylime generate them, or use existing keys and certificates: With the default tls_dir = generate option, Keylime generates new certificates for the verifier, registrar, and tenant in the /var/lib/keylime/cv_ca/ directory. To load existing keys and certificates in the configuration, define their location in the verifier configuration. The certificates must be accessible by the keylime user, under which the Keylime processes are running. Create a new .conf file in the /etc/keylime/verifier.conf.d/ directory, for example, /etc/keylime/verifier.conf.d/00-keys-and-certs.conf , with the following content: Note Use absolute paths to define key and certificate locations. Alternatively, relative paths are resolved from the directory defined in the tls_dir option. Open the port in the firewall: If you use a different port, replace 8881 with the port number defined in the .conf file. Run the container: The -p option opens the default port 8881 on the host and on the container. The -v option creates a bind mount for the directory to the container. With the Z option, Podman marks the content with a private unshared label. This means only the current container can use the private volume. The -d option runs the container detached and in the background. The option -e KEYLIME_VERIFIER_SERVER_KEY_PASSWORD= <passphrase1> defines the server key passphrase. The option -e KEYLIME_VERIFIER_CLIENT_KEY_PASSWORD= <passphrase2> defines the client key passphrase. You can override configuration options with environment variables by using the option -e KEYLIME_VERIFIER_<ENVIRONMENT_VARIABLE>= <value> . To modify additional options, insert the -e option separately for each environment variable. For a complete list of environment variables and their default values, see Keylime environment variables .
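Putting these options together, one possible invocation is sketched below; the passphrases are placeholders, and the bind mount assumes you created drop-in files on the host as described above:

# podman run -d --name keylime-verifier \
  -p 8881:8881 \
  -v /etc/keylime/verifier.conf.d:/etc/keylime/verifier.conf.d:Z \
  -e KEYLIME_VERIFIER_SERVER_KEY_PASSWORD=<passphrase1> \
  -e KEYLIME_VERIFIER_CLIENT_KEY_PASSWORD=<passphrase2> \
  registry.access.redhat.com/rhel9/keylime-verifier:latest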
Verification Check that the container is running: $ podman ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 80b6b9dbf57c registry.access.redhat.com/rhel9/keylime-verifier:latest keylime_verifier 14 seconds ago Up 14 seconds 0.0.0.0:8881->8881/tcp keylime-verifier steps Install the Keylime registrar as a container . Additional resources For more information about Keylime components, see How Keylime works . For more information about configuring the Keylime verifier, see Configuring Keylime verifier . For more information about the podman run command, see the podman-run(1) man page on your system. 7.4. Deploying Keylime registrar from a package The registrar is the Keylime component that contains a database of all agents, and it hosts the public keys of the TPM vendors. After the registrar's HTTPS service accepts trusted platform module (TPM) public keys, it presents an interface to obtain these public keys for checking quotes. Important To maintain the chain of trust, keep the system that runs the registrar secure and under your control. You can install the registrar on a separate system or on the same system as the Keylime verifier, depending on your requirements. Running the verifier and registrar on separate systems provides better performance. Note To keep the configuration files organized within the drop-in directories, use file names with a two-digit number prefix, for example /etc/keylime/registrar.conf.d/00-registrar-ip.conf . The configuration processing reads the files inside the drop-in directory in lexicographic order and sets each option to the last value it reads. Prerequisites You have network access to the systems where the Keylime verifier is installed and running. For more information, see Section 7.2, "Deploying Keylime verifier from a package" . You have root permissions and network connection to the system or systems on which you want to install Keylime components. You have access to the database where Keylime saves data from the registrar. You can use any of the following database management systems: SQLite (default) PostgreSQL MySQL MariaDB You have valid keys and certificates from your certificate authority. Procedure Install the Keylime registrar: Define the IP address and port of the registrar by creating a new .conf file in the /etc/keylime/registrar.conf.d/ directory, for example, /etc/keylime/registrar.conf.d/00-registrar-ip.conf , with the following content: Replace <registrar_IP_address> with the registrar's IP address. Alternatively, use ip = * or ip = 0.0.0.0 to bind the registrar to all available IP addresses. Optionally, change the port to which the Keylime agents connect by using the port option. The default value is 8890 . Optionally, change the TLS port to which the Keylime verifier and tenant connect by using the tls_port option. The default value is 8891 . Optional: Configure the registrar's database for the list of agents. The default configuration uses an SQLite database in the registrar's /var/lib/keylime/reg_data.sqlite directory. You can create a new .conf file in the /etc/keylime/registrar.conf.d/ directory, for example, /etc/keylime/registrar.conf.d/00-db-url.conf , with the following content: Replace <protocol>://<name>:<password>@<ip_address_or_hostname>/<properties> with the URL of the database, for example, postgresql://registrar:EKYYX-bqY2?#[email protected]/registrardb .
Ensure that the credentials you use have the permissions for Keylime to create the database structure. Add certificates and keys to the registrar: You can use the default configuration and load the keys and certificates to the /var/lib/keylime/reg_ca/ directory. Alternatively, you can define the location of the keys and certificates in the configuration. Create a new .conf file in the /etc/keylime/registrar.conf.d/ directory, for example, /etc/keylime/registrar.conf.d/00-keys-and-certs.conf , with the following content: Note Use absolute paths to define key and certificate locations. Alternatively, you can define a directory in the tls_dir option and use paths relative to that directory. Open the ports in the firewall: If you use a different port, replace 8890 or 8891 with the port number defined in the .conf file. Start the keylime_registrar service: Note In the default configuration, start the keylime_verifier before starting the keylime_registrar service because the verifier creates the CA and certificates for the other Keylime components. This order is not necessary when you use custom certificates. Verification Check that the keylime_registrar service is active and running: steps Section 7.8, "Deploying Keylime tenant from a package"
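As with the verifier, the registrar drop-ins are short INI fragments; a sketch with a placeholder address and the default ports might be:

# cat > /etc/keylime/registrar.conf.d/00-registrar-ip.conf <<EOF
[registrar]
ip = 192.0.2.11
port = 8890
tls_port = 8891
EOF
# firewall-cmd --add-port=8890/tcp --add-port=8891/tcp
# systemctl enable --now keylime_registrar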
7.5. Deploying Keylime registrar as a container The registrar is the Keylime component that contains a database of all agents, and it hosts the public keys of the trusted platform module (TPM) vendors. After the registrar's HTTPS service accepts TPM public keys, it presents an interface to obtain these public keys for checking quotes. You can configure the Keylime registrar as a container instead of the RPM method, without any binaries or packages on the host. The container deployment provides better isolation, modularity, and reproducibility of Keylime components. After you start the container, the Keylime registrar is deployed with default configuration files. You can customize the configuration by using one or more of the following methods: Mounting the host's directories that contain the configuration files to the container. This is available in all versions of RHEL 9. Modifying the environment variables directly on the container. This is available in RHEL 9.3 and later versions. Modifying the environment variables overrides the values from the configuration files. Prerequisites The podman package and its dependencies are installed on the system. Optional: You have access to a database where Keylime saves data from the registrar. You can use any of the following database management systems: SQLite (default) PostgreSQL MySQL MariaDB You have valid keys and certificates from your certificate authority. Procedure Optional: Install the keylime-registrar package to access the configuration files. You can configure the container without this package, but it might be easier to modify the configuration files provided with the package. Bind the registrar to all available IP addresses by creating a new .conf file in the /etc/keylime/registrar.conf.d/ directory, for example, /etc/keylime/registrar.conf.d/00-registrar-ip.conf , with the following content: Optionally, change the port to which the Keylime agents connect by using the port option. The default value is 8890 . Optionally, change the TLS port to which the Keylime tenant connects by using the tls_port option. The default value is 8891 . Optional: Configure the registrar's database for the list of agents. The default configuration uses an SQLite database in the registrar's /var/lib/keylime/reg_data.sqlite directory. You can create a new .conf file in the /etc/keylime/registrar.conf.d/ directory, for example, /etc/keylime/registrar.conf.d/00-db-url.conf , with the following content: Replace <protocol>://<name>:<password>@<ip_address_or_hostname>/<properties> with the URL of the database, for example, postgresql://registrar:EKYYX-bqY2?#[email protected]/registrardb . Ensure that the credentials you use have the permissions for Keylime to create the database structure. Add certificates and keys to the registrar: You can use the default configuration and load the keys and certificates to the /var/lib/keylime/reg_ca/ directory. Alternatively, you can define the location of the keys and certificates in the configuration. Create a new .conf file in the /etc/keylime/registrar.conf.d/ directory, for example, /etc/keylime/registrar.conf.d/00-keys-and-certs.conf , with the following content: Note Use absolute paths to define key and certificate locations. Alternatively, you can define a directory in the tls_dir option and use paths relative to that directory. Open the ports in the firewall: If you use a different port, replace 8890 or 8891 with the port number defined in the .conf file. Run the container: The -p option opens the default ports 8890 and 8891 on the host and on the container. The -v option creates a bind mount for the directory to the container. With the Z option, Podman marks the content with a private unshared label. This means only the current container can use the private volume. The -d option runs the container detached and in the background. The option -e KEYLIME_REGISTRAR_SERVER_KEY_PASSWORD= <passphrase1> defines the server key passphrase. You can override configuration options with environment variables by using the option -e KEYLIME_REGISTRAR_<ENVIRONMENT_VARIABLE>= <value> . To modify additional options, insert the -e option separately for each environment variable. For a complete list of environment variables and their default values, see Section 7.12, "Keylime environment variables" . Verification Check that the container is running: steps Section 7.8, "Deploying Keylime tenant from a package" . Additional resources For more information about Keylime components, see Section 7.1, "How Keylime works" . For more information about configuring the Keylime registrar, see Section 7.4, "Deploying Keylime registrar from a package" . For more information about the podman run command, see the podman-run(1) man page on your system. 7.6. Deploying a Keylime server by using RHEL system roles You can set up the verifier and registrar, which are the Keylime server components, by using the keylime_server RHEL system role. The keylime_server role installs and configures both the verifier and registrar components together on each node. Perform this procedure on the Ansible control node. For more information about Keylime, see Section 7.1, "How Keylime works" . Prerequisites You have prepared the control node and the managed nodes. You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. The managed nodes or groups of managed nodes on which you want to run this playbook are listed in the Ansible inventory file.
Procedure Create a playbook that defines the required role: Create a new YAML file and open it in a text editor, for example: Insert the following content: You can find out more about the variables in Variables for the keylime_server RHEL system role . Run the playbook: Verification Check that the keylime_verifier service is active and running on the managed host: Check that the keylime_registrar service is active and running: steps Section 7.8, "Deploying Keylime tenant from a package" 7.7. Variables for the keylime_server RHEL system role When setting up a Keylime server by using the keylime_server RHEL system role, you can customize the following variables for the registrar and verifier. List of keylime_server RHEL system role variables for configuring the Keylime verifier keylime_server_verifier_ip Defines the IP address of the verifier. keylime_server_verifier_tls_dir Specifies the directory where the keys and certificates are stored. If set to default, the verifier uses the /var/lib/keylime/cv_ca directory. keylime_server_verifier_server_key_passphrase Specifies a passphrase to decrypt the server private key. If the value is empty, the private key is not encrypted. keylime_server_verifier_server_cert Specifies the Keylime verifier server certificate file. keylime_server_verifier_trusted_client_ca Defines the list of trusted client CA certificates. You must store the files in the directory set in the keylime_server_verifier_tls_dir option. keylime_server_verifier_client_key Defines the file containing the Keylime verifier private client key. keylime_server_verifier_client_key_passphrase Defines the passphrase to decrypt the client private key file. If the value is empty, the private key is not encrypted. keylime_server_verifier_client_cert Defines the Keylime verifier client certificate file. keylime_server_verifier_trusted_server_ca Defines the list of trusted server CA certificates. You must store the files in the directory set in the keylime_server_verifier_tls_dir option. List of keylime_server RHEL system role variables for configuring the Keylime registrar keylime_server_registrar_ip Defines the IP address of the registrar. keylime_server_registrar_tls_dir Specifies the directory where you store the keys and certificates for the registrar. If you set it to default, the registrar uses the /var/lib/keylime/reg_ca directory. keylime_server_registrar_server_key Defines the Keylime registrar private server key file. keylime_server_registrar_server_key_passphrase Specifies the passphrase to decrypt the server private key of the registrar. If the value is empty, the private key is not encrypted. keylime_server_registrar_server_cert Specifies the Keylime registrar server certificate file. keylime_server_registrar_trusted_client_ca Defines the list of trusted client CA certificates. You must store the files in the directory set in the keylime_server_registrar_tls_dir option.
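Written out with a few of the variables above, the playbook from the procedure in Section 7.6 might resemble the following sketch; the group name, the role path, and the reuse of ansible_host for both services are assumptions to adapt to your inventory:

$ cat > keylime-server.yml <<'EOF'
- name: Deploy Keylime verifier and registrar
  hosts: keylime_servers
  roles:
    - role: rhel-system-roles.keylime_server
      vars:
        keylime_server_verifier_ip: "{{ ansible_host }}"
        keylime_server_registrar_ip: "{{ ansible_host }}"
EOF
$ ansible-playbook -i inventory keylime-server.yml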
7.8. Deploying Keylime tenant from a package Keylime uses the keylime_tenant utility for many functions, including provisioning the agents on the target systems. You can install keylime_tenant on any system, including the systems that run other Keylime components, or on a separate system, depending on your requirements. Prerequisites You have root permissions and network connection to the system or systems on which you want to install Keylime components. You have network access to the systems where the other Keylime components are configured: Verifier For more information, see Section 7.2, "Deploying Keylime verifier from a package" . Registrar For more information, see Section 7.4, "Deploying Keylime registrar from a package" . Procedure Install the Keylime tenant: Define the tenant's connection to the Keylime verifier by editing the /etc/keylime/tenant.conf.d/00-verifier-ip.conf file: Replace <verifier_ip> with the IP address of the verifier's system. If the verifier uses a different port than the default value 8881 , add the verifier_port = <verifier_port> setting. Define the tenant's connection to the Keylime registrar by editing the /etc/keylime/tenant.conf.d/00-registrar-ip.conf file: Replace <registrar_ip> with the IP address of the registrar's system. If the registrar uses a different port than the default value 8891 , add the registrar_port = <registrar_port> setting. Add certificates and keys to the tenant: You can use the default configuration and load the keys and certificates to the /var/lib/keylime/cv_ca directory. Alternatively, you can define the location of the keys and certificates in the configuration. Create a new .conf file in the /etc/keylime/tenant.conf.d/ directory, for example, /etc/keylime/tenant.conf.d/00-keys-and-certs.conf , with the following content: The trusted_server_ca parameter accepts paths to the verifier and registrar server CA certificate. You can provide multiple comma-separated paths, for example if the verifier and registrar use different CAs. Note Use absolute paths to define key and certificate locations. Alternatively, you can define a directory in the tls_dir option and use paths relative to that directory. Optional: If the trusted platform module (TPM) endorsement key (EK) cannot be verified by using certificates in the /var/lib/keylime/tpm_cert_store directory, add the certificate to that directory. This can occur particularly when using virtual machines with emulated TPMs. Verification Check the status of the verifier: If correctly set up, and if no agent is configured, the verifier responds that it does not recognize the default agent UUID. Check the status of the registrar: If correctly set up, and if no agent is configured, the registrar responds that it does not recognize the default agent UUID. Additional resources For additional advanced options for the keylime_tenant utility, enter the keylime_tenant -h command.
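For example, with placeholder addresses, the two tenant drop-in files and the status queries could look like this sketch; the cvstatus and regstatus command names are assumptions based on current keylime_tenant releases, so check keylime_tenant -h if they differ on yours:

# cat > /etc/keylime/tenant.conf.d/00-verifier-ip.conf <<EOF
[tenant]
verifier_ip = 192.0.2.10
EOF
# cat > /etc/keylime/tenant.conf.d/00-registrar-ip.conf <<EOF
[tenant]
registrar_ip = 192.0.2.11
EOF
# keylime_tenant -c cvstatus
# keylime_tenant -c regstatus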
Integrity measurement architecture (IMA) is enabled on the monitored system. For more information, see Enabling integrity measurement architecture and extended verification module . Procedure Install the Keylime agent: This command installs the keylime-agent-rust package. Define the agent's IP address and port in the configuration files. Create a new .conf file in the /etc/keylime/agent.conf.d/ directory, for example, /etc/keylime/agent.conf.d/00-agent-ip.conf , with the following content: Note The Keylime agent configuration uses the TOML format, which is different from the INI format used for configuration of the other components. Therefore, enter values in valid TOML syntax, for example, paths in single quotation marks and arrays of multiple paths in square brackets. Replace <agent_IP_address> with the agent's IP address. Alternatively, use ip = '*' or ip = '0.0.0.0' to bind the agent to all available IP addresses. Optionally, you can also change the agent's port from the default value 9002 by using the port = ' <agent_port> ' option. Define the registrar's IP address and port in the configuration files. Create a new .conf file in the /etc/keylime/agent.conf.d/ directory, for example, /etc/keylime/agent.conf.d/00-registrar-ip.conf , with the following content: Replace <registrar_IP_address> with the registrar's IP address. Optionally, you can also change the registrar's port from the default value 8890 by using the registrar_port = ' <registrar_port> ' option. Optional: Define the agent's universally unique identifier (UUID). If it is not defined, the default UUID is used. Create a new .conf file in the /etc/keylime/agent.conf.d/ directory, for example, /etc/keylime/agent.conf.d/00-agent-uuid.conf , with the following content: Replace <agent_UUID> with the agent's UUID, for example d432fbb3-d2f1-4a97-9ef7-abcdef012345 . You can use the uuidgen utility to generate a UUID. Optional: Load existing keys and certificates for the agent. If the agent receives no server_key and server_cert , it generates its own key and a self-signed certificate. Define the location of the keys and certificates in the configuration. Create a new .conf file in the /etc/keylime/agent.conf.d/ directory, for example, /etc/keylime/agent.conf.d/00-keys-and-certs.conf , with the following content: Note Use absolute paths to define key and certificate locations. The Keylime agent does not accept relative paths. Open the port in the firewall: If you use a different port, replace 9002 with the port number defined in the .conf file. Enable and start the keylime_agent service: Optional: From the system where the Keylime tenant is configured, verify that the agent is correctly configured and can connect to the registrar. Replace <agent_uuid> with the agent's UUID. If the registrar and agent are correctly configured, the output displays the agent's IP address and port, followed by "operational_state": "Registered" . Create a new IMA policy by entering the following content into the /etc/ima/ima-policy file: This policy targets runtime monitoring of executed applications. You can adjust this policy according to your scenario. You can find the MAGIC constants in the statfs(2) man page on your system. Update kernel parameters: Reboot the system to apply the new IMA policy. 
Verification Verify that the agent is running: Next steps After the agent is configured on all systems you want to monitor, you can deploy Keylime to perform one or both of the following functions: Deploying Keylime for runtime monitoring Deploying Keylime for measured boot attestation Additional resources Integrity Measurement Architecture (IMA) Wiki 7.10. Configuring Keylime for runtime monitoring To verify that the state of monitored systems is correct, the Keylime agent must be running on the monitored systems. Important Because Keylime runtime monitoring uses integrity measurement architecture (IMA) to measure large numbers of files, it might have a significant impact on the performance of your system. When provisioning the agent, you can also define a file that Keylime sends to the monitored system. Keylime encrypts the file sent to the agent, and decrypts it only if the agent's system complies with the TPM policy and with the IMA allowlist. You can make Keylime ignore changes to specific files or within specific directories by configuring a Keylime excludelist. The excluded files are still measured by IMA. From Keylime version 7.3.0, provided in RHEL 9.3, the allowlist and excludelist are combined into the Keylime runtime policy. Prerequisites You have network access to the systems where the Keylime components are configured: Verifier For more information, see Section 7.2, "Deploying Keylime verifier from a package" . Registrar For more information, see Section 7.4, "Deploying Keylime registrar from a package" . Tenant For more information, see Section 7.8, "Deploying Keylime tenant from a package" . Agent For more information, see Section 7.9, "Deploying Keylime agent from a package" . Procedure On the monitored system where the Keylime agent is configured and running, generate an allowlist from the current state of the system: Replace <allowlist.txt> with the file name of the allowlist. Important Use the SHA-256 hash function. SHA-1 is not secure and has been deprecated in RHEL 9. For additional information, see SHA-1 deprecation in Red Hat Enterprise Linux 9 . Copy the generated allowlist to the system where the keylime_tenant utility is configured, for example: Optional: You can define a list of files or directories excluded from Keylime measurements by creating a file on the tenant system and entering the paths of files and directories to exclude. The excludelist accepts Python regular expressions with one regular expression per line. See Regular expression operations at docs.python.org for the complete list of special characters. Save the excludelist on the tenant system. Combine the allowlist and excludelist into the Keylime runtime policy: On the system where the Keylime tenant is configured, provision the agent by using the keylime_tenant utility: Replace <agent_ip> with the agent's IP address. Replace <agent_uuid> with the agent's UUID. Replace <policy.json> with the path to the Keylime runtime policy file. With the --cert option, the tenant generates and signs a certificate for the agent by using the CA certificates and keys located in the specified directory, or the default /var/lib/keylime/ca/ directory. If the directory contains no CA certificates and keys, the tenant will generate them automatically according to the configuration in the /etc/keylime/ca.conf file and save them to the specified directory. The tenant then sends these keys and certificates to the agent. 
When generating CA certificates or signing agent certificates, you might be prompted for the password to access the CA private key: Please enter the password to decrypt your keystore: . If you do not want to use a certificate, use the -f option instead to deliver a file to the agent. Provisioning an agent requires sending any file, even an empty file. Note Keylime encrypts the file sent to the agent, and decrypts it only if the agent's system complies with the TPM policy and the IMA allowlist. By default, Keylime decompresses sent .zip files. As an example, with the following command, keylime_tenant provisions a new Keylime agent at 127.0.0.1 with UUID d432fbb3-d2f1-4a97-9ef7-75bd81c00000 and loads a runtime policy policy.json . It also generates a certificate in the default directory and sends the certificate file to the agent. Keylime decrypts the file only if the TPM policy configured in /etc/keylime/verifier.conf is satisfied: Note You can stop Keylime from monitoring a node by using the # keylime_tenant -c delete -u <agent_uuid> command. You can modify the configuration of an already registered agent by using the keylime_tenant -c update command. Verification Optional: Reboot the monitored system to verify that the settings are persistent. Verify a successful attestation of the agent: Replace <agent.uuid> with the agent's UUID. If the value of operational_state is Get Quote and attestation_count is nonzero, the attestation of this agent is successful. If the value of operational_state is Invalid Quote or Failed , the attestation has failed and the command displays output similar to the following: If the attestation fails, display more details in the verifier log: Additional resources For more information about IMA, see Enhancing security with the kernel integrity subsystem . 7.11. Configuring Keylime for measured boot attestation When you configure Keylime for measured boot attestation, Keylime checks that the boot process on the measured system corresponds to the state you defined. Prerequisites You have network access to the systems where the Keylime components are configured: Verifier For more information, see Section 7.2, "Deploying Keylime verifier from a package" . Registrar For more information, see Section 7.4, "Deploying Keylime registrar from a package" . Tenant For more information, see Section 7.8, "Deploying Keylime tenant from a package" . Agent For more information, see Section 7.9, "Deploying Keylime agent from a package" . Unified Extensible Firmware Interface (UEFI) is enabled on the agent system. Procedure On the monitored system where the Keylime agent is configured and running, install the python3-keylime package, which contains the create_mb_refstate script: On the monitored system, generate a policy from the measured boot log of the current state of the system by using the create_mb_refstate script: Replace <./measured_boot_reference_state.json> with the path where the script saves the generated policy. If your UEFI system does not have Secure Boot enabled, pass the --without-secureboot argument. Important The policy generated with the create_mb_refstate script is based on the current state of the system and is very strict. Any modifications of the system, including kernel updates and system updates, will change the boot process, and the system will fail the attestation. 
Copy the generated policy to the system where the keylime_tenant utility is configured, for example: On the system where the Keylime tenant is configured, provision the agent by using the keylime_tenant utility: Replace <agent_ip> with the agent's IP address. Replace <agent_uuid> with the agent's UUID. Replace <./measured_boot_reference_state.json> with the path to the measured boot policy. If you configure measured boot in combination with runtime monitoring, provide all the options from both use cases when entering the keylime_tenant -c add command. Note You can stop Keylime from monitoring a node by using the # keylime_tenant -c delete -t <agent_ip> -u <agent_uuid> command. You can modify the configuration of an already registered agent by using the keylime_tenant -c update command. Verification Reboot the monitored system and verify a successful attestation of the agent: Replace <agent_uuid> with the agent's UUID. If the value of operational_state is Get Quote and attestation_count is nonzero, the attestation of this agent is successful. If the value of operational_state is Invalid Quote or Failed , the attestation has failed and the command displays output similar to the following: If the attestation fails, display more details in the verifier log: 7.12. Keylime environment variables You can set Keylime environment variables to override the values from the configuration files, for example, when starting a container with the podman run command by using the -e option. The environment variables have the following syntax: Where: <SECTION> is the section of the Keylime configuration file. <ENVIRONMENT_VARIABLE> is the environment variable. <value> is the value to which you want to set the environment variable. For example, -e KEYLIME_VERIFIER_MAX_RETRIES=6 sets the max_retries configuration option in the [verifier] section to 6 . Verifier configuration Table 7.1. 
[verifier] section Configuration option Environment variable Default value auto_migrate_db KEYLIME_VERIFIER_AUTO_MIGRATE_DB True client_cert KEYLIME_VERIFIER_CLIENT_CERT default client_key_password KEYLIME_VERIFIER_CLIENT_KEY_PASSWORD client_key KEYLIME_VERIFIER_CLIENT_KEY default database_pool_sz_ovfl KEYLIME_VERIFIER_DATABASE_POOL_SZ_OVFL 5,10 database_url KEYLIME_VERIFIER_DATABASE_URL sqlite durable_attestation_import KEYLIME_VERIFIER_DURABLE_ATTESTATION_IMPORT enable_agent_mtls KEYLIME_VERIFIER_ENABLE_AGENT_MTLS True exponential_backoff KEYLIME_VERIFIER_EXPONENTIAL_BACKOFF True ignore_tomtou_errors KEYLIME_VERIFIER_IGNORE_TOMTOU_ERRORS False ip KEYLIME_VERIFIER_IP 127.0.0.1 max_retries KEYLIME_VERIFIER_MAX_RETRIES 5 max_upload_size KEYLIME_VERIFIER_MAX_UPLOAD_SIZE 104857600 measured_boot_evaluate KEYLIME_VERIFIER_MEASURED_BOOT_EVALUATE once measured_boot_imports KEYLIME_VERIFIER_MEASURED_BOOT_IMPORTS [] measured_boot_policy_name KEYLIME_VERIFIER_MEASURED_BOOT_POLICY_NAME accept-all num_workers KEYLIME_VERIFIER_NUM_WORKERS 0 persistent_store_encoding KEYLIME_VERIFIER_PERSISTENT_STORE_ENCODING persistent_store_format KEYLIME_VERIFIER_PERSISTENT_STORE_FORMAT json persistent_store_url KEYLIME_VERIFIER_PERSISTENT_STORE_URL port KEYLIME_VERIFIER_PORT 8881 quote_interval KEYLIME_VERIFIER_QUOTE_INTERVAL 2 registrar_ip KEYLIME_VERIFIER_REGISTRAR_IP 127.0.0.1 registrar_port KEYLIME_VERIFIER_REGISTRAR_PORT 8891 request_timeout KEYLIME_VERIFIER_REQUEST_TIMEOUT 60.0 require_allow_list_signatures KEYLIME_VERIFIER_REQUIRE_ALLOW_LIST_SIGNATURES True retry_interval KEYLIME_VERIFIER_RETRY_INTERVAL 2 server_cert KEYLIME_VERIFIER_SERVER_CERT default server_key_password KEYLIME_VERIFIER_SERVER_KEY_PASSWORD default server_key KEYLIME_VERIFIER_SERVER_KEY default severity_labels KEYLIME_VERIFIER_SEVERITY_LABELS ["info", "notice", "warning", "error", "critical", "alert", "emergency"] severity_policy KEYLIME_VERIFIER_SEVERITY_POLICY [{"event_id": ".*", "severity_label" : "emergency"}] signed_attributes KEYLIME_VERIFIER_SIGNED_ATTRIBUTES time_stamp_authority_certs_path KEYLIME_VERIFIER_TIME_STAMP_AUTHORITY_CERTS_PATH time_stamp_authority_url KEYLIME_VERIFIER_TIME_STAMP_AUTHORITY_URL tls_dir KEYLIME_VERIFIER_TLS_DIR generate transparency_log_sign_algo KEYLIME_VERIFIER_TRANSPARENCY_LOG_SIGN_ALGO sha256 transparency_log_url KEYLIME_VERIFIER_TRANSPARENCY_LOG_URL trusted_client_ca KEYLIME_VERIFIER_TRUSTED_CLIENT_CA default trusted_server_ca KEYLIME_VERIFIER_TRUSTED_SERVER_CA default uuid KEYLIME_VERIFIER_UUID default version KEYLIME_VERIFIER_VERSION 2.0 Table 7.2. [revocations] section Configuration option Environment variable Default value enabled_revocation_notifications KEYLIME_VERIFIER_REVOCATIONS_ENABLED_REVOCATION_NOTIFICATIONS [agent] webhook_url KEYLIME_VERIFIER_REVOCATIONS_WEBHOOK_URL Registrar configuration Table 7.3. 
[registrar] section Configuration option Environment variable Default value auto_migrate_db KEYLIME_REGISTRAR_AUTO_MIGRATE_DB True database_pool_sz_ovfl KEYLIME_REGISTRAR_DATABASE_POOL_SZ_OVFL 5,10 database_url KEYLIME_REGISTRAR_DATABASE_URL sqlite durable_attestation_import KEYLIME_REGISTRAR_DURABLE_ATTESTATION_IMPORT ip KEYLIME_REGISTRAR_IP 127.0.0.1 persistent_store_encoding KEYLIME_REGISTRAR_PERSISTENT_STORE_ENCODING persistent_store_format KEYLIME_REGISTRAR_PERSISTENT_STORE_FORMAT json persistent_store_url KEYLIME_REGISTRAR_PERSISTENT_STORE_URL port KEYLIME_REGISTRAR_PORT 8890 prov_db_filename KEYLIME_REGISTRAR_PROV_DB_FILENAME provider_reg_data.sqlite server_cert KEYLIME_REGISTRAR_SERVER_CERT default server_key_password KEYLIME_REGISTRAR_SERVER_KEY_PASSWORD default server_key KEYLIME_REGISTRAR_SERVER_KEY default signed_attributes KEYLIME_REGISTRAR_SIGNED_ATTRIBUTES ek_tpm,aik_tpm,ekcert time_stamp_authority_certs_path KEYLIME_REGISTRAR_TIME_STAMP_AUTHORITY_CERTS_PATH time_stamp_authority_url KEYLIME_REGISTRAR_TIME_STAMP_AUTHORITY_URL tls_dir KEYLIME_REGISTRAR_TLS_DIR default tls_port KEYLIME_REGISTRAR_TLS_PORT 8891 transparency_log_sign_algo KEYLIME_REGISTRAR_TRANSPARENCY_LOG_SIGN_ALGO sha256 transparency_log_url KEYLIME_REGISTRAR_TRANSPARENCY_LOG_URL trusted_client_ca KEYLIME_REGISTRAR_TRUSTED_CLIENT_CA default version KEYLIME_REGISTRAR_VERSION 2.0 Tenant configuration Table 7.4. [tenant] section Configuration option Environment variable Default value accept_tpm_encryption_algs KEYLIME_TENANT_ACCEPT_TPM_ENCRYPTION_ALGS ecc, rsa accept_tpm_hash_algs KEYLIME_TENANT_ACCEPT_TPM_HASH_ALGS sha512, sha384, sha256 accept_tpm_signing_algs KEYLIME_TENANT_ACCEPT_TPM_SIGNING_ALGS ecschnorr, rsassa client_cert KEYLIME_TENANT_CLIENT_CERT default client_key_password KEYLIME_TENANT_CLIENT_KEY_PASSWORD client_key KEYLIME_TENANT_CLIENT_KEY default ek_check_script KEYLIME_TENANT_EK_CHECK_SCRIPT enable_agent_mtls KEYLIME_TENANT_ENABLE_AGENT_MTLS True exponential_backoff KEYLIME_TENANT_EXPONENTIAL_BACKOFF True max_payload_size KEYLIME_TENANT_MAX_PAYLOAD_SIZE 1048576 max_retries KEYLIME_TENANT_MAX_RETRIES 5 mb_refstate KEYLIME_TENANT_MB_REFSTATE registrar_ip KEYLIME_TENANT_REGISTRAR_IP 127.0.0.1 registrar_port KEYLIME_TENANT_REGISTRAR_PORT 8891 request_timeout KEYLIME_TENANT_REQUEST_TIMEOUT 60 require_ek_cert KEYLIME_TENANT_REQUIRE_EK_CERT True retry_interval KEYLIME_TENANT_RETRY_INTERVAL 2 tls_dir KEYLIME_TENANT_TLS_DIR default tpm_cert_store KEYLIME_TENANT_TPM_CERT_STORE /var/lib/keylime/tpm_cert_store trusted_server_ca KEYLIME_TENANT_TRUSTED_SERVER_CA default verifier_ip KEYLIME_TENANT_VERIFIER_IP 127.0.0.1 verifier_port KEYLIME_TENANT_VERIFIER_PORT 8881 version KEYLIME_TENANT_VERSION 2.0 CA configuration Table 7.5. [ca] section Configuration option Environment variable Default value cert_bits KEYLIME_CA_CERT_BITS 2048 cert_ca_lifetime KEYLIME_CA_CERT_CA_LIFETIME 3650 cert_ca_name KEYLIME_CA_CERT_CA_NAME Keylime Certificate Authority cert_country KEYLIME_CA_CERT_COUNTRY US cert_crl_dist KEYLIME_CA_CERT_CRL_DIST http://localhost:38080/crl cert_lifetime KEYLIME_CA_CERT_LIFETIME 365 cert_locality KEYLIME_CA_CERT_LOCALITY Lexington cert_org_unit KEYLIME_CA_CERT_ORG_UNIT 53 cert_organization KEYLIME_CA_CERT_ORGANIZATION MITLL cert_state KEYLIME_CA_CERT_STATE MA password KEYLIME_CA_PASSWORD default version KEYLIME_CA_VERSION 2.0 Agent configuration Table 7.6. 
[agent] section Configuration option Environment variable Default value contact_ip KEYLIME_AGENT_CONTACT_IP 127.0.0.1 contact_port KEYLIME_AGENT_CONTACT_PORT 9002 dec_payload_file KEYLIME_AGENT_DEC_PAYLOAD_FILE decrypted_payload ek_handle KEYLIME_AGENT_EK_HANDLE generate enable_agent_mtls KEYLIME_AGENT_ENABLE_AGENT_MTLS true enable_insecure_payload KEYLIME_AGENT_ENABLE_INSECURE_PAYLOAD false enable_revocation_notifications KEYLIME_AGENT_ENABLE_REVOCATION_NOTIFICATIONS true enc_keyname KEYLIME_AGENT_ENC_KEYNAME derived_tci_key exponential_backoff KEYLIME_AGENT_EXPONENTIAL_BACKOFF true extract_payload_zip KEYLIME_AGENT_EXTRACT_PAYLOAD_ZIP true ip KEYLIME_AGENT_IP 127.0.0.1 max_retries KEYLIME_AGENT_MAX_RETRIES 4 measure_payload_pcr KEYLIME_AGENT_MEASURE_PAYLOAD_PCR -1 payload_script KEYLIME_AGENT_PAYLOAD_SCRIPT autorun.sh port KEYLIME_AGENT_PORT 9002 registrar_ip KEYLIME_AGENT_REGISTRAR_IP 127.0.0.1 registrar_port KEYLIME_AGENT_REGISTRAR_PORT 8890 retry_interval KEYLIME_AGENT_RETRY_INTERVAL 2 revocation_actions KEYLIME_AGENT_REVOCATION_ACTIONS [] revocation_cert KEYLIME_AGENT_REVOCATION_CERT default revocation_notification_ip KEYLIME_AGENT_REVOCATION_NOTIFICATION_IP 127.0.0.1 revocation_notification_port KEYLIME_AGENT_REVOCATION_NOTIFICATION_PORT 8992 run_as KEYLIME_AGENT_RUN_AS keylime:tss secure_size KEYLIME_AGENT_SECURE_SIZE 1m server_cert KEYLIME_AGENT_SERVER_CERT default server_key_password KEYLIME_AGENT_SERVER_KEY_PASSWORD server_key KEYLIME_AGENT_SERVER_KEY default tls_dir KEYLIME_AGENT_TLS_DIR default tpm_encryption_alg KEYLIME_AGENT_TPM_ENCRYPTION_ALG rsa tpm_hash_alg KEYLIME_AGENT_TPM_HASH_ALG sha256 tpm_ownerpassword KEYLIME_AGENT_TPM_OWNERPASSWORD tpm_signing_alg KEYLIME_AGENT_TPM_SIGNING_ALG rsassa trusted_client_ca KEYLIME_AGENT_TRUSTED_CLIENT_CA default uuid KEYLIME_AGENT_UUID d432fbb3-d2f1-4a97-9ef7-75bd81c00000 version KEYLIME_AGENT_VERSION 2.0 Logging configuration Table 7.7. [logging] section Configuration option Environment variable Default value version KEYLIME_LOGGING_VERSION 2.0 Table 7.8. [loggers] section Configuration option Environment variable Default value keys KEYLIME_LOGGING_LOGGERS_KEYS root,keylime Table 7.9. [handlers] section Configuration option Environment variable Default value keys KEYLIME_LOGGING_HANDLERS_KEYS consoleHandler Table 7.10. [formatters] section Configuration option Environment variable Default value keys KEYLIME_LOGGING_FORMATTERS_KEYS formatter Table 7.11. [formatter_formatter] section Configuration option Environment variable Default value datefmt KEYLIME_LOGGING_FORMATTER_FORMATTER_DATEFMT %Y-%m-%d %H:%M:%S format KEYLIME_LOGGING_FORMATTER_FORMATTER_FORMAT %(asctime)s.%(msecs)03d - %(name)s - %(levelname)s - %(message)s Table 7.12. [logger_root] section Configuration option Environment variable Default value handlers KEYLIME_LOGGING_LOGGER_ROOT_HANDLERS consoleHandler level KEYLIME_LOGGING_LOGGER_ROOT_LEVEL INFO Table 7.13. [handler_consoleHandler] section Configuration option Environment variable Default value args KEYLIME_LOGGING_HANDLER_CONSOLEHANDLER_ARGS (sys.stdout,) class KEYLIME_LOGGING_HANDLER_CONSOLEHANDLER_CLASS StreamHandler formatter KEYLIME_LOGGING_HANDLER_CONSOLEHANDLER_FORMATTER formatter level KEYLIME_LOGGING_HANDLER_CONSOLEHANDLER_LEVEL INFO Table 7.14. [logger_keylime] section Configuration option Environment variable Default value handlers KEYLIME_LOGGING_LOGGER_KEYLIME_HANDLERS level KEYLIME_LOGGING_LOGGER_KEYLIME_LEVEL INFO qualname KEYLIME_LOGGING_LOGGER_KEYLIME_QUALNAME keylime
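As an illustration of this syntax, the following is a minimal sketch of overriding two [verifier] options when starting the verifier container; the option values here are arbitrary examples, not recommendations:
# Sketch only: override max_retries and quote_interval in the [verifier]
# section through the container environment (values are illustrative).
podman run --name keylime-verifier -p 8881:8881 \
  -e KEYLIME_VERIFIER_MAX_RETRIES=6 \
  -e KEYLIME_VERIFIER_QUOTE_INTERVAL=5 \
  -d registry.access.redhat.com/rhel9/keylime-verifier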
[ "dnf install keylime-verifier", "[verifier] ip = <verifier_IP_address>", "[verifier] database_url = <protocol>://<name>:<password>@<ip_address_or_hostname>/<properties>", "[verifier] tls_dir = /var/lib/keylime/cv_ca server_key = </path/to/server_key> server_key_password = <passphrase1> server_cert = </path/to/server_cert> trusted_client_ca = [' </path/to/ca/cert1> ', ' </path/to/ca/cert2> '] client_key = </path/to/client_key> client_key_password = <passphrase2> client_cert = </path/to/client_cert> trusted_server_ca = [' </path/to/ca/cert3> ', ' </path/to/ca/cert4> ']", "firewall-cmd --add-port 8881/tcp firewall-cmd --runtime-to-permanent", "systemctl enable --now keylime_verifier", "systemctl status keylime_verifier ● keylime_verifier.service - The Keylime verifier Loaded: loaded (/usr/lib/systemd/system/keylime_verifier.service; disabled; vendor preset: disabled) Active: active (running) since Wed 2022-11-09 10:10:08 EST; 1min 45s ago", "dnf install keylime-verifier", "[verifier] ip = *", "[verifier] database_url = <protocol>://<name>:<password>@<ip_address_or_hostname>/<properties>", "[verifier] tls_dir = /var/lib/keylime/cv_ca server_key = </path/to/server_key> server_cert = </path/to/server_cert> trusted_client_ca = [' </path/to/ca/cert1> ', ' </path/to/ca/cert2> '] client_key = </path/to/client_key> client_cert = </path/to/client_cert> trusted_server_ca = [' </path/to/ca/cert3> ', ' </path/to/ca/cert4> ']", "firewall-cmd --add-port 8881/tcp firewall-cmd --runtime-to-permanent", "podman run --name keylime-verifier -p 8881:8881 -v /etc/keylime/verifier.conf.d:/etc/keylime/verifier.conf.d:Z -v /var/lib/keylime/cv_ca:/var/lib/keylime/cv_ca:Z -d -e KEYLIME_VERIFIER_SERVER_KEY_PASSWORD= <passphrase1> -e KEYLIME_VERIFIER_CLIENT_KEY_PASSWORD= <passphrase2> registry.access.redhat.com/rhel9/keylime-verifier", "podman ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 80b6b9dbf57c registry.access.redhat.com/rhel9/keylime-verifier:latest keylime_verifier 14 seconds ago Up 14 seconds 0.0.0.0:8881->8881/tcp keylime-verifier", "dnf install keylime-registrar", "[registrar] ip = <registrar_IP_address>", "[registrar] database_url = <protocol>://<name>:<password>@<ip_address_or_hostname>/<properties>", "[registrar] tls_dir = /var/lib/keylime/reg_ca server_key = </path/to/server_key> server_key_password = <passphrase1> server_cert = </path/to/server_cert> trusted_client_ca = [' </path/to/ca/cert1> ', ' </path/to/ca/cert2> ']", "firewall-cmd --add-port 8890/tcp --add-port 8891/tcp firewall-cmd --runtime-to-permanent", "systemctl enable --now keylime_registrar", "systemctl status keylime_registrar ● keylime_registrar.service - The Keylime registrar service Loaded: loaded (/usr/lib/systemd/system/keylime_registrar.service; disabled; vendor preset: disabled) Active: active (running) since Wed 2022-11-09 10:10:17 EST; 1min 42s ago", "dnf install keylime-registrar", "[registrar] ip = *", "[registrar] database_url = &lt;protocol&gt;://&lt;name&gt;:&lt;password&gt;@&lt;ip_address_or_hostname&gt;/&lt;properties&gt;", "[registrar] tls_dir = /var/lib/keylime/reg_ca server_key = &lt;/path/to/server_key&gt; server_cert = &lt;/path/to/server_cert&gt; trusted_client_ca = [' &lt;/path/to/ca/cert1&gt; ', ' &lt;/path/to/ca/cert2&gt; ']", "firewall-cmd --add-port 8890/tcp --add-port 8891/tcp firewall-cmd --runtime-to-permanent", "podman run --name keylime-registrar -p 8890:8890 -p 8891:8891 -v /etc/keylime/registrar.conf.d:/etc/keylime/registrar.conf.d:Z -v /var/lib/keylime/reg_ca:/var/lib/keylime/reg_ca:Z -d -e 
KEYLIME_REGISTRAR_SERVER_KEY_PASSWORD= <passphrase1> registry.access.redhat.com/rhel9/keylime-registrar", "podman ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 07d4b4bff1b6 localhost/keylime-registrar:latest keylime_registrar 12 seconds ago Up 12 seconds 0.0.0.0:8881->8881/tcp, 0.0.0.0:8891->8891/tcp keylime-registrar", "vi keylime-playbook.yml", "--- - name: Manage keylime servers hosts: all vars: keylime_server_verifier_ip: \"{{ ansible_host }}\" keylime_server_registrar_ip: \"{{ ansible_host }}\" keylime_server_verifier_tls_dir: <ver_tls_directory> keylime_server_verifier_server_cert: <ver_server_certfile> keylime_server_verifier_server_key: <ver_server_key> keylime_server_verifier_server_key_passphrase: <ver_server_key_passphrase> keylime_server_verifier_trusted_client_ca: <ver_trusted_client_ca_list> keylime_server_verifier_client_cert: <ver_client_certfile> keylime_server_verifier_client_key: <ver_client_key> keylime_server_verifier_client_key_passphrase: <ver_client_key_passphrase> keylime_server_verifier_trusted_server_ca: <ver_trusted_server_ca_list> keylime_server_registrar_tls_dir: <reg_tls_directory> keylime_server_registrar_server_cert: <reg_server_certfile> keylime_server_registrar_server_key: <reg_server_key> keylime_server_registrar_server_key_passphrase: <reg_server_key_passphrase> keylime_server_registrar_trusted_client_ca: <reg_trusted_client_ca_list> roles: - rhel-system-roles.keylime_server", "ansible-playbook <keylime-playbook.yml>", "systemctl status keylime_verifier ● keylime_verifier.service - The Keylime verifier Loaded: loaded (/usr/lib/systemd/system/keylime_verifier.service; disabled; vendor preset: disabled) Active: active (running) since Wed 2022-11-09 10:10:08 EST; 1min 45s ago", "systemctl status keylime_registrar ● keylime_registrar.service - The Keylime registrar service Loaded: loaded (/usr/lib/systemd/system/keylime_registrar.service; disabled; vendor preset: disabled) Active: active (running) since Wed 2022-11-09 10:10:17 EST; 1min 42s ago", "dnf install keylime-tenant", "[tenant] verifier_ip = <verifier_ip>", "[tenant] registrar_ip = <registrar_ip>", "[tenant] tls_dir = /var/lib/keylime/cv_ca client_key = tenant-key.pem client_key_password = <passphrase1> client_cert = tenant-cert.pem trusted_server_ca = [' </path/to/ca/cert> ']", "keylime_tenant -c cvstatus Reading configuration from ['/etc/keylime/logging.conf'] 2022-10-14 12:56:08.155 - keylime.tpm - INFO - TPM2-TOOLS Version: 5.2 Reading configuration from ['/etc/keylime/tenant.conf'] 2022-10-14 12:56:08.157 - keylime.tenant - INFO - Setting up client TLS 2022-10-14 12:56:08.158 - keylime.tenant - INFO - Using default client_cert option for tenant 2022-10-14 12:56:08.158 - keylime.tenant - INFO - Using default client_key option for tenant 2022-10-14 12:56:08.178 - keylime.tenant - INFO - TLS is enabled. 
2022-10-14 12:56:08.178 - keylime.tenant - WARNING - Using default UUID d432fbb3-d2f1-4a97-9ef7-75bd81c00000 2022-10-14 12:56:08.221 - keylime.tenant - INFO - Verifier at 127.0.0.1 with Port 8881 does not have agent d432fbb3-d2f1-4a97-9ef7-75bd81c00000.", "keylime_tenant -c regstatus Reading configuration from ['/etc/keylime/logging.conf'] 2022-10-14 12:56:02.114 - keylime.tpm - INFO - TPM2-TOOLS Version: 5.2 Reading configuration from ['/etc/keylime/tenant.conf'] 2022-10-14 12:56:02.116 - keylime.tenant - INFO - Setting up client TLS 2022-10-14 12:56:02.116 - keylime.tenant - INFO - Using default client_cert option for tenant 2022-10-14 12:56:02.116 - keylime.tenant - INFO - Using default client_key option for tenant 2022-10-14 12:56:02.137 - keylime.tenant - INFO - TLS is enabled. 2022-10-14 12:56:02.137 - keylime.tenant - WARNING - Using default UUID d432fbb3-d2f1-4a97-9ef7-75bd81c00000 2022-10-14 12:56:02.171 - keylime.registrar_client - CRITICAL - Error: could not get agent d432fbb3-d2f1-4a97-9ef7-75bd81c00000 data from Registrar Server: 404 2022-10-14 12:56:02.172 - keylime.registrar_client - CRITICAL - Response code 404: agent d432fbb3-d2f1-4a97-9ef7-75bd81c00000 not found 2022-10-14 12:56:02.172 - keylime.tenant - INFO - Agent d432fbb3-d2f1-4a97-9ef7-75bd81c00000 does not exist on the registrar. Please register the agent with the registrar. 2022-10-14 12:56:02.172 - keylime.tenant - INFO - {\"code\": 404, \"status\": \"Agent d432fbb3-d2f1-4a97-9ef7-75bd81c00000 does not exist on registrar 127.0.0.1 port 8891.\", \"results\": {}}", "dnf install keylime-agent", "[agent] ip = ' <agent_ip> '", "[agent] registrar_ip = ' <registrar_IP_address> '", "[agent] uuid = ' <agent_UUID> '", "[agent] server_key = ' </path/to/server_key> ' server_key_password = ' <passphrase1> ' server_cert = ' </path/to/server_cert> ' trusted_client_ca = '[ </path/to/ca/cert3> , </path/to/ca/cert4> ]'", "firewall-cmd --add-port 9002/tcp firewall-cmd --runtime-to-permanent", "systemctl enable --now keylime_agent", "keylime_tenant -c regstatus --uuid <agent_uuid> Reading configuration from ['/etc/keylime/logging.conf'] ==\\n-----END CERTIFICATE-----\\n\", \"ip\": \"127.0.0.1\", \"port\": 9002, \"regcount\": 1, \"operational_state\": \"Registered\"}}}", "PROC_SUPER_MAGIC = 0x9fa0 dont_measure fsmagic=0x9fa0 SYSFS_MAGIC = 0x62656572 dont_measure fsmagic=0x62656572 DEBUGFS_MAGIC = 0x64626720 dont_measure fsmagic=0x64626720 TMPFS_MAGIC = 0x01021994 dont_measure fsmagic=0x1021994 RAMFS_MAGIC dont_measure fsmagic=0x858458f6 DEVPTS_SUPER_MAGIC=0x1cd1 dont_measure fsmagic=0x1cd1 BINFMTFS_MAGIC=0x42494e4d dont_measure fsmagic=0x42494e4d SECURITYFS_MAGIC=0x73636673 dont_measure fsmagic=0x73636673 SELINUX_MAGIC=0xf97cff8c dont_measure fsmagic=0xf97cff8c SMACK_MAGIC=0x43415d53 dont_measure fsmagic=0x43415d53 NSFS_MAGIC=0x6e736673 dont_measure fsmagic=0x6e736673 EFIVARFS_MAGIC dont_measure fsmagic=0xde5e81e4 CGROUP_SUPER_MAGIC=0x27e0eb dont_measure fsmagic=0x27e0eb CGROUP2_SUPER_MAGIC=0x63677270 dont_measure fsmagic=0x63677270 OVERLAYFS_MAGIC when containers are used we almost always want to ignore them dont_measure fsmagic=0x794c7630 MEASUREMENTS measure func=BPRM_CHECK measure func=FILE_MMAP mask=MAY_EXEC measure func=MODULE_CHECK uid=0", "grubby --update-kernel DEFAULT --args 'ima_appraise=fix ima_canonical_fmt ima_policy=tcb ima_template=ima-ng'", "systemctl status keylime_agent ● keylime_agent.service - The Keylime compute agent Loaded: loaded (/usr/lib/systemd/system/keylime_agent.service; enabled; preset: disabled) 
Active: active (running) since", "/usr/share/keylime/scripts/create_allowlist.sh -o <allowlist.txt> -h sha256sum", "scp <allowlist.txt> root@ <tenant . ip> :/root/ <allowlist.txt>", "keylime_create_policy -a <allowlist.txt> -e <excludelist.txt> -o <policy.json>", "keylime_tenant -c add -t <agent_ip> -u <agent_uuid> --runtime-policy <policy.json> --cert default", "keylime_tenant -c add -t 127.0.0.1 -u d432fbb3-d2f1-4a97-9ef7-75bd81c00000 --runtime-policy policy.json --cert default", "keylime_tenant -c cvstatus -u <agent.uuid> {\" <agent.uuid> \": {\"operational_state\": \"Get Quote\"...\"attestation_count\": 5", "{\" <agent.uuid> \": {\"operational_state\": \"Invalid Quote\", ... \"ima.validation.ima-ng.not_in_allowlist\", \"attestation_count\": 5, \"last_received_quote\": 1684150329, \"last_successful_attestation\": 1684150327}}", "journalctl -u keylime_verifier keylime.tpm - INFO - Checking IMA measurement list keylime.ima - WARNING - File not found in allowlist: /root/bad-script.sh keylime.ima - ERROR - IMA ERRORS: template-hash 0 fnf 1 hash 0 good 781 keylime.cloudverifier - WARNING - agent D432FBB3-D2F1-4A97-9EF7-75BD81C00000 failed, stopping polling", "dnf -y install python3-keylime", "/usr/share/keylime/scripts/create_mb_refstate /sys/kernel/security/tpm0/binary_bios_measurements <./measured_boot_reference_state.json>", "scp root@ <agent_ip> : <./measured_boot_reference_state.json> <./measured_boot_reference_state.json>", "keylime_tenant -c add -t <agent_ip> -u <agent_uuid> --mb_refstate <./measured_boot_reference_state.json> --cert default", "keylime_tenant -c cvstatus -u <agent_uuid> {\" <agent.uuid> \": {\"operational_state\": \"Get Quote\"...\"attestation_count\": 5", "{\" <agent.uuid> \": {\"operational_state\": \"Invalid Quote\", ... \"ima.validation.ima-ng.not_in_allowlist\", \"attestation_count\": 5, \"last_received_quote\": 1684150329, \"last_successful_attestation\": 1684150327}}", "journalctl -u keylime_verifier {\"d432fbb3-d2f1-4a97-9ef7-75bd81c00000\": {\"operational_state\": \"Tenant Quote Failed\", ... \"last_event_id\": \"measured_boot.invalid_pcr_0\", \"attestation_count\": 0, \"last_received_quote\": 1684487093, \"last_successful_attestation\": 0}}", "KEYLIME _<SECTION>_<ENVIRONMENT_VARIABLE> = <value>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/security_hardening/assembly_ensuring-system-integrity-with-keylime_security-hardening
18.13. Creating Tunnels
18.13. Creating Tunnels This section demonstrates how to implement different tunneling scenarios. 18.13.1. Creating Multicast Tunnels A multicast group is set up to represent a virtual network. Any guest virtual machines whose network devices are in the same multicast group can talk to each other, even across host physical machines. This mode is also available to unprivileged users. There is no default DNS or DHCP support and no outgoing network access. To provide outgoing network access, one of the guest virtual machines should have a second NIC which is connected to one of the first four network types, thus providing appropriate routing. The multicast protocol is compatible with the guest virtual machine user mode. Note that the source address that you provide must be from the multicast address block. To create a multicast tunnel, add the following XML details to the <devices> element: ... <devices> <interface type='mcast'> <mac address='52:54:00:6d:90:01'/> <source address='230.0.0.1' port='5558'/> </interface> </devices> ... Figure 18.28. Multicast tunnel XML example
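As a hedged end-to-end sketch, you could save this interface definition to a file and attach it to an existing guest with virsh ; the guest name guest1 and the file name mcast-if.xml are placeholders, not values from this guide:
# Hypothetical guest and file names; adjust for your environment.
cat > mcast-if.xml <<'EOF'
<interface type='mcast'>
  <mac address='52:54:00:6d:90:01'/>
  <source address='230.0.0.1' port='5558'/>
</interface>
EOF
# Persist the device in the guest configuration; it is used at next start.
virsh attach-device guest1 mcast-if.xml --config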
[ "<devices> <interface type='mcast'> <mac address='52:54:00:6d:90:01'> <source address='230.0.0.1' port='5558'/> </interface> </devices>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-virtual_networking-creating_tunnels
Chapter 16. Enabling Red Hat build of Keycloak Metrics
Chapter 16. Enabling Red Hat build of Keycloak Metrics Red Hat build of Keycloak has built-in support for metrics. This chapter describes how to enable and configure server metrics. 16.1. Enabling Metrics It is possible to enable metrics using the build-time option metrics-enabled : bin/kc.[sh|bat] start --metrics-enabled=true 16.2. Querying Metrics Red Hat build of Keycloak exposes metrics at the following endpoint: /metrics The response from the endpoint uses an application/openmetrics-text content type and is based on the Prometheus (OpenMetrics) text format. The snippet below is an example of a response: 16.3. Available Metrics The table below summarizes the available metrics groups: Metric Description System A set of system-level metrics related to CPU and memory usage. JVM A set of metrics from the Java Virtual Machine (JVM) related to GC and heap. Database A set of metrics from the database connection pool, if using a database. Cache A set of metrics from Infinispan caches. See Configuring distributed caches for more details. 16.4. Relevant options Value metrics-enabled 🛠 If the server should expose metrics. If enabled, metrics are available at the /metrics endpoint. CLI: --metrics-enabled Env: KC_METRICS_ENABLED true , false (default)
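A quick way to confirm that the endpoint is serving data is to request it directly; this sketch assumes the server listens on localhost:8080 , so adjust the host and port for your deployment:
# Fetch the first few lines of the metrics output.
curl -s http://localhost:8080/metrics | head -n 10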
[ "bin/kc.[sh|bat] start --metrics-enabled=true", "HELP base_gc_total Displays the total number of collections that have occurred. This attribute lists -1 if the collection count is undefined for this collector. TYPE base_gc_total counter base_gc_total{name=\"G1 Young Generation\",} 14.0 HELP jvm_memory_usage_after_gc_percent The percentage of long-lived heap pool used after the last GC event, in the range [0..1] TYPE jvm_memory_usage_after_gc_percent gauge jvm_memory_usage_after_gc_percent{area=\"heap\",pool=\"long-lived\",} 0.0 HELP jvm_threads_peak_threads The peak live thread count since the Java virtual machine started or peak was reset TYPE jvm_threads_peak_threads gauge jvm_threads_peak_threads 113.0 HELP agroal_active_count Number of active connections. These connections are in use and not available to be acquired. TYPE agroal_active_count gauge agroal_active_count{datasource=\"default\",} 0.0 HELP base_memory_maxHeap_bytes Displays the maximum amount of memory, in bytes, that can be used for memory management. TYPE base_memory_maxHeap_bytes gauge base_memory_maxHeap_bytes 1.6781410304E10 HELP process_start_time_seconds Start time of the process since unix epoch. TYPE process_start_time_seconds gauge process_start_time_seconds 1.675188449054E9 HELP system_load_average_1m The sum of the number of runnable entities queued to available processors and the number of runnable entities running on the available processors averaged over a period of time TYPE system_load_average_1m gauge system_load_average_1m 4.005859375" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/server_guide/configuration-metrics-
Chapter 90. workbook
Chapter 90. workbook This chapter describes the commands under the workbook command. 90.1. workbook create Create new workbook. Usage: Table 90.1. Positional arguments Value Summary definition Workbook definition file Table 90.2. Command arguments Value Summary -h, --help Show this help message and exit --public With this flag workbook will be marked as "public". --namespace [NAMESPACE] Namespace to create the workbook within. Table 90.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 90.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 90.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 90.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 90.2. workbook definition show Show workbook definition. Usage: Table 90.7. Positional arguments Value Summary name Workbook name Table 90.8. Command arguments Value Summary -h, --help Show this help message and exit 90.3. workbook delete Delete workbook. Usage: Table 90.9. Positional arguments Value Summary workbook Name of workbook(s). Table 90.10. Command arguments Value Summary -h, --help Show this help message and exit --namespace [NAMESPACE] Namespace to delete the workbook(s) from. 90.4. workbook list List all workbooks. Usage: Table 90.11. Command arguments Value Summary -h, --help Show this help message and exit --marker [MARKER] The last execution uuid of the page, displays list of executions after "marker". --limit [LIMIT] Maximum number of entries to return in a single result. --sort_keys [SORT_KEYS] Comma-separated list of sort keys to sort results by. Default: created_at. Example: mistral execution-list --sort_keys=id,description --sort_dirs [SORT_DIRS] Comma-separated list of sort directions. default: asc. Example: mistral execution-list --sort_keys=id,description --sort_dirs=asc,desc --filter FILTERS Filters. can be repeated. Table 90.12. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 90.13. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 90.14. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 90.15. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. 
Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 90.5. workbook show Show specific workbook. Usage: Table 90.16. Positional arguments Value Summary workbook Workbook name Table 90.17. Command arguments Value Summary -h, --help Show this help message and exit --namespace [NAMESPACE] Namespace to get the workbook from. Table 90.18. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 90.19. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 90.20. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 90.21. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 90.6. workbook update Update workbook. Usage: Table 90.22. Positional arguments Value Summary definition Workbook definition file Table 90.23. Command arguments Value Summary -h, --help Show this help message and exit --namespace [NAMESPACE] Namespace to update the workbook in. --public With this flag workbook will be marked as "public". Table 90.24. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 90.25. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 90.26. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 90.27. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 90.7. workbook validate Validate workbook. Usage: Table 90.28. Positional arguments Value Summary definition Workbook definition file Table 90.29. Command arguments Value Summary -h, --help Show this help message and exit Table 90.30. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 90.31. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 90.32. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 90.33. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. 
Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
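Putting the commands above together, a typical workflow validates a definition file before creating and listing the workbook; the file name my_workbook.yaml and the namespace demo are placeholders:
# Validate, create, and list a workbook (placeholder names).
openstack workbook validate my_workbook.yaml
openstack workbook create my_workbook.yaml --namespace demo
openstack workbook list -f yaml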
[ "openstack workbook create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--public] [--namespace [NAMESPACE]] definition", "openstack workbook definition show [-h] name", "openstack workbook delete [-h] [--namespace [NAMESPACE]] workbook [workbook ...]", "openstack workbook list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--marker [MARKER]] [--limit [LIMIT]] [--sort_keys [SORT_KEYS]] [--sort_dirs [SORT_DIRS]] [--filter FILTERS]", "openstack workbook show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--namespace [NAMESPACE]] workbook", "openstack workbook update [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--namespace [NAMESPACE]] [--public] definition", "openstack workbook validate [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] definition" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/command_line_interface_reference/workbook
Chapter 20. Configuring artifact types
Chapter 20. Configuring artifact types As a Red Hat Quay administrator, you can configure Open Container Initiative (OCI) artifact types and other experimental artifact types through the FEATURE_GENERAL_OCI_SUPPORT , ALLOWED_OCI_ARTIFACT_TYPES , and IGNORE_UNKNOWN_MEDIATYPES configuration fields. The following Open Container Initiative (OCI) artifact types are built into Red Hat Quay by default and are enabled through the FEATURE_GENERAL_OCI_SUPPORT configuration field: Field Media Type Supported content types Helm application/vnd.cncf.helm.config.v1+json application/tar+gzip , application/vnd.cncf.helm.chart.content.v1.tar+gzip Cosign application/vnd.oci.image.config.v1+json application/vnd.dev.cosign.simplesigning.v1+json , application/vnd.dsse.envelope.v1+json SPDX application/vnd.oci.image.config.v1+json text/spdx , text/spdx+xml , text/spdx+json Syft application/vnd.oci.image.config.v1+json application/vnd.syft+json CycloneDX application/vnd.oci.image.config.v1+json application/vnd.cyclonedx , application/vnd.cyclonedx+xml , application/vnd.cyclonedx+json In-toto application/vnd.oci.image.config.v1+json application/vnd.in-toto+json Unknown application/vnd.cncf.openpolicyagent.policy.layer.v1+rego application/vnd.cncf.openpolicyagent.policy.layer.v1+rego , application/vnd.cncf.openpolicyagent.data.layer.v1+json Additionally, Red Hat Quay uses the ZStandard ( zstd ) compression algorithm to reduce the size of container images and other related artifacts. Zstd helps optimize storage and improve transfer speeds when working with container images. Use the following procedures to configure support for the default and experimental OCI media types. 20.1. Configuring OCI artifact types Use the following procedure to configure artifact types that are embedded in Red Hat Quay by default. Prerequisites You have Red Hat Quay administrator privileges. Procedure In your Red Hat Quay config.yaml file, enable general OCI support by setting the FEATURE_GENERAL_OCI_SUPPORT field to true . For example: FEATURE_GENERAL_OCI_SUPPORT: true With FEATURE_GENERAL_OCI_SUPPORT set to true, Red Hat Quay users can now push and pull artifacts of the default types to their Red Hat Quay deployment. 20.2. Configuring additional artifact types Use the following procedure to configure additional, and specific, artifact types for your Red Hat Quay deployment. Note Using the ALLOWED_OCI_ARTIFACT_TYPES configuration field, you can restrict which artifact types are accepted by your Red Hat Quay registry. If you want your Red Hat Quay deployment to accept all artifact types, see "Configuring unknown media types". Prerequisites You have Red Hat Quay administrator privileges. Procedure Add the ALLOWED_OCI_ARTIFACT_TYPES configuration field, along with the configuration and layer types: FEATURE_GENERAL_OCI_SUPPORT: true ALLOWED_OCI_ARTIFACT_TYPES: <oci config type 1>: - <oci layer type 1> - <oci layer type 2> <oci config type 2>: - <oci layer type 3> - <oci layer type 4> For example, you can add Singularity Image Format (SIF) support by adding the following to your config.yaml file: ALLOWED_OCI_ARTIFACT_TYPES: application/vnd.oci.image.config.v1+json: - application/vnd.dev.cosign.simplesigning.v1+json application/vnd.cncf.helm.config.v1+json: - application/tar+gzip application/vnd.sylabs.sif.config.v1+json: - application/vnd.sylabs.sif.layer.v1+tar Note When adding OCI artifact types that are not configured by default, Red Hat Quay administrators will also need to manually add support for Cosign and Helm if desired. 
Now, users can tag SIF images for their Red Hat Quay registry. 20.3. Configuring unknown media types Use the following procedure to enable all artifact types for your Red Hat Quay deployment. Note With this field enabled, your Red Hat Quay deployment accepts all artifact types. Prerequisites You have Red Hat Quay administrator privileges. Procedure Add the IGNORE_UNKNOWN_MEDIATYPES configuration field to your Red Hat Quay config.yaml file: IGNORE_UNKNOWN_MEDIATYPES: true With this field enabled, your Red Hat Quay deployment accepts unknown and unrecognized artifact types.
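One way to exercise these settings is to push an arbitrary artifact with a third-party OCI client such as oras , which is not part of Red Hat Quay and is shown here only as a hedged sketch; the registry host, repository, and media types are placeholders:
# Requires the oras client and a prior 'oras login quay.example.com'.
oras push quay.example.com/myorg/myrepo:v1 \
  --artifact-type application/vnd.example.config.v1+json \
  report.json:application/vnd.example.layer.v1+json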
[ "FEATURE_GENERAL_OCI_SUPPORT: true", "FEATURE_GENERAL_OCI_SUPPORT: true ALLOWED_OCI_ARTIFACT_TYPES: <oci config type 1>: - <oci layer type 1> - <oci layer type 2> <oci config type 2>: - <oci layer type 3> - <oci layer type 4>", "ALLOWED_OCI_ARTIFACT_TYPES: application/vnd.oci.image.config.v1+json: - application/vnd.dev.cosign.simplesigning.v1+json application/vnd.cncf.helm.config.v1+json: - application/tar+gzip application/vnd.sylabs.sif.config.v1+json: - application/vnd.sylabs.sif.layer.v1+tar", "IGNORE_UNKNOWN_MEDIATYPES: true" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/manage_red_hat_quay/supported-oci-media-types
Chapter 4. Configuring user preferences for email notifications
Chapter 4. Configuring user preferences for email notifications Each user in the Red Hat Hybrid Cloud Console must opt in to receive notification emails about events. You can select the services from which to receive notifications as well as the frequency. Important If you select Instant notification for any service, you might receive a very large number of emails. Prerequisites You are logged in to the Red Hat Hybrid Cloud Console. You have configured relevant events in the console. A Notifications administrator or Organization Administrator has configured behavior groups to receive event notifications. Procedure In the Hybrid Cloud Console, navigate to Settings > Notifications > My User Preferences . The My Notifications page appears. On the My Notifications page, the available services are grouped by category, for example Red Hat Enterprise Linux. Select the service you want to configure your notifications for, for example, Advisor or Inventory. A list of the available event notifications for the selected service opens. At the top of the list, click Select all to enable all notifications for the service, or select one of the following options for each event listed: Note Not all options are available for all services. You cannot disable OpenShift notifications on this page because your cluster is managed by Red Hat. These notifications are the primary way that Red Hat Site Reliability Engineering (SRE) will contact you to inform you about cluster problems and request actions you must take to resolve them. Cluster owners cannot unsubscribe from email notifications. If you are not a cluster owner and you do not want to receive notification emails, you can ask your cluster owner or administrator to remove you from the list of cluster notification contacts as described in Removing notification contacts from your cluster . Daily digest : Receive a daily summary of triggered application events that occur in a 24-hour time frame. Instant notification : Receive an email immediately for each triggered application event. Important If you select Instant notification for any service, you might receive a very large number of emails. Weekly report : Receive an email that contains the Advisor Weekly Report. Update your information and then click Save . Email notifications are delivered in the format and frequency that you selected. Note If you decide to stop receiving notifications, select Deselect all or uncheck the boxes for the events you do not want to be notified about, and then click Save . You will no longer receive any email notifications unless you return to this screen and enable them once again. 4.1. Customizing the daily digest email notification time You can choose to receive a summary of triggered application events occurring in your Red Hat Hybrid Cloud Console services in a daily digest email, instead of being notified as events occur. By default, the daily digest is sent at 00:00 Coordinated Universal Time (UTC). Organization Administrators and Notifications administrators can customize the time the daily digest is sent. The daily digest provides a snapshot of events occurring over a 24-hour time frame, starting from the time you specify in the notifications settings. Prerequisites You are logged in to the Hybrid Cloud Console as an Organization Administrator or as a user with Notifications administrator permissions. Procedure In the Hybrid Cloud Console, navigate to Settings > Notifications > My User Preferences . The My Notifications page appears. 
On the My Notifications page, click Edit time settings . Select Custom time and then specify the time and time zone to send your account's daily digest email. Click Save . The daily digest email is sent each day at the time you selected. Note After you save a new time, the Hybrid Cloud Console converts the new time to the UTC time zone.
null
https://docs.redhat.com/en/documentation/red_hat_hybrid_cloud_console/1-latest/html/configuring_notifications_on_the_red_hat_hybrid_cloud_console_with_fedramp/proc-notif-config-user-preferences_notifications
Chapter 6. Verifying HCI configuration
Chapter 6. Verifying HCI configuration After deployment is complete, verify the HCI environment is properly configured. 6.1. Verifying HCI configuration After the deployment of the HCI environment, verify that the deployment was successful with the configuration specified. Procedure Start a ceph shell. Confirm NUMA and memory target configuration: Confirm specific OSD configuration: Confirm specific OSD backfill configuration: Confirm the reserved_host_memory_mb configuration on the Compute node.
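The procedure above assumes that you run the ceph commands from inside a Ceph shell; as a sketch, on a cephadm-deployed cluster you would typically enter it from a node that hosts Ceph containers:
# Enter the containerized Ceph environment before running the
# 'ceph config' checks shown in this procedure.
sudo cephadm shell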
[ "ceph config dump | grep numa osd advanced osd_numa_auto_affinity true ceph config dump | grep autotune osd advanced osd_memory_target_autotune true ceph config get mgr mgr/cephadm/autotune_memory_target_ratio 0.200000", "ceph config get osd.11 osd_memory_target 4294967296 ceph config get osd.11 osd_memory_target_autotune true ceph config get osd.11 osd_numa_auto_affinity true", "ceph config get osd.11 osd_recovery_op_priority 3 ceph config get osd.11 osd_max_backfills 1 ceph config get osd.11 osd_recovery_max_active_hdd 3 ceph config get osd.11 osd_recovery_max_active_ssd 10", "sudo podman exec -ti nova_compute /bin/bash bash-5.1USD grep reserved_host_memory_mb /etc/nova/nova.conf" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/deploying_a_hyperconverged_infrastructure/assembly_verify-hci-configuration
Red Hat Data Grid
Red Hat Data Grid Data Grid is a high-performance, distributed in-memory data store. Schemaless data structure Flexibility to store different objects as key-value pairs. Grid-based data storage Designed to distribute and replicate data across clusters. Elastic scaling Dynamically adjust the number of nodes to meet demand without service disruption. Data interoperability Store, retrieve, and query data in the grid from different endpoints.
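As an illustration of the endpoint interoperability described above, a minimal sketch using the Data Grid REST API (the cache name mycache, the credentials, and the host are hypothetical; 11222 is the default single-port endpoint, and the cache must already exist):

# Store a key-value pair in a cache named "mycache" over REST.
curl -u admin:password -X POST http://localhost:11222/rest/v2/caches/mycache/greeting -d 'hello world'

# Read the value back from the grid.
curl -u admin:password http://localhost:11222/rest/v2/caches/mycache/greeting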
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/configuring_data_grid_caches/red-hat-data-grid
Chapter 3. Using the OpenShift Container Platform dashboard to get cluster information
Chapter 3. Using the OpenShift Container Platform dashboard to get cluster information The OpenShift Container Platform web console captures high-level information about the cluster. 3.1. About the OpenShift Container Platform dashboards page Access the OpenShift Container Platform dashboard, which captures high-level information about the cluster, by navigating to Home > Overview from the OpenShift Container Platform web console. The OpenShift Container Platform dashboard provides various cluster information, captured in individual dashboard cards. The OpenShift Container Platform dashboard consists of the following cards: Details provides a brief overview of informational cluster details. Statuses include ok , error , warning , in progress , and unknown . Resources can add custom status names. Cluster ID Provider Version Cluster Inventory details the number of resources and associated statuses. It is helpful when intervention is required to resolve problems, including information about: Number of nodes Number of pods Persistent storage volume claims Bare metal hosts in the cluster, listed according to their state (only available in a metal3 environment) Status helps administrators understand how cluster resources are consumed. Click on a resource to jump to a detailed page listing pods and nodes that consume the largest amount of the specified cluster resource (CPU, memory, or storage). Cluster Utilization shows the capacity of various resources over a specified period of time, to help administrators understand the scale and frequency of high resource consumption, including information about: CPU time Memory allocation Storage consumed Network resources consumed Pod count (a rough command-line counterpart is sketched below) Activity lists messages related to recent activity in the cluster, such as pod creation or virtual machine migration to another host. 3.2. Recognizing resource and project limits and quotas You can view a graphical representation of available resources in the Topology view of the web console Developer perspective. If a resource has a message about resource limitations or quotas being reached, a yellow border appears around the resource name. Click the resource to open a side panel to see the message. If the Topology view has been zoomed out, a yellow dot indicates that a message is available. If you are using List View from the View Shortcuts menu, resources appear as a list. The Alerts column indicates if a message is available.
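As referenced in the Cluster Utilization card description, a minimal command-line sketch for inspecting similar consumption data (generic oc commands that require cluster metrics to be available, not a documented equivalent of the dashboard card):

# Per-node CPU and memory consumption, similar to the Cluster Utilization card.
oc adm top nodes

# Pod-level consumption across all namespaces, similar to drilling into a resource.
oc adm top pods --all-namespaces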
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/web_console/using-dashboard-to-get-cluster-info
Chapter 3. Hardware requirements for NFV
Chapter 3. Hardware requirements for NFV This section describes the hardware requirements for NFV. For a complete list of the certified hardware for Red Hat OpenStack Platform, see Red Hat OpenStack Platform certified hardware . 3.1. Tested NICs for NFV For a list of tested NICs for NFV, see the Red Hat Knowledgebase solution Network Adapter Fast Datapath Feature Support Matrix . Use the default driver for the supported NIC, unless you are configuring OVS-DPDK on NVIDIA (Mellanox) network interfaces. For NVIDIA network interfaces, you must set the corresponding kernel driver in the j2 network configuration template. Example In this example, the mlx5_core driver is set for the Mellanox ConnectX-5 network interface: 3.2. Troubleshooting hardware offload In a Red Hat OpenStack Platform (RHOSP) 16.2 deployment, OVS Hardware Offload might not offload flows for VMs with switchdev -capable ports and Mellanox ConnectX-5 NICs. To troubleshoot and configure offload flows in this scenario, disable the ESWITCH_IPV4_TTL_MODIFY_ENABLE Mellanox firmware parameter. For more troubleshooting information about OVS Hardware Offload in RHOSP 16.2, see the Red Hat Knowledgebase solution OVS Hardware Offload with Mellanox NIC in OpenStack Platform 16.2 . Procedure Log in to the Compute nodes in your RHOSP deployment that have Mellanox NICs that you want to configure. Use the mstflint utility to query the ESWITCH_IPV4_TTL_MODIFY_ENABLE Mellanox firmware parameter. If the ESWITCH_IPV4_TTL_MODIFY_ENABLE parameter is enabled and set to 1 , then set the value to 0 to disable it. Reboot the node. (A sketch for confirming that flows are offloaded after the reboot follows the command listing below.) 3.3. Discovering your NUMA node topology When you plan your deployment, you must understand the NUMA topology of your Compute node to partition the CPU and memory resources for optimum performance. To determine the NUMA information, perform one of the following tasks: Enable hardware introspection to retrieve this information from bare-metal nodes. Log on to each bare-metal node to manually collect the information. Note You must install and configure the undercloud before you can retrieve NUMA information through hardware introspection. For more information about undercloud configuration, see the Director Installation and Usage guide. 3.4. Retrieving hardware introspection details The Bare Metal service hardware-inspection-extras feature is enabled by default, and you can use it to retrieve hardware details for overcloud configuration. For more information about the inspection_extras parameter in the undercloud.conf file, see Configuring director . For example, the numa_topology collector is part of the hardware-inspection extras and includes the following information for each NUMA node: RAM (in kilobytes) Physical CPU cores and their sibling threads NICs associated with the NUMA node Procedure To retrieve the information listed above, replace <UUID> with the UUID of the bare-metal node in the following command: The following example shows the retrieved NUMA information for a bare-metal node: 3.5. NFV BIOS settings The following table describes the required BIOS settings for NFV: Note You must enable SR-IOV global and NIC settings in the BIOS, or your Red Hat OpenStack Platform (RHOSP) deployment with SR-IOV Compute nodes will fail. Table 3.1. BIOS Settings Parameter Setting C3 Power State Disabled. C6 Power State Disabled. MLC Streamer Enabled. MLC Spatial Prefetcher Enabled. DCU Data Prefetcher Enabled. DCA Enabled. CPU Power and Performance Performance. Memory RAS and Performance Config > NUMA Optimized Enabled.
Turbo Boost Disabled in NFV deployments that require deterministic performance. Enabled in all other scenarios. VT-d Enabled for Intel cards if VFIO functionality is needed. NUMA memory interleave Disabled. On processors that use the intel_idle driver, Red Hat Enterprise Linux can ignore BIOS settings and re-enable the processor C-state. You can disable intel_idle and instead use the acpi_idle driver by specifying the key-value pair intel_idle.max_cstate=0 on the kernel boot command line. Confirm that the processor is using the acpi_idle driver by checking the contents of current_driver : Note You will experience some latency after changing drivers, because it takes time for the Tuned daemon to start. However, after Tuned loads, the processor does not use the deeper C-state.
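The intel_idle.max_cstate=0 argument goes on the kernel boot command line. A minimal sketch of one way to add it on a RHEL-based node using grubby (the exact mechanism in a director-deployed overcloud may differ, for example through a KernelArgs parameter in your templates, so treat this as illustrative):

# Append the argument to all installed kernel entries, then reboot.
sudo grubby --update-kernel=ALL --args="intel_idle.max_cstate=0"
sudo reboot

# After the reboot, confirm that the acpi_idle driver is in use.
cat /sys/devices/system/cpu/cpuidle/current_driver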
[ "members - type: ovs_dpdk_port name: dpdk0 driver: mlx5_core members: - type: interface name: enp3s0f0", "yum install -y mstflint mstconfig -d <PF PCI BDF> q ESWITCH_IPV4_TTL_MODIFY_ENABLE", "mstconfig -d <PF PCI BDF> s ESWITCH_IPV4_TTL_MODIFY_ENABLE=0`", "openstack baremetal introspection data save <UUID> | jq .numa_topology", "{ \"cpus\": [ { \"cpu\": 1, \"thread_siblings\": [ 1, 17 ], \"numa_node\": 0 }, { \"cpu\": 2, \"thread_siblings\": [ 10, 26 ], \"numa_node\": 1 }, { \"cpu\": 0, \"thread_siblings\": [ 0, 16 ], \"numa_node\": 0 }, { \"cpu\": 5, \"thread_siblings\": [ 13, 29 ], \"numa_node\": 1 }, { \"cpu\": 7, \"thread_siblings\": [ 15, 31 ], \"numa_node\": 1 }, { \"cpu\": 7, \"thread_siblings\": [ 7, 23 ], \"numa_node\": 0 }, { \"cpu\": 1, \"thread_siblings\": [ 9, 25 ], \"numa_node\": 1 }, { \"cpu\": 6, \"thread_siblings\": [ 6, 22 ], \"numa_node\": 0 }, { \"cpu\": 3, \"thread_siblings\": [ 11, 27 ], \"numa_node\": 1 }, { \"cpu\": 5, \"thread_siblings\": [ 5, 21 ], \"numa_node\": 0 }, { \"cpu\": 4, \"thread_siblings\": [ 12, 28 ], \"numa_node\": 1 }, { \"cpu\": 4, \"thread_siblings\": [ 4, 20 ], \"numa_node\": 0 }, { \"cpu\": 0, \"thread_siblings\": [ 8, 24 ], \"numa_node\": 1 }, { \"cpu\": 6, \"thread_siblings\": [ 14, 30 ], \"numa_node\": 1 }, { \"cpu\": 3, \"thread_siblings\": [ 3, 19 ], \"numa_node\": 0 }, { \"cpu\": 2, \"thread_siblings\": [ 2, 18 ], \"numa_node\": 0 } ], \"ram\": [ { \"size_kb\": 66980172, \"numa_node\": 0 }, { \"size_kb\": 67108864, \"numa_node\": 1 } ], \"nics\": [ { \"name\": \"ens3f1\", \"numa_node\": 1 }, { \"name\": \"ens3f0\", \"numa_node\": 1 }, { \"name\": \"ens2f0\", \"numa_node\": 0 }, { \"name\": \"ens2f1\", \"numa_node\": 0 }, { \"name\": \"ens1f1\", \"numa_node\": 0 }, { \"name\": \"ens1f0\", \"numa_node\": 0 }, { \"name\": \"eno4\", \"numa_node\": 0 }, { \"name\": \"eno1\", \"numa_node\": 0 }, { \"name\": \"eno3\", \"numa_node\": 0 }, { \"name\": \"eno2\", \"numa_node\": 0 } ] }", "cat /sys/devices/system/cpu/cpuidle/current_driver acpi_idle" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/network_functions_virtualization_planning_and_configuration_guide/hardware-req-nfv_rhosp-nfv
7.6. Setting up Additional Subsystems
7.6. Setting up Additional Subsystems After you have installed the root Certificate Authority (CA) as described in Section 7.4, "Setting Up a Root Certificate Authority" , you can install additional Certificate System subsystems. Prerequisites All additional subsystems require a root Certificate Authority (CA). If you have not installed a root Certificate System CA, see Section 7.4, "Setting Up a Root Certificate Authority" . Installing the Subsystem To set up an additional subsystem, you have the following options: Configuration file-based installation: Use this method for high-level customization. This installation method uses a configuration file that overrides the default installation parameters. You can install Certificate System using a configuration file in a single step or in two steps. (A sketch of the single-step, configuration file-based invocation follows the command listing below.) For details and examples, see: The pkispawn (8) man page for the single-step installation. Section 7.7, "Two-step Installation" for the two-step installation. Interactive installation: Use the interactive installer if you want to set only the minimum required configuration options. For example: Replace subsystem with one of the following subsystems: KRA , OCSP , TKS , or TPS . The interactive installer does not support installing a subordinate CA. To install a subordinate CA, use the two-step installation. See Section 7.7, "Two-step Installation" .
[ "pkispawn -s subsystem" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/install-additional-subsystems
6.4. Resource Meta Options
6.4. Resource Meta Options In addition to the resource-specific parameters, you can configure additional resource options for any resource. These options are used by the cluster to decide how your resource should behave. Table 6.3, "Resource Meta Options" describes these options. Table 6.3. Resource Meta Options Field Default Description priority 0 If not all resources can be active, the cluster will stop lower priority resources in order to keep higher priority ones active. target-role Started What state should the cluster attempt to keep this resource in? Allowed values: * Stopped - Force the resource to be stopped * Started - Allow the resource to be started (In the case of multistate resources, they will not be promoted to master) * Master - Allow the resource to be started and, if appropriate, promoted is-managed true Is the cluster allowed to start and stop the resource? Allowed values: true , false resource-stickiness 0 Value to indicate how much the resource prefers to stay where it is. requires Calculated Indicates under what conditions the resource can be started. Defaults to fencing except under the conditions noted below. Possible values: * nothing - The cluster can always start the resource. * quorum - The cluster can only start this resource if a majority of the configured nodes are active. This is the default value if stonith-enabled is false or the resource's standard is stonith . * fencing - The cluster can only start this resource if a majority of the configured nodes are active and any failed or unknown nodes have been powered off. * unfencing - The cluster can only start this resource if a majority of the configured nodes are active and any failed or unknown nodes have been powered off and only on nodes that have been unfenced . This is the default value if the provides=unfencing stonith meta option has been set for a fencing device. migration-threshold INFINITY How many failures may occur for this resource on a node before this node is marked ineligible to host this resource. A value of 0 indicates that this feature is disabled (the node will never be marked ineligible); by contrast, the cluster treats INFINITY (the default) as a very large but finite number. This option has an effect only if the failed operation has on-fail=restart (the default), and additionally for failed start operations if the cluster property start-failure-is-fatal is false . For information on configuring the migration-threshold option, see Section 8.2, "Moving Resources Due to Failure" . For information on the start-failure-is-fatal option, see Table 12.1, "Cluster Properties" . failure-timeout 0 (disabled) Used in conjunction with the migration-threshold option, indicates how many seconds to wait before acting as if the failure had not occurred, and potentially allowing the resource back to the node on which it failed. As with any time-based actions, this is not guaranteed to be checked more frequently than the value of the cluster-recheck-interval cluster parameter. For information on configuring the failure-timeout option, see Section 8.2, "Moving Resources Due to Failure" . multiple-active stop_start What should the cluster do if it ever finds the resource active on more than one node? Allowed values: * block - mark the resource as unmanaged * stop_only - stop all active instances and leave them that way * stop_start - stop all active instances and start the resource in one location only To change the default value of a resource option, use the following command.
For example, the following command resets the default value of resource-stickiness to 100. Omitting the options parameter from the pcs resource defaults command displays a list of the currently configured default values for resource options. The following example shows the output of this command after you have reset the default value of resource-stickiness to 100. Whether you have reset the default value of a resource meta option or not, you can set a resource option for a particular resource to a value other than the default when you create the resource. The following shows the format of the pcs resource create command you use when specifying a value for a resource meta option. For example, the following command creates a resource with a resource-stickiness value of 50. You can also set the value of a resource meta option for an existing resource, group, cloned resource, or master resource with the following command. In the following example, there is an existing resource named dummy_resource . This command sets the failure-timeout meta option to 20 seconds, so that the resource can attempt to restart on the same node in 20 seconds. After executing this command, you can display the values for the resource to verify that failure-timeout=20s is set. A combined sketch that uses migration-threshold and failure-timeout together follows the command listing below. For information on resource clone meta options, see Section 9.1, "Resource Clones" . For information on resource master meta options, see Section 9.2, "Multistate Resources: Resources That Have Multiple Modes" .
[ "pcs resource defaults options", "pcs resource defaults resource-stickiness=100", "pcs resource defaults resource-stickiness:100", "pcs resource create resource_id standard:provider:type | type [ resource options ] [meta meta_options ...]", "pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.120 cidr_netmask=24 meta resource-stickiness=50", "pcs resource meta resource_id | group_id | clone_id | master_id meta_options", "pcs resource meta dummy_resource failure-timeout=20s", "pcs resource show dummy_resource Resource: dummy_resource (class=ocf provider=heartbeat type=Dummy) Meta Attrs: failure-timeout=20s Operations: start interval=0s timeout=20 (dummy_resource-start-timeout-20) stop interval=0s timeout=20 (dummy_resource-stop-timeout-20) monitor interval=10 timeout=20 (dummy_resource-monitor-interval-10)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-resourceopts-haar
Postinstallation configuration
Postinstallation configuration OpenShift Container Platform 4.17 Day 2 operations for OpenShift Container Platform Red Hat OpenShift Documentation Team
[ "oc get dnses.config.openshift.io/cluster -o yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: \"2019-10-25T18:27:09Z\" generation: 2 name: cluster resourceVersion: \"37966\" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>: owned publicZone: id: Z2XXXXXXXXXXA4 status: {}", "oc patch dnses.config.openshift.io/cluster --type=merge --patch='{\"spec\": {\"publicZone\": null}}'", "dns.config.openshift.io/cluster patched", "oc get dnses.config.openshift.io/cluster -o yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: \"2019-10-25T18:27:09Z\" generation: 2 name: cluster resourceVersion: \"37966\" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>-wfpg4: owned status: {}", "oc replace --force --wait --filename - <<EOF apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: default spec: endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal EOF", "ingresscontroller.operator.openshift.io \"default\" deleted ingresscontroller.operator.openshift.io/default replaced", "providerSpec: value: loadBalancers: - name: lk4pj-ext 1 type: network 2 - name: lk4pj-int type: network", "oc get machine -n openshift-machine-api", "NAME STATE TYPE REGION ZONE AGE lk4pj-master-0 running m4.xlarge us-east-1 us-east-1a 17m lk4pj-master-1 running m4.xlarge us-east-1 us-east-1b 17m lk4pj-master-2 running m4.xlarge us-east-1 us-east-1a 17m lk4pj-worker-us-east-1a-5fzfj running m4.xlarge us-east-1 us-east-1a 15m lk4pj-worker-us-east-1a-vbghs running m4.xlarge us-east-1 us-east-1a 15m lk4pj-worker-us-east-1b-zgpzg running m4.xlarge us-east-1 us-east-1b 15m", "oc edit machines -n openshift-machine-api <control_plane_name> 1", "providerSpec: value: loadBalancers: - name: lk4pj-ext 1 type: network 2 - name: lk4pj-int type: network", "oc edit configs.imageregistry/cluster", "spec: # storage: azure: # networkAccess: type: Internal", "oc get configs.imageregistry/cluster -o=jsonpath=\"{.spec.storage.azure.privateEndpointName}\" -w", "oc patch configs.imageregistry cluster --type=merge -p '{\"spec\":{\"disableRedirect\": true}}'", "oc get imagestream -n openshift", "NAME IMAGE REPOSITORY TAGS UPDATED cli image-registry.openshift-image-registry.svc:5000/openshift/cli latest 8 hours ago", "oc debug node/<node_name>", "chroot /host", "podman login --tls-verify=false -u unused -p USD(oc whoami -t) image-registry.openshift-image-registry.svc:5000", "Login Succeeded!", "podman pull --tls-verify=false image-registry.openshift-image-registry.svc:5000/openshift/tools", "Trying to pull image-registry.openshift-image-registry.svc:5000/openshift/tools/openshift/tools Getting image source signatures Copying blob 6b245f040973 done Copying config 22667f5368 done Writing manifest to image destination Storing signatures 22667f53682a2920948d19c7133ab1c9c3f745805c14125859d20cede07f11f9", "oc edit configs.imageregistry/cluster", "spec: # storage: azure: # networkAccess: type: Internal internal: subnetName: <subnet_name> vnetName: <vnet_name> networkResourceGroupName: <network_resource_group_name>", "oc get configs.imageregistry/cluster 
-o=jsonpath=\"{.spec.storage.azure.privateEndpointName}\" -w", "oc get imagestream -n openshift", "NAME IMAGE REPOSITORY TAGS UPDATED cli image-registry.openshift-image-registry.svc:5000/openshift/cli latest 8 hours ago", "oc debug node/<node_name>", "chroot /host", "podman login --tls-verify=false -u unused -p USD(oc whoami -t) image-registry.openshift-image-registry.svc:5000", "Login Succeeded!", "podman pull --tls-verify=false image-registry.openshift-image-registry.svc:5000/openshift/tools", "Trying to pull image-registry.openshift-image-registry.svc:5000/openshift/tools/openshift/tools Getting image source signatures Copying blob 6b245f040973 done Copying config 22667f5368 done Writing manifest to image destination Storing signatures 22667f53682a2920948d19c7133ab1c9c3f745805c14125859d20cede07f11f9", "oc patch configs.imageregistry cluster --type=merge -p '{\"spec\":{\"disableRedirect\": true}}'", "oc get imagestream -n openshift", "NAME IMAGE REPOSITORY TAGS UPDATED cli default-route-openshift-image-registry.<cluster_dns>/cli latest 8 hours ago", "podman login --tls-verify=false -u unused -p USD(oc whoami -t) default-route-openshift-image-registry.<cluster_dns>", "Login Succeeded!", "podman pull --tls-verify=false default-route-openshift-image-registry.<cluster_dns> /openshift/tools", "Trying to pull default-route-openshift-image-registry.<cluster_dns>/openshift/tools Getting image source signatures Copying blob 6b245f040973 done Copying config 22667f5368 done Writing manifest to image destination Storing signatures 22667f53682a2920948d19c7133ab1c9c3f745805c14125859d20cede07f11f9", "oc adm release info -o jsonpath=\"{ .metadata.metadata}\"", "{ \"release.openshift.io/architecture\": \"multi\", \"url\": \"https://access.redhat.com/errata/<errata_version>\" }", "{ \"url\": \"https://access.redhat.com/errata/<errata_version>\" }", "az login", "az storage account create -n USD{STORAGE_ACCOUNT_NAME} -g USD{RESOURCE_GROUP} -l westus --sku Standard_LRS 1", "az storage container create -n USD{CONTAINER_NAME} --account-name USD{STORAGE_ACCOUNT_NAME}", "RHCOS_VHD_ORIGIN_URL=USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.aarch64.\"rhel-coreos-extensions\".\"azure-disk\".url')", "BLOB_NAME=rhcos-USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.aarch64.\"rhel-coreos-extensions\".\"azure-disk\".release')-azure.aarch64.vhd", "end=`date -u -d \"30 minutes\" '+%Y-%m-%dT%H:%MZ'`", "sas=`az storage container generate-sas -n USD{CONTAINER_NAME} --account-name USD{STORAGE_ACCOUNT_NAME} --https-only --permissions dlrw --expiry USDend -o tsv`", "az storage blob copy start --account-name USD{STORAGE_ACCOUNT_NAME} --sas-token \"USDsas\" --source-uri \"USD{RHCOS_VHD_ORIGIN_URL}\" --destination-blob \"USD{BLOB_NAME}\" --destination-container USD{CONTAINER_NAME}", "az storage blob show -c USD{CONTAINER_NAME} -n USD{BLOB_NAME} --account-name USD{STORAGE_ACCOUNT_NAME} | jq .properties.copy", "{ \"completionTime\": null, \"destinationSnapshot\": null, \"id\": \"1fd97630-03ca-489a-8c4e-cfe839c9627d\", \"incrementalCopy\": null, \"progress\": \"17179869696/17179869696\", \"source\": \"https://rhcos.blob.core.windows.net/imagebucket/rhcos-411.86.202207130959-0-azure.aarch64.vhd\", \"status\": \"success\", 1 \"statusDescription\": null }", "az sig create --resource-group USD{RESOURCE_GROUP} --gallery-name USD{GALLERY_NAME}", "az sig image-definition create 
--resource-group ${RESOURCE_GROUP} --gallery-name ${GALLERY_NAME} --gallery-image-definition rhcos-arm64 --publisher RedHat --offer arm --sku arm64 --os-type linux --architecture Arm64 --hyper-v-generation V2", "RHCOS_VHD_URL=$(az storage blob url --account-name ${STORAGE_ACCOUNT_NAME} -c ${CONTAINER_NAME} -n \"${BLOB_NAME}\" -o tsv)", "az sig image-version create --resource-group ${RESOURCE_GROUP} --gallery-name ${GALLERY_NAME} --gallery-image-definition rhcos-arm64 --gallery-image-version 1.0.0 --os-vhd-storage-account ${STORAGE_ACCOUNT_NAME} --os-vhd-uri ${RHCOS_VHD_URL}", "az sig image-version show -r $GALLERY_NAME -g $RESOURCE_GROUP -i rhcos-arm64 -e 1.0.0", "/resourceGroups/${RESOURCE_GROUP}/providers/Microsoft.Compute/galleries/${GALLERY_NAME}/images/rhcos-arm64/versions/1.0.0", "az login", "az storage account create -n ${STORAGE_ACCOUNT_NAME} -g ${RESOURCE_GROUP} -l westus --sku Standard_LRS 1", "az storage container create -n ${CONTAINER_NAME} --account-name ${STORAGE_ACCOUNT_NAME}", "RHCOS_VHD_ORIGIN_URL=$(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.x86_64.\"rhel-coreos-extensions\".\"azure-disk\".url')", "BLOB_NAME=rhcos-$(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.x86_64.\"rhel-coreos-extensions\".\"azure-disk\".release')-azure.x86_64.vhd", "end=`date -u -d \"30 minutes\" '+%Y-%m-%dT%H:%MZ'`", "sas=`az storage container generate-sas -n ${CONTAINER_NAME} --account-name ${STORAGE_ACCOUNT_NAME} --https-only --permissions dlrw --expiry $end -o tsv`", "az storage blob copy start --account-name ${STORAGE_ACCOUNT_NAME} --sas-token \"$sas\" --source-uri \"${RHCOS_VHD_ORIGIN_URL}\" --destination-blob \"${BLOB_NAME}\" --destination-container ${CONTAINER_NAME}", "az storage blob show -c ${CONTAINER_NAME} -n ${BLOB_NAME} --account-name ${STORAGE_ACCOUNT_NAME} | jq .properties.copy", "{ \"completionTime\": null, \"destinationSnapshot\": null, \"id\": \"1fd97630-03ca-489a-8c4e-cfe839c9627d\", \"incrementalCopy\": null, \"progress\": \"17179869696/17179869696\", \"source\": \"https://rhcos.blob.core.windows.net/imagebucket/rhcos-411.86.202207130959-0-azure.aarch64.vhd\", \"status\": \"success\", 1 \"statusDescription\": null }", "az sig create --resource-group ${RESOURCE_GROUP} --gallery-name ${GALLERY_NAME}", "az sig image-definition create --resource-group ${RESOURCE_GROUP} --gallery-name ${GALLERY_NAME} --gallery-image-definition rhcos-x86_64 --publisher RedHat --offer x86_64 --sku x86_64 --os-type linux --architecture x64 --hyper-v-generation V2", "RHCOS_VHD_URL=$(az storage blob url --account-name ${STORAGE_ACCOUNT_NAME} -c ${CONTAINER_NAME} -n \"${BLOB_NAME}\" -o tsv)", "az sig image-version create --resource-group ${RESOURCE_GROUP} --gallery-name ${GALLERY_NAME} --gallery-image-definition rhcos-x86_64 --gallery-image-version 1.0.0 --os-vhd-storage-account ${STORAGE_ACCOUNT_NAME} --os-vhd-uri ${RHCOS_VHD_URL}", "az sig image-version show -r $GALLERY_NAME -g $RESOURCE_GROUP -i rhcos-x86_64 -e 1.0.0", "/resourceGroups/${RESOURCE_GROUP}/providers/Microsoft.Compute/galleries/${GALLERY_NAME}/images/rhcos-x86_64/versions/1.0.0", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: worker
machine.openshift.io/cluster-api-machine-type: worker name: <infrastructure_id>-machine-set-0 namespace: openshift-machine-api spec: replicas: 2 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-machine-set-0 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <infrastructure_id>-machine-set-0 spec: lifecycleHooks: {} metadata: {} providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: offer: \"\" publisher: \"\" resourceID: /resourceGroups/${RESOURCE_GROUP}/providers/Microsoft.Compute/galleries/${GALLERY_NAME}/images/rhcos-arm64/versions/1.0.0 1 sku: \"\" version: \"\" kind: AzureMachineProviderSpec location: <region> managedIdentity: <infrastructure_id>-identity networkResourceGroup: <infrastructure_id>-rg osDisk: diskSettings: {} diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: <infrastructure_id> resourceGroup: <infrastructure_id>-rg subnet: <infrastructure_id>-worker-subnet userDataSecret: name: worker-user-data vmSize: Standard_D4ps_v5 2 vnet: <infrastructure_id>-vnet zone: \"<zone>\"", "oc create -f <file_name> 1", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE <infrastructure_id>-machine-set-0 2 2 2 2 10m", "oc get nodes", "oc adm release info -o jsonpath=\"{ .metadata.metadata}\"", "{ \"release.openshift.io/architecture\": \"multi\", \"url\": \"https://access.redhat.com/errata/<errata_version>\" }", "{ \"url\": \"https://access.redhat.com/errata/<errata_version>\" }", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-aws-machine-set-0 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> 5 machine.openshift.io/cluster-api-machine-type: <role> 6 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 7 spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: ami: id: ami-02a574449d4f4d280 8 apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile 9 instanceType: m6g.xlarge 10 kind: AWSMachineProviderConfig placement: availabilityZone: us-east-1a 11 region: <region> 12 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-node 13 subnet: filters: - name: tag:Name values: - <infrastructure_id>-subnet-private-<zone> tags: - name: kubernetes.io/cluster/<infrastructure_id> 14 value: owned - name: <custom_tag_name> value: <custom_tag_value> userDataSecret: name: worker-user-data", "oc get -o jsonpath=\"{.status.infrastructureName}{'\\n'}\" infrastructure cluster", "oc get configmap/coreos-bootimages -n openshift-machine-config-operator -o jsonpath='{.data.stream}' | jq -r '.architectures.<arch>.images.aws.regions.\"<region>\".image'", "oc create -f <file_name> 1", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE <infrastructure_id>-aws-machine-set-0 2 2 2 2 10m", "oc get nodes", "oc adm release info -o jsonpath=\"{ .metadata.metadata}\"", "{ \"release.openshift.io/architecture\": \"multi\", \"url\": \"https://access.redhat.com/errata/<errata_version>\" }", "{ \"url\": \"https://access.redhat.com/errata/<errata_version>\" }", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 3 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 4 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 5 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network subnetwork: <infrastructure_id>-worker-subnet projectID: <project_name> 6 region: us-central1 7 serviceAccounts: - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: worker-user-data zone: us-central1-a", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get configmap/coreos-bootimages -n openshift-machine-config-operator -o jsonpath='{.data.stream}' | jq -r '.architectures.aarch64.images.gcp'", "\"gcp\": { \"release\": \"415.92.202309142014-0\", \"project\": \"rhcos-cloud\", \"name\": \"rhcos-415-92-202309142014-0-gcp-aarch64\" }", "projects/<project>/global/images/<image_name>", "oc create -f <file_name> 1", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE <infrastructure_id>-gcp-machine-set-0 2 2 2 2 10m", "oc get nodes", "oc adm release info -o jsonpath=\"{ .metadata.metadata}\"", "{ \"release.openshift.io/architecture\": \"multi\", \"url\": \"https://access.redhat.com/errata/<errata_version>\" }", "{ \"url\": \"https://access.redhat.com/errata/<errata_version>\" }", "oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign", "curl -k http://<HTTP_server>/worker.ign", "RHCOS_VHD_ORIGIN_URL=$(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.<architecture>.artifacts.metal.formats.iso.disk.location')", "sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2", "sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b", "DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 2", "kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot", "menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3", "oc adm release info -o jsonpath=\"{ .metadata.metadata}\"", "{ \"release.openshift.io/architecture\": \"multi\", \"url\": \"https://access.redhat.com/errata/<errata_version>\" }", "{ \"url\": \"https://access.redhat.com/errata/<errata_version>\" }", "oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign", "curl -k http://<http_server>/worker.ign", "curl -LO $(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.kernel.location')", "curl -LO $(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.initramfs.location')", "curl -LO $(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.rootfs.location')", "cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/dasda coreos.inst.ignition_url=http://<http_server>/worker.ign coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 rd.dasd=0.0.3490 zfcp.allow_lun_scan=0", "cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/sda coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.ignition_url=http://<http_server>/worker.ign ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 zfcp.allow_lun_scan=0 rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000", "ipl c", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3", "oc adm release info -o jsonpath=\"{ .metadata.metadata}\"", "{ \"release.openshift.io/architecture\": \"multi\", \"url\": \"https://access.redhat.com/errata/<errata_version>\" }", "{ \"url\": \"https://access.redhat.com/errata/<errata_version>\" }", "oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign", "curl -k http://<http_server>/worker.ign", "curl -LO $(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.kernel.location')", "curl -LO $(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.initramfs.location')", "curl -LO $(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.rootfs.location')", "cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/dasda coreos.inst.ignition_url=http://<http_server>/worker.ign coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 rd.dasd=0.0.3490 zfcp.allow_lun_scan=0", "cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/sda coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.ignition_url=http://<http_server>/worker.ign ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 zfcp.allow_lun_scan=0 rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3", "oc adm release info -o jsonpath=\"{ .metadata.metadata}\"", "{ \"release.openshift.io/architecture\": \"multi\", \"url\": \"https://access.redhat.com/errata/<errata_version>\" }", "{ \"url\": \"https://access.redhat.com/errata/<errata_version>\" }", "oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign", "curl -k http://<HTTP_server>/worker.ign", "curl -LO $(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.kernel.location')", "curl -LO $(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.initramfs.location')", "curl -LO $(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.rootfs.location')", "virt-install --connect qemu:///system --name <vm_name> --autostart --os-variant rhel9.4 \\ 1 --cpu host --vcpus <vcpus> --memory <memory_mb> --disk <vm_name>.qcow2,size=<image_size> --network network=<virt_network_parm> --location <media_location>,kernel=<rhcos_kernel>,initrd=<rhcos_initrd> \\ 2 --extra-args \"rd.neednet=1\" --extra-args \"coreos.inst.install_dev=/dev/vda\" --extra-args \"coreos.inst.ignition_url=http://<http_server>/worker.ign \" \\ 3 --extra-args \"coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img\" \\ 4 --extra-args \"ip=<ip>::<gateway>:<netmask>:<hostname>::none\" \\ 5 --extra-args \"nameserver=<dns>\" --extra-args \"console=ttysclp0\" --noautoconsole --wait", "osinfo-query os -f short-id", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3", "oc adm release info -o jsonpath=\"{ .metadata.metadata}\"", "{ \"release.openshift.io/architecture\": \"multi\", \"url\": \"https://access.redhat.com/errata/<errata_version>\" }", "{ \"url\": \"https://access.redhat.com/errata/<errata_version>\" }", "oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign", "curl -k http://<HTTP_server>/worker.ign", "RHCOS_VHD_ORIGIN_URL=$(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.<architecture>.artifacts.metal.formats.iso.disk.location')", "sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2", "sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b", "DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 2", "kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot", "menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m
system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes -o wide", "NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME worker-0-ppc64le Ready worker 42d v1.30.3 192.168.200.21 <none> Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.ppc64le cri-o://1.30.3-3.rhaos4.15.gitb36169e.el9 worker-1-ppc64le Ready worker 42d v1.30.3 192.168.200.20 <none> Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.ppc64le cri-o://1.30.3-3.rhaos4.15.gitb36169e.el9 master-0-x86 Ready control-plane,master 75d v1.30.3 10.248.0.38 10.248.0.38 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.30.3-3.rhaos4.15.gitb36169e.el9 master-1-x86 Ready control-plane,master 75d v1.30.3 10.248.0.39 10.248.0.39 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.30.3-3.rhaos4.15.gitb36169e.el9 master-2-x86 Ready control-plane,master 75d v1.30.3 10.248.0.40 10.248.0.40 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.30.3-3.rhaos4.15.gitb36169e.el9 worker-0-x86 Ready worker 75d v1.30.3 10.248.0.43 10.248.0.43 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.30.3-3.rhaos4.15.gitb36169e.el9 worker-1-x86 Ready worker 75d v1.30.3 10.248.0.44 10.248.0.44 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.30.3-3.rhaos4.15.gitb36169e.el9", "apiVersion: apps/v1 kind: Deployment metadata: # spec: # template: # spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/arch operator: In values: 1 - amd64 - arm64", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: # spec: # template: # spec: # taints: - effect: NoSchedule key: multiarch.openshift.io/arch value: arm64", "oc adm taint nodes <node-name> multiarch.openshift.io/arch=arm64:NoSchedule", "oc annotate namespace my-namespace 'scheduler.alpha.kubernetes.io/defaultTolerations'='[{\"operator\": \"Exists\", \"effect\": \"NoSchedule\", \"key\": \"multiarch.openshift.io/arch\"}]'", "apiVersion: apps/v1 kind: Deployment metadata: # spec: # template: # spec: tolerations: - key: \"multiarch.openshift.io/arch\" value: \"arm64\" operator: \"Equal\" effect: \"NoSchedule\"", "apiVersion: apps/v1 kind: Deployment metadata: # spec: # template: # spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/arch operator: In values: - amd64 - arm64 tolerations: - key: \"multiarch.openshift.io/arch\" value: \"arm64\" operator: \"Equal\" effect: 
\"NoSchedule\"", "oc label node <node_name> <label>", "oc label node worker-arm64-01 node-role.kubernetes.io/worker-64k-pages=", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-64k-pages spec: machineConfigSelector: matchExpressions: - key: machineconfiguration.openshift.io/role operator: In values: - worker - worker-64k-pages nodeSelector: matchLabels: node-role.kubernetes.io/worker-64k-pages: \"\" kubernetes.io/arch: arm64", "oc create -f <filename>.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"worker-64k-pages\" 1 name: 99-worker-64kpages spec: kernelType: 64k-pages 2", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-9d55ac9a91127c36314e1efe7d77fbf8 True False False 3 3 3 0 361d worker rendered-worker-e7b61751c4a5b7ff995d64b967c421ff True False False 7 7 7 0 361d worker-64k-pages rendered-worker-64k-pages-e7b61751c4a5b7ff995d64b967c421ff True False False 2 2 2 0 35m", "oc patch is/cli-artifacts -n openshift -p '{\"spec\":{\"tags\":[{\"name\":\"latest\",\"importPolicy\":{\"importMode\":\"PreserveOriginal\"}}]}}'", "oc get istag cli-artifacts:latest -n openshift -oyaml", "dockerImageManifests: - architecture: amd64 digest: sha256:16d4c96c52923a9968fbfa69425ec703aff711f1db822e4e9788bf5d2bee5d77 manifestSize: 1252 mediaType: application/vnd.docker.distribution.manifest.v2+json os: linux - architecture: arm64 digest: sha256:6ec8ad0d897bcdf727531f7d0b716931728999492709d19d8b09f0d90d57f626 manifestSize: 1252 mediaType: application/vnd.docker.distribution.manifest.v2+json os: linux - architecture: ppc64le digest: sha256:65949e3a80349cdc42acd8c5b34cde6ebc3241eae8daaeea458498fedb359a6a manifestSize: 1252 mediaType: application/vnd.docker.distribution.manifest.v2+json os: linux - architecture: s390x digest: sha256:75f4fa21224b5d5d511bea8f92dfa8e1c00231e5c81ab95e83c3013d245d1719 manifestSize: 1252 mediaType: application/vnd.docker.distribution.manifest.v2+json os: linux", "oc create ns openshift-multiarch-tuning-operator", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-multiarch-tuning-operator namespace: openshift-multiarch-tuning-operator spec: {}", "oc create -f <file_name> 1", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-multiarch-tuning-operator namespace: openshift-multiarch-tuning-operator spec: channel: stable name: multiarch-tuning-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Automatic startingCSV: multiarch-tuning-operator.<version>", "oc create -f <file_name> 1", "oc get csv -n openshift-multiarch-tuning-operator", "NAME DISPLAY VERSION REPLACES PHASE multiarch-tuning-operator.<version> Multiarch Tuning Operator <version> multiarch-tuning-operator.1.0.0 Succeeded", "oc get operatorgroup -n openshift-multiarch-tuning-operator", "NAME AGE openshift-multiarch-tuning-operator-q8zbb 133m", "oc get subscription -n openshift-multiarch-tuning-operator", "NAME PACKAGE SOURCE CHANNEL multiarch-tuning-operator multiarch-tuning-operator redhat-operators stable", "apiVersion: multiarch.openshift.io/v1beta1 kind: ClusterPodPlacementConfig metadata: name: cluster 1 spec: logVerbosityLevel: Normal 2 namespaceSelector: 3 matchExpressions: - key: multiarch.openshift.io/exclude-pod-placement operator: DoesNotExist plugins: 4 
nodeAffinityScoring: 5 enabled: true 6 platforms: 7 - architecture: amd64 8 weight: 100 9 - architecture: arm64 weight: 50", "namespaceSelector: matchExpressions: - key: multiarch.openshift.io/include-pod-placement operator: Exists", "apiVersion: multiarch.openshift.io/v1beta1 kind: ClusterPodPlacementConfig metadata: name: cluster spec: logVerbosityLevel: Normal namespaceSelector: matchExpressions: - key: multiarch.openshift.io/exclude-pod-placement operator: DoesNotExist plugins: nodeAffinityScoring: enabled: true platforms: - architecture: amd64 weight: 100 - architecture: arm64 weight: 50", "oc create -f <file_name> 1", "oc get clusterpodplacementconfig", "NAME AGE cluster 29s", "oc delete clusterpodplacementconfig cluster", "oc get clusterpodplacementconfig", "No resources found", "oc get subscription.operators.coreos.com -n <namespace> 1", "NAME PACKAGE SOURCE CHANNEL openshift-multiarch-tuning-operator multiarch-tuning-operator redhat-operators stable", "oc get subscription.operators.coreos.com <subscription_name> -n <namespace> -o yaml | grep currentCSV 1", "currentCSV: multiarch-tuning-operator.<version>", "oc delete subscription.operators.coreos.com <subscription_name> -n <namespace> 1", "subscription.operators.coreos.com \"openshift-multiarch-tuning-operator\" deleted", "oc delete clusterserviceversion <currentCSV_value> -n <namespace> 1", "clusterserviceversion.operators.coreos.com \"multiarch-tuning-operator.<version>\" deleted", "oc get csv -n <namespace> 1", "No resources found in openshift-multiarch-tuning-operator namespace.", "oc get machinesets.machine.openshift.io -n openshift-machine-api", "oc get machines.machine.openshift.io -n openshift-machine-api", "oc annotate machines.machine.openshift.io/<machine_name> -n openshift-machine-api machine.openshift.io/delete-machine=\"true\"", "oc scale --replicas=2 machinesets.machine.openshift.io <machineset> -n openshift-machine-api", "oc edit machinesets.machine.openshift.io <machineset> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2", "oc get machines.machine.openshift.io", "spec: deletePolicy: <delete_policy> replicas: <desired_replica_count>", "oc edit scheduler cluster", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: type=user-node,region=east 1 mastersSchedulable: false", "oc patch MachineSet <name> --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\"=\"<value>\",\"<key>\"=\"<value>\"}}]' -n openshift-machine-api 1", "oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: template: spec: metadata: labels: region: \"east\" type: \"user-node\"", "oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: metadata: spec: metadata: labels: region: east type: user-node", "oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api", "oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api", "oc get nodes -l <key>=<value>", "oc get nodes -l 
type=user-node", "NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.30.3", "oc label nodes <name> <key>=<value>", "oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 type=user-node region=east", "kind: Node apiVersion: v1 metadata: name: <node_name> labels: type: \"user-node\" region: \"east\"", "oc get nodes -l <key>=<value>,<key>=<value>", "oc get nodes -l type=user-node,region=east", "NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.30.3", "oc edit nodes.config/cluster", "apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: MediumUpdateAverageReaction 1", "oc edit nodes.config/cluster", "apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: LowUpdateSlowReaction 1", "oc get KubeControllerManager -o yaml | grep -i workerlatency -A 5 -B 5", "- lastTransitionTime: \"2022-07-11T19:47:10Z\" reason: ProfileUpdated status: \"False\" type: WorkerLatencyProfileProgressing - lastTransitionTime: \"2022-07-11T19:47:10Z\" 1 message: all static pod revision(s) have updated latency profile reason: ProfileUpdated status: \"True\" type: WorkerLatencyProfileComplete - lastTransitionTime: \"2022-07-11T19:20:11Z\" reason: AsExpected status: \"False\" type: WorkerLatencyProfileDegraded - lastTransitionTime: \"2022-07-11T19:20:36Z\" status: \"False\"", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f 
<file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc label node <node-name> node-role.kubernetes.io/app=\"\"", "oc label node <node-name> node-role.kubernetes.io/infra=\"\"", "oc get nodes", "oc edit scheduler cluster", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: node-role.kubernetes.io/infra=\"\" 1", "oc label node <node_name> <label>", "oc label node ci-ln-n8mqwr2-f76d1-xscn2-worker-c-6fmtx node-role.kubernetes.io/infra=", "cat infra.mcp.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: \"\" 2", "oc create -f infra.mcp.yaml", "oc get machineconfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED 00-master 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 00-worker 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-1ae2a1e0-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-ssh 3.2.0 31d 99-worker-1ae64748-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-worker-ssh 3.2.0 31d rendered-infra-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 23m rendered-master-072d4b2da7f88162636902b074e9e28e 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-master-3e88ec72aed3886dec061df60d16d1af 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-master-419bee7de96134963a15fdf9dd473b25 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-master-53f5c91c7661708adce18739cc0f40fb 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d rendered-master-a6a357ec18e5bce7f5ac426fc7c5ffcd 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-master-dc7f874ec77fc4b969674204332da037 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-1a75960c52ad18ff5dfa6674eb7e533d 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-2640531be11ba43c61d72e82dc634ce6 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-worker-4f110718fe88e5f349987854a1147755 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-worker-afc758e194d6188677eb837842d3b379 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-worker-daa08cc1e8f5fcdeba24de60cd955cc3 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d", "cat infra.mc.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 51-infra labels: machineconfiguration.openshift.io/role: infra 1 spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/infratest mode: 0644 contents: source: 
data:,infra", "oc create -f infra.mc.yaml", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE infra rendered-infra-60e35c2e99f42d976e084fa94da4d0fc True False False 1 1 1 0 4m20s master rendered-master-9360fdb895d4c131c7c4bebbae099c90 True False False 3 3 3 0 91m worker rendered-worker-60e35c2e99f42d976e084fa94da4d0fc True False False 2 2 2 0 91m", "oc describe nodes <node_name>", "describe node ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Name: ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Roles: worker Taints: node-role.kubernetes.io/infra:NoSchedule", "oc adm taint nodes <node_name> <key>=<value>:<effect>", "oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoSchedule", "kind: Node apiVersion: v1 metadata: name: <node_name> labels: spec: taints: - key: node-role.kubernetes.io/infra effect: NoSchedule value: reserved", "oc adm taint nodes <node_name> <key>=<value>:<effect>", "oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoExecute", "kind: Node apiVersion: v1 metadata: name: <node_name> labels: spec: taints: - key: node-role.kubernetes.io/infra effect: NoExecute value: reserved", "tolerations: - effect: NoSchedule 1 key: node-role.kubernetes.io/infra 2 value: reserved 3 - effect: NoExecute 4 key: node-role.kubernetes.io/infra 5 operator: Exists 6 value: reserved 7", "oc get ingresscontroller default -n openshift-ingress-operator -o yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: 2019-04-18T12:35:39Z finalizers: - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller generation: 1 name: default namespace: openshift-ingress-operator resourceVersion: \"11341\" selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default uid: 79509e05-61d6-11e9-bc55-02ce4781844a spec: {} status: availableReplicas: 2 conditions: - lastTransitionTime: 2019-04-18T12:36:15Z status: \"True\" type: Available domain: apps.<cluster>.example.com endpointPublishingStrategy: type: LoadBalancerService selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default", "oc edit ingresscontroller default -n openshift-ingress-operator", "spec: nodePlacement: nodeSelector: 1 matchLabels: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved", "oc get pod -n openshift-ingress -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none> router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none>", "oc get node <node_name> 1", "NAME STATUS ROLES AGE VERSION ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.30.3", "oc get configs.imageregistry.operator.openshift.io/cluster -o yaml", "apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: creationTimestamp: 2019-02-05T13:52:05Z finalizers: - imageregistry.operator.openshift.io/finalizer generation: 1 name: cluster resourceVersion: \"56174\" selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster uid: 36fd3724-294d-11e9-a524-12ffeee2931b spec: httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623 logging: 2 managementState: Managed proxy: {} replicas: 1 requests: read: {} write: {} storage: s3: bucket: 
image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c region: us-east-1 status:", "oc edit configs.imageregistry.operator.openshift.io/cluster", "spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: namespaces: - openshift-image-registry topologyKey: kubernetes.io/hostname weight: 100 logLevel: Normal managementState: Managed nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved", "oc get pods -o wide -n openshift-image-registry", "oc describe node <node_name>", "oc edit configmap cluster-monitoring-config -n openshift-monitoring", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |+ alertmanagerMain: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusK8s: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusOperator: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute metricsServer: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute kubeStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute telemeterClient: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute openshiftStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute thanosQuerier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute monitoringPlugin: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute", "watch 'oc get pod -n openshift-monitoring -o wide'", "oc delete pod -n openshift-monitoring <pod>", "oc edit nodes.config/cluster", "apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: 
\"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: cgroupMode: \"v1\" 1", "oc get mc", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 97-master-generated-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-generated-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23d4317815a5f854bd3553d689cfe2e9 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 10s 1 rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-dcc7f1b92892d34db74d6832bcc9ccd4 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 10s", "oc describe mc <name>", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 05-worker-kernelarg-selinuxpermissive spec: kernelArguments: systemd.unified_cgroup_hierarchy=0 1 systemd.legacy_systemd_cgroup_controller=1 2 psi=1 3", "oc get nodes", "NAME STATUS ROLES AGE VERSION ci-ln-fm1qnwt-72292-99kt6-master-0 Ready,SchedulingDisabled master 58m v1.30.3 ci-ln-fm1qnwt-72292-99kt6-master-1 Ready master 58m v1.30.3 ci-ln-fm1qnwt-72292-99kt6-master-2 Ready master 58m v1.30.3 ci-ln-fm1qnwt-72292-99kt6-worker-a-h5gt4 Ready,SchedulingDisabled worker 48m v1.30.3 ci-ln-fm1qnwt-72292-99kt6-worker-b-7vtmd Ready worker 48m v1.30.3 ci-ln-fm1qnwt-72292-99kt6-worker-c-rhzkv Ready worker 48m v1.30.3", "oc debug node/<node_name>", "sh-4.4# chroot /host", "stat -c %T -f /sys/fs/cgroup", "cgroup2fs", "apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 spec: featureSet: TechPreviewNoUpgrade 2", "sh-4.2# chroot /host", "sh-4.2# cat /etc/kubernetes/kubelet.conf", "featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false", "oc edit featuregate cluster", "apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 spec: featureSet: TechPreviewNoUpgrade 2", "sh-4.2# chroot /host", "sh-4.2# cat /etc/kubernetes/kubelet.conf", "featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false", "oc edit apiserver", "spec: encryption: type: aesgcm 1", "oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "EncryptionCompleted All resources encrypted: routes.route.openshift.io", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "EncryptionCompleted All resources encrypted: secrets, configmaps", "oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "EncryptionCompleted All resources encrypted: oauthaccesstokens.oauth.openshift.io, oauthauthorizetokens.oauth.openshift.io", 
"oc edit apiserver", "spec: encryption: type: identity 1", "oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "DecryptionCompleted Encryption mode set to identity and everything is decrypted", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "DecryptionCompleted Encryption mode set to identity and everything is decrypted", "oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "DecryptionCompleted Encryption mode set to identity and everything is decrypted", "oc debug --as-root node/<node_name>", "sh-4.4# chroot /host", "export HTTP_PROXY=http://<your_proxy.example.com>:8080", "export HTTPS_PROXY=https://<your_proxy.example.com>:8080", "export NO_PROXY=<example.com>", "sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup", "found latest kube-apiserver: /etc/kubernetes/static-pod-resources/kube-apiserver-pod-6 found latest kube-controller-manager: /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-7 found latest kube-scheduler: /etc/kubernetes/static-pod-resources/kube-scheduler-pod-6 found latest etcd: /etc/kubernetes/static-pod-resources/etcd-pod-3 ede95fe6b88b87ba86a03c15e669fb4aa5bf0991c180d3c6895ce72eaade54a1 etcdctl version: 3.4.14 API version: 3.4 {\"level\":\"info\",\"ts\":1624647639.0188997,\"caller\":\"snapshot/v3_snapshot.go:119\",\"msg\":\"created temporary db file\",\"path\":\"/home/core/assets/backup/snapshot_2021-06-25_190035.db.part\"} {\"level\":\"info\",\"ts\":\"2021-06-25T19:00:39.030Z\",\"caller\":\"clientv3/maintenance.go:200\",\"msg\":\"opened snapshot stream; downloading\"} {\"level\":\"info\",\"ts\":1624647639.0301006,\"caller\":\"snapshot/v3_snapshot.go:127\",\"msg\":\"fetching snapshot\",\"endpoint\":\"https://10.0.0.5:2379\"} {\"level\":\"info\",\"ts\":\"2021-06-25T19:00:40.215Z\",\"caller\":\"clientv3/maintenance.go:208\",\"msg\":\"completed snapshot read; closing\"} {\"level\":\"info\",\"ts\":1624647640.6032252,\"caller\":\"snapshot/v3_snapshot.go:142\",\"msg\":\"fetched snapshot\",\"endpoint\":\"https://10.0.0.5:2379\",\"size\":\"114 MB\",\"took\":1.584090459} {\"level\":\"info\",\"ts\":1624647640.6047094,\"caller\":\"snapshot/v3_snapshot.go:152\",\"msg\":\"saved\",\"path\":\"/home/core/assets/backup/snapshot_2021-06-25_190035.db\"} Snapshot saved at /home/core/assets/backup/snapshot_2021-06-25_190035.db {\"hash\":3866667823,\"revision\":31407,\"totalKey\":12828,\"totalSize\":114446336} snapshot db and kube resources are successfully saved to /home/core/assets/backup", "etcd member has been defragmented: <member_name> , memberID: <member_id>", "failed defrag on member: <member_name> , memberID: <member_id> : <error_message>", "oc -n openshift-etcd get pods -l k8s-app=etcd -o wide", "etcd-ip-10-0-159-225.example.redhat.com 3/3 Running 0 175m 10.0.159.225 ip-10-0-159-225.example.redhat.com <none> <none> etcd-ip-10-0-191-37.example.redhat.com 3/3 Running 0 173m 10.0.191.37 ip-10-0-191-37.example.redhat.com <none> <none> etcd-ip-10-0-199-170.example.redhat.com 3/3 Running 0 176m 10.0.199.170 ip-10-0-199-170.example.redhat.com <none> <none>", "oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com etcdctl endpoint status --cluster -w table", "Defaulting container name to etcdctl. 
Use 'oc describe pod/etcd-ip-10-0-159-225.example.redhat.com -n openshift-etcd' to see all of the containers in this pod. +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.5.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.5.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.5.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+", "oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com", "sh-4.4# unset ETCDCTL_ENDPOINTS", "sh-4.4# etcdctl --command-timeout=30s --endpoints=https://localhost:2379 defrag", "Finished defragmenting etcd member[https://localhost:2379]", "sh-4.4# etcdctl endpoint status -w table --cluster", "+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.5.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.5.9 | 41 MB | false | false | 7 | 91624 | 91624 | | 1 | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.5.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+", "sh-4.4# etcdctl alarm list", "memberID:12345678912345678912 alarm:NOSPACE", "sh-4.4# etcdctl alarm disarm", "sudo mv -v /etc/kubernetes/manifests/etcd-pod.yaml /tmp", "sudo crictl ps | grep etcd | egrep -v \"operator|etcd-guard\"", "sudo mv -v /etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp", "sudo crictl ps | grep kube-apiserver | egrep -v \"operator|guard\"", "sudo mv -v /etc/kubernetes/manifests/kube-controller-manager-pod.yaml /tmp", "sudo crictl ps | grep kube-controller-manager | egrep -v \"operator|guard\"", "sudo mv -v /etc/kubernetes/manifests/kube-scheduler-pod.yaml /tmp", "sudo crictl ps | grep kube-scheduler | egrep -v \"operator|guard\"", "sudo mv -v /var/lib/etcd/ /tmp", "sudo mv -v /etc/kubernetes/manifests/keepalived.yaml /tmp", "sudo crictl ps --name keepalived", "ip -o address | egrep '<api_vip>|<ingress_vip>'", "sudo ip address del <reported_vip> dev <reported_vip_device>", "ip -o address | grep <api_vip>", "sudo -E /usr/local/bin/cluster-restore.sh /home/core/assets/backup", "...stopping kube-scheduler-pod.yaml ...stopping kube-controller-manager-pod.yaml ...stopping etcd-pod.yaml ...stopping kube-apiserver-pod.yaml Waiting for container etcd to stop .complete Waiting for container etcdctl to stop .............................complete Waiting for container etcd-metrics to stop complete Waiting for container 
kube-controller-manager to stop complete Waiting for container kube-apiserver to stop ..........................................................................................complete Waiting for container kube-scheduler to stop complete Moving etcd data-dir /var/lib/etcd/member to /var/lib/etcd-backup starting restore-etcd static pod starting kube-apiserver-pod.yaml static-pod-resources/kube-apiserver-pod-7/kube-apiserver-pod.yaml starting kube-controller-manager-pod.yaml static-pod-resources/kube-controller-manager-pod-7/kube-controller-manager-pod.yaml starting kube-scheduler-pod.yaml static-pod-resources/kube-scheduler-pod-8/kube-scheduler-pod.yaml", "oc get nodes -w", "NAME STATUS ROLES AGE VERSION host-172-25-75-28 Ready master 3d20h v1.30.3 host-172-25-75-38 Ready infra,worker 3d20h v1.30.3 host-172-25-75-40 Ready master 3d20h v1.30.3 host-172-25-75-65 Ready master 3d20h v1.30.3 host-172-25-75-74 Ready infra,worker 3d20h v1.30.3 host-172-25-75-79 Ready worker 3d20h v1.30.3 host-172-25-75-86 Ready worker 3d20h v1.30.3 host-172-25-75-98 Ready infra,worker 3d20h v1.30.3", "ssh -i <ssh-key-path> core@<master-hostname>", "sh-4.4# pwd /var/lib/kubelet/pki sh-4.4# ls kubelet-client-2022-04-28-11-24-09.pem kubelet-server-2022-04-28-11-24-15.pem kubelet-client-current.pem kubelet-server-current.pem", "sudo systemctl restart kubelet.service", "oc get csr", "NAME AGE SIGNERNAME REQUESTOR CONDITION csr-2s94x 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending 1 csr-4bd6t 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending 2 csr-4hl85 13m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 3 csr-zhhhp 3m8s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 4", "oc describe csr <csr_name> 1", "oc adm certificate approve <csr_name>", "oc adm certificate approve <csr_name>", "sudo crictl ps | grep etcd | egrep -v \"operator|etcd-guard\"", "3ad41b7908e32 36f86e2eeaaffe662df0d21041eb22b8198e0e58abeeae8c743c3e6e977e8009 About a minute ago Running etcd 0 7c05f8af362f0", "oc -n openshift-etcd get pods -l k8s-app=etcd", "NAME READY STATUS RESTARTS AGE etcd-ip-10-0-143-125.ec2.internal 1/1 Running 1 2m47s", "oc -n openshift-ovn-kubernetes delete pod -l app=ovnkube-control-plane", "oc -n openshift-ovn-kubernetes get pod -l app=ovnkube-control-plane", "sudo rm -f /var/lib/ovn-ic/etc/*.db", "sudo systemctl restart ovs-vswitchd ovsdb-server", "oc -n openshift-ovn-kubernetes delete pod -l app=ovnkube-node --field-selector=spec.nodeName==<node>", "oc get po -n openshift-ovn-kubernetes", "oc delete node <node>", "ssh -i <ssh-key-path> core@<node>", "sudo mv /var/lib/kubelet/pki/* /tmp", "sudo systemctl restart kubelet.service", "oc get csr", "NAME AGE SIGNERNAME REQUESTOR CONDITION csr-<uuid> 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending", "oc adm certificate approve csr-<uuid>", "oc get nodes", "oc -n openshift-ovn-kubernetes get pod -l app=ovnkube-node --field-selector=spec.nodeName==<node>", "oc get machines -n openshift-machine-api -o wide", "NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped 1 clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 
Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running", "oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 1", "oc get machines -n openshift-machine-api -o wide", "NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-173-171.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running 1 clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running", "oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": {\"useUnsupportedUnsafeNonHANonProductionUnstableEtcd\": true}}}'", "export KUBECONFIG=/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost-recovery.kubeconfig", "oc patch etcd cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"$( date --rfc-3339=ns )\"'\"}}' --type=merge 1", "oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": null}}'", "oc get etcd/cluster -oyaml", "oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 7 1", "oc patch kubeapiserver cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"$( date --rfc-3339=ns )\"'\"}}' --type=merge", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 7 1", "oc patch kubecontrollermanager cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"$( date --rfc-3339=ns )\"'\"}}' --type=merge", "oc get kubecontrollermanager -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 7 1", "oc patch kubescheduler cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"$( date --rfc-3339=ns )\"'\"}}' --type=merge", "oc get kubescheduler -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 7 1", "oc adm wait-for-stable-cluster", 
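"# Hedged sketch, not part of the original document: the four forceRedeploymentReason patches above, issued as one loop over the same operator resources", "for resource in etcd kubeapiserver kubecontrollermanager kubescheduler; do oc patch ${resource} cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"$( date --rfc-3339=ns )\"'\"}}' --type=merge; done", 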
"oc -n openshift-etcd get pods -l k8s-app=etcd", "etcd-ip-10-0-143-125.ec2.internal 2/2 Running 0 9h etcd-ip-10-0-154-194.ec2.internal 2/2 Running 0 9h etcd-ip-10-0-173-171.ec2.internal 2/2 Running 0 9h", "export KUBECONFIG=<installation_directory>/auth/kubeconfig", "oc whoami", "oc get poddisruptionbudget --all-namespaces", "NAMESPACE NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE openshift-apiserver openshift-apiserver-pdb N/A 1 1 121m openshift-cloud-controller-manager aws-cloud-controller-manager 1 N/A 1 125m openshift-cloud-credential-operator pod-identity-webhook 1 N/A 1 117m openshift-cluster-csi-drivers aws-ebs-csi-driver-controller-pdb N/A 1 1 121m openshift-cluster-storage-operator csi-snapshot-controller-pdb N/A 1 1 122m openshift-cluster-storage-operator csi-snapshot-webhook-pdb N/A 1 1 122m openshift-console console N/A 1 1 116m #", "apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 2 selector: 3 matchLabels: name: my-pod", "apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: maxUnavailable: 25% 2 selector: 3 matchLabels: name: my-pod", "oc create -f </path/to/file> -n <project_name>", "apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 selector: matchLabels: name: my-pod unhealthyPodEvictionPolicy: AlwaysAllow 1", "oc create -f pod-disruption-budget.yaml", "subscription-manager register --username=<user_name> --password=<password>", "subscription-manager refresh", "subscription-manager list --available --matches '*OpenShift*'", "subscription-manager attach --pool=<pool_id>", "subscription-manager repos --enable=\"rhel-8-for-x86_64-baseos-rpms\" --enable=\"rhel-8-for-x86_64-appstream-rpms\" --enable=\"rhocp-4.17-for-rhel-8-x86_64-rpms\"", "yum install openshift-ansible openshift-clients jq", "subscription-manager register --username=<user_name> --password=<password>", "subscription-manager refresh", "subscription-manager list --available --matches '*OpenShift*'", "subscription-manager attach --pool=<pool_id>", "subscription-manager repos --disable=\"*\"", "yum repolist", "yum-config-manager --disable <repo_id>", "yum-config-manager --disable \\*", "subscription-manager repos --enable=\"rhel-8-for-x86_64-baseos-rpms\" --enable=\"rhel-8-for-x86_64-appstream-rpms\" --enable=\"rhocp-4.17-for-rhel-8-x86_64-rpms\" --enable=\"fast-datapath-for-rhel-8-x86_64-rpms\"", "systemctl disable --now firewalld.service", "[all:vars] ansible_user=root 1 #ansible_become=True 2 openshift_kubeconfig_path=\"~/.kube/config\" 3 [new_workers] 4 mycluster-rhel8-0.example.com mycluster-rhel8-1.example.com", "cd /usr/share/ansible/openshift-ansible", "ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1", "oc get nodes -o wide", "oc adm cordon <node_name> 1", "oc adm drain <node_name> --force --delete-emptydir-data --ignore-daemonsets 1", "oc delete nodes <node_name> 1", "oc get nodes -o wide", "oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign", "curl -k http://<HTTP_server>/worker.ign", "RHCOS_VHD_ORIGIN_URL=USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.<architecture>.artifacts.metal.formats.iso.disk.location')", "sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2", "sudo coreos-installer install 
--ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b", "DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 2", "kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot", "menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3", "oc project openshift-machine-api", "oc get secret worker-user-data --template='{{index .data.userData | base64decode}}' | jq > userData.txt", "{ \"ignition\": { \"config\": { \"merge\": [ { \"source\": \"https:....\" } ] }, \"security\": { \"tls\": { \"certificateAuthorities\": [ { \"source\": \"data:text/plain;charset=utf-8;base64,.....==\" } ] } }, \"version\": \"3.2.0\" }, \"storage\": { \"disks\": [ { \"device\": \"/dev/nvme1n1\", 1 \"partitions\": [ { \"label\": \"var\", \"sizeMiB\": 50000, 2 \"startMiB\": 0 3 } ] } ], \"filesystems\": [ { \"device\": \"/dev/disk/by-partlabel/var\", 4 \"format\": \"xfs\", 5 \"path\": \"/var\" 6 } ] }, \"systemd\": { \"units\": [ 7 { \"contents\": \"[Unit]\\nBefore=local-fs.target\\n[Mount]\\nWhere=/var\\nWhat=/dev/disk/by-partlabel/var\\nOptions=defaults,pquota\\n[Install]\\nWantedBy=local-fs.target\\n\", \"enabled\": true, \"name\": \"var.mount\" } ] } }", "oc get secret worker-user-data --template='{{index .data.disableTemplating | base64decode}}' | jq > 
disableTemplating.txt", "oc create secret generic worker-user-data-x5 --from-file=userData=userData.txt --from-file=disableTemplating=disableTemplating.txt", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: auto-52-92tf4 name: worker-us-east-2-nvme1n1 1 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: auto-52-92tf4 machine.openshift.io/cluster-api-machineset: auto-52-92tf4-worker-us-east-2b template: metadata: labels: machine.openshift.io/cluster-api-cluster: auto-52-92tf4 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: auto-52-92tf4-worker-us-east-2b spec: metadata: {} providerSpec: value: ami: id: ami-0c2dbd95931a apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - DeviceName: /dev/nvme1n1 2 ebs: encrypted: true iops: 0 volumeSize: 120 volumeType: gp2 - DeviceName: /dev/nvme1n2 3 ebs: encrypted: true iops: 0 volumeSize: 50 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: auto-52-92tf4-worker-profile instanceType: m6i.large kind: AWSMachineProviderConfig metadata: creationTimestamp: null placement: availabilityZone: us-east-2b region: us-east-2 securityGroups: - filters: - name: tag:Name values: - auto-52-92tf4-worker-sg subnet: id: subnet-07a90e5db1 tags: - name: kubernetes.io/cluster/auto-52-92tf4 value: owned userDataSecret: name: worker-user-data-x5 4", "oc create -f <file-name>.yaml", "oc get machineset", "NAME DESIRED CURRENT READY AVAILABLE AGE ci-ln-2675bt2-76ef8-bdgsc-worker-us-east-1a 1 1 1 1 124m ci-ln-2675bt2-76ef8-bdgsc-worker-us-east-1b 2 2 2 2 124m worker-us-east-2-nvme1n1 1 1 1 1 2m35s 1", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-128-78.ec2.internal Ready worker 117m v1.30.3 ip-10-0-146-113.ec2.internal Ready master 127m v1.30.3 ip-10-0-153-35.ec2.internal Ready worker 118m v1.30.3 ip-10-0-176-58.ec2.internal Ready master 126m v1.30.3 ip-10-0-217-135.ec2.internal Ready worker 2m57s v1.30.3 1 ip-10-0-225-248.ec2.internal Ready master 127m v1.30.3 ip-10-0-245-59.ec2.internal Ready worker 116m v1.30.3", "oc debug node/<node-name> -- chroot /host lsblk", "oc debug node/ip-10-0-217-135.ec2.internal -- chroot /host lsblk", "NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT nvme0n1 202:0 0 120G 0 disk |-nvme0n1p1 202:1 0 1M 0 part |-nvme0n1p2 202:2 0 127M 0 part |-nvme0n1p3 202:3 0 384M 0 part /boot `-nvme0n1p4 202:4 0 119.5G 0 part /sysroot nvme1n1 202:16 0 50G 0 disk `-nvme1n1p1 202:17 0 48.8G 0 part /var 1", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 4 unhealthyConditions: - type: \"Ready\" timeout: \"300s\" 5 status: \"False\" - type: \"Ready\" timeout: \"300s\" 6 status: \"Unknown\" maxUnhealthy: \"40%\" 7 nodeStartupTimeout: \"10m\" 8", "oc apply -f healthcheck.yml", "oc get machinesets.machine.openshift.io -n openshift-machine-api", "oc get machines.machine.openshift.io -n openshift-machine-api", "oc annotate machines.machine.openshift.io/<machine_name> -n openshift-machine-api 
machine.openshift.io/delete-machine=\"true\"", "oc scale --replicas=2 machinesets.machine.openshift.io <machineset> -n openshift-machine-api", "oc edit machinesets.machine.openshift.io <machineset> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2", "oc get machines.machine.openshift.io", "kubeletConfig: podsPerCore: 10", "kubeletConfig: maxPods: 250", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]}", "oc get kubeletconfig", "NAME AGE set-kubelet-config 15m", "oc get mc | grep kubelet", "99-worker-generated-kubelet-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m", "oc describe machineconfigpool <name>", "oc describe machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: set-kubelet-config 1", "oc label machineconfigpool worker custom-kubelet=set-kubelet-config", "oc get machineconfig", "oc describe node <node_name>", "oc describe node ci-ln-5grqprb-f76d1-ncnqq-worker-a-mdv94", "Allocatable: attachable-volumes-aws-ebs: 25 cpu: 3500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 15341844Ki pods: 250", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-config spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-kubelet-config 1 kubeletConfig: 2 podPidsLimit: 8192 containerLogMaxSize: 50Mi maxPods: 500", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-config spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-kubelet-config kubeletConfig: maxPods: <pod_count> kubeAPIBurst: <burst_rate> kubeAPIQPS: <QPS>", "oc label machineconfigpool worker custom-kubelet=set-kubelet-config", "oc create -f change-maxPods-cr.yaml", "oc get kubeletconfig", "NAME AGE set-kubelet-config 15m", "oc describe node <node_name>", "Allocatable: attachable-volumes-gce-pd: 127 cpu: 3500m ephemeral-storage: 123201474766 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 14225400Ki pods: 500 1", "oc get kubeletconfigs set-kubelet-config -o yaml", "spec: kubeletConfig: containerLogMaxSize: 50Mi maxPods: 500 podPidsLimit: 8192 machineConfigPoolSelector: matchLabels: custom-kubelet: set-kubelet-config status: conditions: - lastTransitionTime: \"2021-06-30T17:04:07Z\" message: Success status: \"True\" type: Success", "oc edit machineconfigpool worker", "spec: maxUnavailable: <node_count>", "oc label node perf-node.example.com cpumanager=true", "oc edit machineconfigpool worker", "metadata: creationTimestamp: 2020-xx-xxx generation: 3 labels: custom-kubelet: cpumanager-enabled", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2", "oc create -f cpumanager-kubeletconfig.yaml", "oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7", "\"ownerReferences\": [ { \"apiVersion\": \"machineconfiguration.openshift.io/v1\", \"kind\": \"KubeletConfig\", \"name\": \"cpumanager-enabled\", \"uid\": \"7ed5616d-6b72-11e9-aae1-021e1ce18878\" } ]", "oc debug 
node/perf-node.example.com sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep cpuManager", "cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2", "oc new-project <project_name>", "cat cpumanager-pod.yaml", "apiVersion: v1 kind: Pod metadata: generateName: cpumanager- spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: cpumanager image: gcr.io/google_containers/pause:3.2 resources: requests: cpu: 1 memory: \"1G\" limits: cpu: 1 memory: \"1G\" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] nodeSelector: cpumanager: \"true\"", "oc create -f cpumanager-pod.yaml", "oc describe pod cpumanager", "Name: cpumanager-6cqz7 Namespace: default Priority: 0 PriorityClassName: <none> Node: perf-node.example.com/xxx.xx.xx.xxx Limits: cpu: 1 memory: 1G Requests: cpu: 1 memory: 1G QoS Class: Guaranteed Node-Selectors: cpumanager=true", "oc describe node --selector='cpumanager=true' | grep -i cpumanager- -B2", "NAMESPACE NAME CPU Requests CPU Limits Memory Requests Memory Limits Age cpuman cpumanager-mlrrz 1 (28%) 1 (28%) 1G (13%) 1G (13%) 27m", "oc debug node/perf-node.example.com", "sh-4.2# systemctl status | grep -B5 pause", "├─init.scope │ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17 └─kubepods.slice ├─kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice │ ├─crio-b5437308f1a574c542bdf08563b865c0345c8f8c0b0a655612c.scope │ └─32706 /pause", "cd /sys/fs/cgroup/kubepods.slice/kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice/crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope", "for i in `ls cpuset.cpus cgroup.procs` ; do echo -n \"$i \"; cat $i ; done", "cpuset.cpus 1 tasks 32706", "grep ^Cpus_allowed_list /proc/32706/status", "Cpus_allowed_list: 1", "cat /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc494a073_6b77_11e9_98c0_06bba5c387ea.slice/crio-c56982f57b75a2420947f0afc6cafe7534c5734efc34157525fa9abbf99e3849.scope/cpuset.cpus", "oc describe node perf-node.example.com", "Capacity: attachable-volumes-aws-ebs: 39 cpu: 2 ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8162900Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7548500Ki pods: 250 ------- ---- ------------ ---------- --------------- ------------- --- default cpumanager-6cqz7 1 (66%) 1 (66%) 1G (12%) 1G (12%) 29m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) 
Resource Requests Limits -------- -------- ------ cpu 1440m (96%) 1 (66%)", "NAME READY STATUS RESTARTS AGE cpumanager-6cqz7 1/1 Running 0 33m cpumanager-7qc2t 0/1 Pending 0 11s", "apiVersion: v1 kind: Pod metadata: generateName: hugepages-volume- spec: containers: - securityContext: privileged: true image: rhel7:latest command: - sleep - inf name: example volumeMounts: - mountPath: /dev/hugepages name: hugepage resources: limits: hugepages-2Mi: 100Mi 1 memory: \"1Gi\" cpu: \"1\" volumes: - name: hugepage emptyDir: medium: HugePages", "oc label node <node_using_hugepages> node-role.kubernetes.io/worker-hp=", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: hugepages 1 namespace: openshift-cluster-node-tuning-operator spec: profile: 2 - data: | [main] summary=Boot time configuration for hugepages include=openshift-node [bootloader] cmdline_openshift_node_hugepages=hugepagesz=2M hugepages=50 3 name: openshift-node-hugepages recommend: - machineConfigLabels: 4 machineconfiguration.openshift.io/role: \"worker-hp\" priority: 30 profile: openshift-node-hugepages", "oc create -f hugepages-tuned-boottime.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-hp labels: worker-hp: \"\" spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-hp]} nodeSelector: matchLabels: node-role.kubernetes.io/worker-hp: \"\"", "oc create -f hugepages-mcp.yaml", "oc get node <node_using_hugepages> -o jsonpath=\"{.status.allocatable.hugepages-2Mi}\" 100Mi", "service DevicePlugin { // GetDevicePluginOptions returns options to be communicated with Device // Manager rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {} // ListAndWatch returns a stream of List of Devices // Whenever a Device state change or a Device disappears, ListAndWatch // returns the new list rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {} // Allocate is called during container creation so that the Device // Plug-in can run device specific operations and instruct Kubelet // of the steps to make the Device available in the container rpc Allocate(AllocateRequest) returns (AllocateResponse) {} // PreStartContainer is called, if indicated by Device Plug-in during // registration phase, before each container start. 
Device plug-in // can run device specific operations such as resetting the device // before making devices available to the container rpc PreStartContainer(PreStartContainerRequest) returns (PreStartContainerResponse) {} }", "oc describe machineconfig <name>", "oc describe machineconfig 00-worker", "Name: 00-worker Namespace: Labels: machineconfiguration.openshift.io/role=worker 1", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: devicemgr 1 spec: machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io: devicemgr 2 kubeletConfig: feature-gates: - DevicePlugins=true 3", "oc create -f devicemgr.yaml", "kubeletconfig.machineconfiguration.openshift.io/devicemgr created", "apiVersion: v1 kind: Node metadata: name: my-node # spec: taints: - effect: NoExecute key: key1 value: value1 #", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" tolerationSeconds: 3600 #", "apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node # spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" 1 value: \"value1\" operator: \"Equal\" effect: \"NoExecute\" tolerationSeconds: 3600 2 #", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Exists\" 1 effect: \"NoExecute\" tolerationSeconds: 3600 #", "oc adm taint nodes <node_name> <key>=<value>:<effect>", "oc adm taint nodes node1 key1=value1:NoExecute", "apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node # spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" 1 value: \"value1\" operator: \"Equal\" effect: \"NoExecute\" tolerationSeconds: 3600 2 #", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 3600 #", "oc edit machineset <machineset>", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: my-machineset # spec: # template: # spec: taints: - effect: NoExecute key: key1 value: value1 #", "oc scale --replicas=0 machineset <machineset> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 0", "oc scale --replicas=2 machineset <machineset> -n openshift-machine-api", "oc edit machineset <machineset> -n openshift-machine-api", "oc adm taint nodes node1 dedicated=groupName:NoSchedule", "kind: Node apiVersion: v1 metadata: name: my-node # spec: taints: - key: dedicated value: groupName effect: NoSchedule #", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"disktype\" value: \"ssd\" operator: \"Equal\" effect: \"NoSchedule\" tolerationSeconds: 3600 #", "oc adm taint nodes <node-name> disktype=ssd:NoSchedule", "oc adm taint nodes <node-name> disktype=ssd:PreferNoSchedule", "kind: Node apiVersion: v1 metadata: name: my_node # spec: 
taints: - key: disktype value: ssd effect: PreferNoSchedule #", "oc adm taint nodes <node-name> <key>-", "oc adm taint nodes ip-10-0-132-248.ec2.internal key1-", "node/ip-10-0-132-248.ec2.internal untainted", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key2\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 3600 #", "oc edit KubeletConfig cpumanager-enabled", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s topologyManagerPolicy: single-numa-node 2", "spec: containers: - name: nginx image: nginx", "spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" requests: memory: \"100Mi\"", "spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\" requests: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\"", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4", "apiVersion: v1 kind: Namespace metadata: labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: \"true\"", "apiVersion: v1 kind: Pod metadata: name: my-pod namespace: my-namespace spec: containers: - name: hello-openshift image: openshift/hello-openshift resources: limits: memory: \"512Mi\" cpu: \"2000m\"", "apiVersion: v1 kind: Pod metadata: name: my-pod namespace: my-namespace spec: containers: - image: openshift/hello-openshift name: hello-openshift resources: limits: cpu: \"1\" 1 memory: 512Mi requests: cpu: 250m 2 memory: 256Mi", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.autoscaling.openshift.io/v1\",\"kind\":\"ClusterResourceOverride\",\"metadata\":{\"annotations\":{},\"name\":\"cluster\"},\"spec\":{\"podResourceOverride\":{\"spec\":{\"cpuRequestToLimitPercent\":25,\"limitCPUToMemoryPercent\":200,\"memoryRequestToLimitPercent\":50}}}} creationTimestamp: \"2019-12-18T22:35:02Z\" generation: 1 name: cluster resourceVersion: \"127622\" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: \"127621\" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3", "apiVersion: v1 kind: Namespace metadata: name: clusterresourceoverride-operator", "oc create -f <file-name>.yaml", "oc create -f cro-namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: clusterresourceoverride-operator namespace: clusterresourceoverride-operator spec: targetNamespaces: - clusterresourceoverride-operator", "oc create -f 
<file-name>.yaml", "oc create -f cro-og.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: clusterresourceoverride namespace: clusterresourceoverride-operator spec: channel: \"stable\" name: clusterresourceoverride source: redhat-operators sourceNamespace: openshift-marketplace", "oc create -f <file-name>.yaml", "oc create -f cro-sub.yaml", "oc project clusterresourceoverride-operator", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4", "oc create -f <file-name>.yaml", "oc create -f cro-cr.yaml", "oc get clusterresourceoverride cluster -n clusterresourceoverride-operator -o yaml", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.autoscaling.openshift.io/v1\",\"kind\":\"ClusterResourceOverride\",\"metadata\":{\"annotations\":{},\"name\":\"cluster\"},\"spec\":{\"podResourceOverride\":{\"spec\":{\"cpuRequestToLimitPercent\":25,\"limitCPUToMemoryPercent\":200,\"memoryRequestToLimitPercent\":50}}}} creationTimestamp: \"2019-12-18T22:35:02Z\" generation: 1 name: cluster resourceVersion: \"127622\" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: \"127621\" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 1 cpuRequestToLimitPercent: 25 2 limitCPUToMemoryPercent: 200 3", "apiVersion: v1 kind: Namespace metadata: labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: \"true\" 1", "sysctl -a |grep commit", "# vm.overcommit_memory = 0 #", "sysctl -a |grep panic", "# vm.panic_on_oom = 0 #", "oc edit machineconfigpool <name>", "oc edit machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker", "oc label machineconfigpool worker custom-kubelet=small-pods", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: disable-cpu-units 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: cpuCfsQuota: false 3", "oc create -f <file_name>.yaml", "sysctl -w vm.overcommit_memory=0", "apiVersion: v1 kind: Namespace metadata: annotations: quota.openshift.io/cluster-resource-override-enabled: \"false\" <.>", "oc edit machineconfigpool <name>", "oc edit machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #", "oc label machineconfigpool worker custom-kubelet=small-pods", "apiVersion: 
machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: worker-kubeconfig 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: evictionSoft: 3 memory.available: \"500Mi\" 4 nodefs.available: \"10%\" nodefs.inodesFree: \"5%\" imagefs.available: \"15%\" imagefs.inodesFree: \"10%\" evictionSoftGracePeriod: 5 memory.available: \"1m30s\" nodefs.available: \"1m30s\" nodefs.inodesFree: \"1m30s\" imagefs.available: \"1m30s\" imagefs.inodesFree: \"1m30s\" evictionHard: 6 memory.available: \"200Mi\" nodefs.available: \"5%\" nodefs.inodesFree: \"4%\" imagefs.available: \"10%\" imagefs.inodesFree: \"5%\" evictionPressureTransitionPeriod: 3m 7 imageMinimumGCAge: 5m 8 imageGCHighThresholdPercent: 80 9 imageGCLowThresholdPercent: 75 10 #", "oc create -f <file_name>.yaml", "oc create -f gc-container.yaml", "kubeletconfig.machineconfiguration.openshift.io/gc-container created", "oc get machineconfigpool", "NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True", "get tuned.tuned.openshift.io/default -o yaml -n openshift-cluster-node-tuning-operator", "profile: - name: tuned_profile_1 data: | # TuneD profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... other sysctl's or other TuneD daemon plugins supported by the containerized TuneD - name: tuned_profile_n data: | # TuneD profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings", "recommend: <recommend-item-1> <recommend-item-n>", "- machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8 tunedConfig: reapply_sysctl: <bool> 9", "- label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4", "- match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: \"worker-custom\" priority: 20 profile: openshift-node-custom", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: provider-gce namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=GCE Cloud provider-specific profile # Your tuning for GCE Cloud provider goes here. 
name: provider-gce", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Optimize systems running OpenShift (provider specific parent profile) include=-provider-USD{f:exec:cat:/var/lib/ocp-tuned/provider},openshift name: openshift recommend: - profile: openshift-control-plane priority: 30 match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra - profile: openshift-node priority: 40", "oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/openshift{,-control-plane,-node} -name tuned.conf -exec grep -H ^ {} \\;", "oc edit machineconfigpool <name>", "oc edit machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #", "oc label machineconfigpool worker custom-kubelet=small-pods", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: podsPerCore: 10 3 maxPods: 250 4 #", "oc create -f <file_name>.yaml", "oc get machineconfigpools", "NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False False False worker worker-8cecd1236b33ee3f8a5e False True False", "oc get machineconfigpools", "NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False True False worker worker-8cecd1236b33ee3f8a5e True False False", "apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> name: <infrastructure_id>-<role> namespace: openshift-machine-api spec: lifecycleHooks: {} metadata: {} providerSpec: value: apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: creationTimestamp: null network: devices: - gateway: 192.168.204.1 1 ipAddrs: - 192.168.204.8/24 2 nameservers: 3 - 192.168.204.1 networkName: qe-segment-204 numCPUs: 4 numCoresPerSocket: 2 snapshot: \"\" template: <vm_template_name> userDataSecret: name: worker-user-data workspace: datacenter: <vcenter_data_center_name> datastore: <vcenter_datastore_name> folder: <vcenter_vm_folder_path> resourcepool: <vsphere_resource_pool> server: <vcenter_server_ip> status: {}", "oc create -f <file_name>.yaml", "oc create -f <ipaddressclaim_filename>", "kind: IPAddressClaim metadata: finalizers: - machine.openshift.io/ip-claim-protection name: cluster-dev-9n5wg-worker-0-m7529-claim-0-0 namespace: openshift-machine-api spec: poolRef: apiGroup: ipamcontroller.example.io kind: IPPool name: static-ci-pool status: {}", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: annotations: machine.openshift.io/memoryMb: \"8192\" machine.openshift.io/vCPU: \"4\" labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> name: <infrastructure_id>-<role> namespace: openshift-machine-api spec: replicas: 0 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 
machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: ipam: \"true\" machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: lifecycleHooks: {} metadata: {} providerSpec: value: apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: {} network: devices: - addressesFromPools: 1 - group: ipamcontroller.example.io name: static-ci-pool resource: IPPool nameservers: - \"192.168.204.1\" 2 networkName: qe-segment-204 numCPUs: 4 numCoresPerSocket: 2 snapshot: \"\" template: rvanderp4-dev-9n5wg-rhcos-generated-region-generated-zone userDataSecret: name: worker-user-data workspace: datacenter: IBMCdatacenter datastore: /IBMCdatacenter/datastore/vsanDatastore folder: /IBMCdatacenter/vm/rvanderp4-dev-9n5wg resourcePool: /IBMCdatacenter/host/IBMCcluster//Resources server: vcenter.ibmc.devcluster.openshift.com", "oc scale --replicas=2 machineset <machineset> -n openshift-machine-api", "oc edit machineset <machineset> -n openshift-machine-api", "oc get ipaddressclaims.ipam.cluster.x-k8s.io -n openshift-machine-api", "NAME POOL NAME POOL KIND cluster-dev-9n5wg-worker-0-m7529-claim-0-0 static-ci-pool IPPool cluster-dev-9n5wg-worker-0-wdqkt-claim-0-0 static-ci-pool IPPool", "oc create -f ipaddress.yaml", "apiVersion: ipam.cluster.x-k8s.io/v1alpha1 kind: IPAddress metadata: name: cluster-dev-9n5wg-worker-0-m7529-ipaddress-0-0 namespace: openshift-machine-api spec: address: 192.168.204.129 claimRef: 1 name: cluster-dev-9n5wg-worker-0-m7529-claim-0-0 gateway: 192.168.204.1 poolRef: 2 apiGroup: ipamcontroller.example.io kind: IPPool name: static-ci-pool prefix: 23", "oc --type=merge patch IPAddressClaim cluster-dev-9n5wg-worker-0-m7529-claim-0-0 -p='{\"status\":{\"addressRef\": {\"name\": \"cluster-dev-9n5wg-worker-0-m7529-ipaddress-0-0\"}}}' -n openshift-machine-api --subresource=status", "oc adm create-bootstrap-project-template -o yaml > template.yaml", "oc create -f template.yaml -n openshift-config", "oc edit project.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: <template_name>", "oc edit template <project_template> -n openshift-config", "objects: - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {} - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-kube-apiserver-operator spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-kube-apiserver-operator podSelector: matchLabels: app: kube-apiserver-operator policyTypes: - Ingress", "oc new-project <project> 1", "oc get networkpolicy NAME POD-SELECTOR AGE allow-from-openshift-ingress <none> 7s allow-from-same-namespace <none> 7s", "oc get is <imagestream> -n openshift -o json | jq .spec.tags[].from.name | grep registry.redhat.io", "oc image mirror registry.redhat.io/rhscl/ruby-25-rhel7:latest USD{MIRROR_ADDR}/rhscl/ruby-25-rhel7:latest", 
"oc create configmap registry-config --from-file=USD{MIRROR_ADDR_HOSTNAME}..5000=USDpath/ca.crt -n openshift-config", "oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"registry-config\"}}}' --type=merge", "oc edit configs.samples.operator.openshift.io -n openshift-cluster-samples-operator", "oc create configmap registry-config --from-file=USD{MIRROR_ADDR_HOSTNAME}..5000=USDpath/ca.crt -n openshift-config", "oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"registry-config\"}}}' --type=merge", "oc import-image is/must-gather -n openshift", "oc adm must-gather --image=USD(oc adm release info --image-for must-gather)", "get imagestreams -nopenshift", "oc get is <image-stream-name> -o jsonpath=\"{range .spec.tags[*]}{.name}{'\\t'}{.from.name}{'\\n'}{end}\" -nopenshift", "oc get is ubi8-openjdk-17 -o jsonpath=\"{range .spec.tags[*]}{.name}{'\\t'}{.from.name}{'\\n'}{end}\" -nopenshift", "1.11 registry.access.redhat.com/ubi8/openjdk-17:1.11 1.12 registry.access.redhat.com/ubi8/openjdk-17:1.12", "oc tag <repository/image> <image-stream-name:tag> --scheduled -nopenshift", "oc tag registry.access.redhat.com/ubi8/openjdk-17:1.11 ubi8-openjdk-17:1.11 --scheduled -nopenshift oc tag registry.access.redhat.com/ubi8/openjdk-17:1.12 ubi8-openjdk-17:1.12 --scheduled -nopenshift", "get imagestream <image-stream-name> -o jsonpath=\"{range .spec.tags[*]}Tag: {.name}{'\\t'}Scheduled: {.importPolicy.scheduled}{'\\n'}{end}\" -nopenshift", "get imagestream ubi8-openjdk-17 -o jsonpath=\"{range .spec.tags[*]}Tag: {.name}{'\\t'}Scheduled: {.importPolicy.scheduled}{'\\n'}{end}\" -nopenshift", "Tag: 1.11 Scheduled: true Tag: 1.12 Scheduled: true", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: my_identity_provider 1 mappingMethod: claim 2 type: HTPasswd htpasswd: fileData: name: htpass-secret 3", "oc describe clusterrole.rbac", "Name: admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- .packages.apps.redhat.com [] [] [* create update patch delete get list watch] imagestreams [] [] [create delete deletecollection get list patch update watch create get list watch] imagestreams.image.openshift.io [] [] [create delete deletecollection get list patch update watch create get list watch] secrets [] [] [create delete deletecollection get list patch update watch get list watch create delete deletecollection patch update] buildconfigs/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates [] [] [create delete deletecollection get list patch update watch get list 
watch] routes [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances [] [] [create delete deletecollection get list patch update watch get list watch] templates [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] routes.route.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] serviceaccounts [] [] [create delete deletecollection get list patch update watch impersonate create delete deletecollection patch update get list watch] imagestreams/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings [] [] [create delete deletecollection get list patch update watch] roles [] [] [create delete deletecollection get list patch update watch] rolebindings.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] roles.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] imagestreams.image.openshift.io/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] roles.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] networkpolicies.extensions [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] networkpolicies.networking.k8s.io [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] configmaps [] [] [create delete deletecollection patch update get list watch] endpoints [] [] [create delete deletecollection patch update get list watch] persistentvolumeclaims [] [] [create delete deletecollection patch update get list watch] pods [] [] [create delete deletecollection patch update get list watch] replicationcontrollers/scale [] [] [create delete deletecollection patch update get list 
watch] replicationcontrollers [] [] [create delete deletecollection patch update get list watch] services [] [] [create delete deletecollection patch update get list watch] daemonsets.apps [] [] [create delete deletecollection patch update get list watch] deployments.apps/scale [] [] [create delete deletecollection patch update get list watch] deployments.apps [] [] [create delete deletecollection patch update get list watch] replicasets.apps/scale [] [] [create delete deletecollection patch update get list watch] replicasets.apps [] [] [create delete deletecollection patch update get list watch] statefulsets.apps/scale [] [] [create delete deletecollection patch update get list watch] statefulsets.apps [] [] [create delete deletecollection patch update get list watch] horizontalpodautoscalers.autoscaling [] [] [create delete deletecollection patch update get list watch] cronjobs.batch [] [] [create delete deletecollection patch update get list watch] jobs.batch [] [] [create delete deletecollection patch update get list watch] daemonsets.extensions [] [] [create delete deletecollection patch update get list watch] deployments.extensions/scale [] [] [create delete deletecollection patch update get list watch] deployments.extensions [] [] [create delete deletecollection patch update get list watch] ingresses.extensions [] [] [create delete deletecollection patch update get list watch] replicasets.extensions/scale [] [] [create delete deletecollection patch update get list watch] replicasets.extensions [] [] [create delete deletecollection patch update get list watch] replicationcontrollers.extensions/scale [] [] [create delete deletecollection patch update get list watch] poddisruptionbudgets.policy [] [] [create delete deletecollection patch update get list watch] deployments.apps/rollback [] [] [create delete deletecollection patch update] deployments.extensions/rollback [] [] [create delete deletecollection patch update] catalogsources.operators.coreos.com [] [] [create update patch delete get list watch] clusterserviceversions.operators.coreos.com [] [] [create update patch delete get list watch] installplans.operators.coreos.com [] [] [create update patch delete get list watch] packagemanifests.operators.coreos.com [] [] [create update patch delete get list watch] subscriptions.operators.coreos.com [] [] [create update patch delete get list watch] buildconfigs/instantiate [] [] [create] buildconfigs/instantiatebinary [] [] [create] builds/clone [] [] [create] deploymentconfigrollbacks [] [] [create] deploymentconfigs/instantiate [] [] [create] deploymentconfigs/rollback [] [] [create] imagestreamimports [] [] [create] localresourceaccessreviews [] [] [create] localsubjectaccessreviews [] [] [create] podsecuritypolicyreviews [] [] [create] podsecuritypolicyselfsubjectreviews [] [] [create] podsecuritypolicysubjectreviews [] [] [create] resourceaccessreviews [] [] [create] routes/custom-host [] [] [create] subjectaccessreviews [] [] [create] subjectrulesreviews [] [] [create] deploymentconfigrollbacks.apps.openshift.io [] [] [create] deploymentconfigs.apps.openshift.io/instantiate [] [] [create] deploymentconfigs.apps.openshift.io/rollback [] [] [create] localsubjectaccessreviews.authorization.k8s.io [] [] [create] localresourceaccessreviews.authorization.openshift.io [] [] [create] localsubjectaccessreviews.authorization.openshift.io [] [] [create] resourceaccessreviews.authorization.openshift.io [] [] [create] subjectaccessreviews.authorization.openshift.io [] [] [create] 
subjectrulesreviews.authorization.openshift.io [] [] [create] buildconfigs.build.openshift.io/instantiate [] [] [create] buildconfigs.build.openshift.io/instantiatebinary [] [] [create] builds.build.openshift.io/clone [] [] [create] imagestreamimports.image.openshift.io [] [] [create] routes.route.openshift.io/custom-host [] [] [create] podsecuritypolicyreviews.security.openshift.io [] [] [create] podsecuritypolicyselfsubjectreviews.security.openshift.io [] [] [create] podsecuritypolicysubjectreviews.security.openshift.io [] [] [create] jenkins.build.openshift.io [] [] [edit view view admin edit view] builds [] [] [get create delete deletecollection get list patch update watch get list watch] builds.build.openshift.io [] [] [get create delete deletecollection get list patch update watch get list watch] projects [] [] [get delete get delete get patch update] projects.project.openshift.io [] [] [get delete get delete get patch update] namespaces [] [] [get get list watch] pods/attach [] [] [get list watch create delete deletecollection patch update] pods/exec [] [] [get list watch create delete deletecollection patch update] pods/portforward [] [] [get list watch create delete deletecollection patch update] pods/proxy [] [] [get list watch create delete deletecollection patch update] services/proxy [] [] [get list watch create delete deletecollection patch update] routes/status [] [] [get list watch update] routes.route.openshift.io/status [] [] [get list watch update] appliedclusterresourcequotas [] [] [get list watch] bindings [] [] [get list watch] builds/log [] [] [get list watch] deploymentconfigs/log [] [] [get list watch] deploymentconfigs/status [] [] [get list watch] events [] [] [get list watch] imagestreams/status [] [] [get list watch] limitranges [] [] [get list watch] namespaces/status [] [] [get list watch] pods/log [] [] [get list watch] pods/status [] [] [get list watch] replicationcontrollers/status [] [] [get list watch] resourcequotas/status [] [] [get list watch] resourcequotas [] [] [get list watch] resourcequotausages [] [] [get list watch] rolebindingrestrictions [] [] [get list watch] deploymentconfigs.apps.openshift.io/log [] [] [get list watch] deploymentconfigs.apps.openshift.io/status [] [] [get list watch] controllerrevisions.apps [] [] [get list watch] rolebindingrestrictions.authorization.openshift.io [] [] [get list watch] builds.build.openshift.io/log [] [] [get list watch] imagestreams.image.openshift.io/status [] [] [get list watch] appliedclusterresourcequotas.quota.openshift.io [] [] [get list watch] imagestreams/layers [] [] [get update get] imagestreams.image.openshift.io/layers [] [] [get update get] builds/details [] [] [update] builds.build.openshift.io/details [] [] [update] Name: basic-user Labels: <none> Annotations: openshift.io/description: A user that can get basic information about projects. 
rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- selfsubjectrulesreviews [] [] [create] selfsubjectaccessreviews.authorization.k8s.io [] [] [create] selfsubjectrulesreviews.authorization.openshift.io [] [] [create] clusterroles.rbac.authorization.k8s.io [] [] [get list watch] clusterroles [] [] [get list] clusterroles.authorization.openshift.io [] [] [get list] storageclasses.storage.k8s.io [] [] [get list] users [] [~] [get] users.user.openshift.io [] [~] [get] projects [] [] [list watch] projects.project.openshift.io [] [] [list watch] projectrequests [] [] [list] projectrequests.project.openshift.io [] [] [list] Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- *.* [] [] [*] [*] [] [*]", "oc describe clusterrolebinding.rbac", "Name: alertmanager-main Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: alertmanager-main Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount alertmanager-main openshift-monitoring Name: basic-users Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: basic-user Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated Name: cloud-credential-operator-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cloud-credential-operator-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-cloud-credential-operator Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:masters Name: cluster-admins Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:cluster-admins User system:admin Name: cluster-api-manager-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cluster-api-manager-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-machine-api", "oc describe rolebinding.rbac", "oc describe rolebinding.rbac -n joe-project", "Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe-project Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe-project Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. 
It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe-project", "oc adm policy add-role-to-user <role> <user> -n <project>", "oc adm policy add-role-to-user admin alice -n joe", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: admin-0 namespace: joe roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: admin subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: alice", "oc describe rolebinding.rbac -n <project>", "oc describe rolebinding.rbac -n joe", "Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: admin-0 Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User alice 1 Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe", "oc create role <name> --verb=<verb> --resource=<resource> -n <project>", "oc create role podview --verb=get --resource=pod -n blue", "oc adm policy add-role-to-user podview user2 --role-namespace=blue -n blue", "oc create clusterrole <name> --verb=<verb> --resource=<resource>", "oc create clusterrole podviewonly --verb=get --resource=pod", "oc adm policy add-cluster-role-to-user cluster-admin <user>", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: \"true\" name: <cluster_role>access-unauthenticated roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <cluster_role> subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:unauthenticated", "oc apply -f add-<cluster_role>.yaml", "INFO Install complete! INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI. INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes). 
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com INFO Login to the console with user: kubeadmin, password: <provided>", "oc delete secrets kubeadmin -n kube-system", "oc create -f <path/to/manifests/dir>/imageContentSourcePolicy.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog 1 namespace: openshift-marketplace 2 spec: sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 3 image: <registry>/<namespace>/redhat-operator-index:v4.17 4 displayName: My Operator Catalog publisher: <publisher_name> 5 updateStrategy: registryPoll: 6 interval: 30m", "oc apply -f catalogSource.yaml", "oc get pods -n openshift-marketplace", "NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h", "oc get catalogsource -n openshift-marketplace", "NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s", "oc get packagemanifest -n openshift-marketplace", "NAME CATALOG AGE jaeger-product My Operator Catalog 93s", "oc get packagemanifests -n openshift-marketplace", "NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m", "oc describe packagemanifests <operator_name> -n openshift-marketplace", "Kind: PackageManifest Install Modes: 1 Supported: true Type: OwnNamespace Supported: true Type: SingleNamespace Supported: false Type: MultiNamespace Supported: true Type: AllNamespaces Entries: Name: example-operator.v3.7.11 Version: 3.7.11 Name: example-operator.v3.7.10 Version: 3.7.10 Name: stable-3.7 2 Entries: Name: example-operator.v3.8.5 Version: 3.8.5 Name: example-operator.v3.8.4 Version: 3.8.4 Name: stable-3.8 3 Default Channel: stable-3.8 4", "oc get packagemanifests <operator_name> -n <catalog_namespace> -o yaml", "oc get packagemanifest --selector=catalog=<catalogsource_name> --field-selector metadata.name=<operator_name> -n <catalog_namespace> -o yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> 1 spec: targetNamespaces: - <namespace> 2", "oc apply -f operatorgroup.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: <namespace_per_install_mode> 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: <catalog_name> 4 sourceNamespace: <catalog_source_namespace> 5 config: env: 6 - name: ARGS value: \"-v=10\" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: \"Exists\" resources: 11 requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\" nodeSelector: 12 foo: bar", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-operator spec: channel: stable-3.7 installPlanApproval: Manual 1 name: example-operator source: custom-operators sourceNamespace: openshift-marketplace startingCSV: example-operator.v3.7.10 2", "kind: Subscription spec: installPlanApproval: Manual 1", "kind: 
Subscription spec: config: env: - name: ROLEARN value: \"<role_arn>\" 1", "kind: Subscription spec: config: env: - name: CLIENTID value: \"<client_id>\" 1 - name: TENANTID value: \"<tenant_id>\" 2 - name: SUBSCRIPTIONID value: \"<subscription_id>\" 3", "kind: Subscription spec: config: env: - name: AUDIENCE value: \"<audience_url>\" 1 - name: SERVICE_ACCOUNT_EMAIL value: \"<service_account_email>\" 2", "//iam.googleapis.com/projects/<project_number>/locations/global/workloadIdentityPools/<pool_id>/providers/<provider_id>", "<service_account_name>@<project_id>.iam.gserviceaccount.com", "oc apply -f subscription.yaml", "oc describe subscription <subscription_name> -n <namespace>", "oc describe operatorgroup <operatorgroup_name> -n <namespace>", "ccoctl <provider_name> refresh-keys \\ 1 --kubeconfig <openshift_kubeconfig_file> \\ 2 --credentials-requests-dir <path_to_credential_requests_directory> \\ 3 --name <name> 4", "oc patch kubecontrollermanager cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date )\"'\"}}' --type=merge", "oc get co kube-controller-manager", "oc -n openshift-cloud-credential-operator get CredentialsRequest -o json | jq -r '.items[] | select (.spec.providerSpec.kind==\"<provider_spec>\") | .spec.secretRef'", "{ \"name\": \"ebs-cloud-credentials\", \"namespace\": \"openshift-cluster-csi-drivers\" } { \"name\": \"cloud-credential-operator-iam-ro-creds\", \"namespace\": \"openshift-cloud-credential-operator\" }", "oc delete secret <secret_name> \\ 1 -n <secret_namespace> 2", "oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers", "RELEASE_IMAGE=USD(oc get clusterversion -o jsonpath={..desired.image})", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret", "chmod 775 ccoctl.<rhel_version>", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for {ibm-cloud-title} nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "oc get configmap --namespace openshift-kube-apiserver bound-sa-token-signing-certs --output 'go-template={{index .data \"service-account-001.pub\"}}' > ./output_dir/serviceaccount-signer.public 1", "./ccoctl azure create-oidc-issuer --name <azure_infra_name> \\ 1 --output-dir ./output_dir --region <azure_region> \\ 2 --subscription-id <azure_subscription_id> \\ 3 --tenant-id <azure_tenant_id> --public-key-file ./output_dir/serviceaccount-signer.public 4", "ll ./output_dir/manifests", "total 8 -rw-------. 1 cloud-user cloud-user 193 May 22 02:29 azure-ad-pod-identity-webhook-config.yaml 1 -rw-------. 
1 cloud-user cloud-user 165 May 22 02:29 cluster-authentication-02-config.yaml", "OIDC_ISSUER_URL=`awk '/serviceAccountIssuer/ { print USD2 }' ./output_dir/manifests/cluster-authentication-02-config.yaml`", "oc patch authentication cluster --type=merge -p \"{\\\"spec\\\":{\\\"serviceAccountIssuer\\\":\\\"USD{OIDC_ISSUER_URL}\\\"}}\"", "oc adm wait-for-stable-cluster", "All clusteroperators are stable", "oc adm reboot-machine-config-pool mcp/worker mcp/master", "oc adm wait-for-node-reboot nodes --all", "All nodes rebooted", "oc patch cloudcredential cluster --type=merge --patch '{\"spec\":{\"credentialsMode\":\"Manual\"}}'", "oc adm release extract --credentials-requests --included --to <path_to_directory_for_credentials_requests> --registry-config ~/.pull-secret", "AZURE_INSTALL_RG=`oc get infrastructure cluster -o jsonpath --template '{ .status.platformStatus.azure.resourceGroupName }'`", "ccoctl azure create-managed-identities --name <azure_infra_name> --output-dir ./output_dir --region <azure_region> --subscription-id <azure_subscription_id> --credentials-requests-dir <path_to_directory_for_credentials_requests> --issuer-url \"USD{OIDC_ISSUER_URL}\" --dnszone-resource-group-name <azure_dns_zone_resourcegroup_name> \\ 1 --installation-resource-group-name \"USD{AZURE_INSTALL_RG}\"", "oc apply -f ./output_dir/manifests/azure-ad-pod-identity-webhook-config.yaml", "find ./output_dir/manifests -iname \"openshift*yaml\" -print0 | xargs -I {} -0 -t oc replace -f {}", "oc adm reboot-machine-config-pool mcp/worker mcp/master", "oc adm wait-for-node-reboot nodes --all", "All nodes rebooted", "oc adm wait-for-stable-cluster", "All clusteroperators are stable", "oc delete secret -n kube-system azure-credentials", "oc get cloudcredentials cluster -o=jsonpath={.spec.credentialsMode}", "Manual", "oc get secrets -n kube-system <secret_name>", "Error from server (NotFound): secrets \"aws-creds\" not found", "oc get authentication cluster -o jsonpath --template='{ .spec.serviceAccountIssuer }'", "oc get secrets -n openshift-image-registry installer-cloud-credentials -o jsonpath='{.data}'", "oc get pods -n openshift-cloud-credential-operator", "NAME READY STATUS RESTARTS AGE cloud-credential-operator-59cf744f78-r8pbq 2/2 Running 2 71m pod-identity-webhook-548f977b4c-859lz 1/1 Running 1 70m" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/postinstallation_configuration/index
Chapter 6. Uninstalling OpenShift Data Foundation
Chapter 6. Uninstalling OpenShift Data Foundation 6.1. Uninstalling OpenShift Data Foundation in Internal mode To uninstall OpenShift Data Foundation in Internal mode, refer to the knowledge base article on Uninstalling OpenShift Data Foundation .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_openshift_data_foundation_using_microsoft_azure/uninstalling_openshift_data_foundation
Chapter 3. Creating build inputs
Chapter 3. Creating build inputs Use the following sections for an overview of build inputs, instructions on how to use inputs to provide source content for builds to operate on, and how to use build environments and create secrets. 3.1. Build inputs A build input provides source content for builds to operate on. You can use the following build inputs to provide sources in OpenShift Container Platform, listed in order of precedence: Inline Dockerfile definitions Content extracted from existing images Git repositories Binary (Local) inputs Input secrets External artifacts You can combine multiple inputs in a single build. However, as the inline Dockerfile takes precedence, it can overwrite any other file named Dockerfile provided by another input. Binary (local) input and Git repositories are mutually exclusive inputs. You can use input secrets when you do not want certain resources or credentials used during a build to be available in the final application image produced by the build, or want to consume a value that is defined in a secret resource. External artifacts can be used to pull in additional files that are not available as one of the other build input types. When you run a build: A working directory is constructed and all input content is placed in the working directory. For example, the input Git repository is cloned into the working directory, and files specified from input images are copied into the working directory using the target path. The build process changes directories into the contextDir , if one is defined. The inline Dockerfile, if any, is written to the current directory. The content from the current directory is provided to the build process for reference by the Dockerfile, custom builder logic, or assemble script. This means any input content that resides outside the contextDir is ignored by the build. The following example of a source definition includes multiple input types and an explanation of how they are combined. For more details on how each input type is defined, see the specific sections for each input type. source: git: uri: https://github.com/openshift/ruby-hello-world.git 1 ref: "master" images: - from: kind: ImageStreamTag name: myinputimage:latest namespace: mynamespace paths: - destinationDir: app/dir/injected/dir 2 sourcePath: /usr/lib/somefile.jar contextDir: "app/dir" 3 dockerfile: "FROM centos:7\nRUN yum install -y httpd" 4 1 The repository to be cloned into the working directory for the build. 2 /usr/lib/somefile.jar from myinputimage is stored in <workingdir>/app/dir/injected/dir . 3 The working directory for the build becomes <original_workingdir>/app/dir . 4 A Dockerfile with this content is created in <original_workingdir>/app/dir , overwriting any existing file with that name. 3.2. Dockerfile source When you supply a dockerfile value, the content of this field is written to disk as a file named dockerfile . This is done after other input sources are processed, so if the input source repository contains a Dockerfile in the root directory, it is overwritten with this content. The source definition is part of the spec section in the BuildConfig : source: dockerfile: "FROM centos:7\nRUN yum install -y httpd" 1 1 The dockerfile field contains an inline Dockerfile that is built. Additional resources The typical use for this field is to provide a Dockerfile to a docker strategy build. 3.3. Image source You can add additional files to the build process with images. 
Input images are referenced in the same way the From and To image targets are defined. This means both container images and image stream tags can be referenced. In conjunction with the image, you must provide one or more path pairs to indicate the path of the files or directories to copy out of the image and the destination to place them in the build context. The source path can be any absolute path within the image specified. The destination must be a relative directory path. At build time, the image is loaded and the indicated files and directories are copied into the context directory of the build process. This is the same directory into which the source repository content is cloned. If the source path ends in /. then the content of the directory is copied, but the directory itself is not created at the destination. Image inputs are specified in the source definition of the BuildConfig : source: git: uri: https://github.com/openshift/ruby-hello-world.git ref: "master" images: 1 - from: 2 kind: ImageStreamTag name: myinputimage:latest namespace: mynamespace paths: 3 - destinationDir: injected/dir 4 sourcePath: /usr/lib/somefile.jar 5 - from: kind: ImageStreamTag name: myotherinputimage:latest namespace: myothernamespace pullSecret: mysecret 6 paths: - destinationDir: injected/dir sourcePath: /usr/lib/somefile.jar 1 An array of one or more input images and files. 2 A reference to the image containing the files to be copied. 3 An array of source/destination paths. 4 The directory relative to the build root where the build process can access the file. 5 The location of the file to be copied out of the referenced image. 6 An optional secret provided if credentials are needed to access the input image. Note If your cluster uses an ImageDigestMirrorSet , ImageTagMirrorSet , or ImageContentSourcePolicy object to configure repository mirroring, you can use only global pull secrets for mirrored registries. You cannot add a pull secret to a project. Images that require pull secrets When using an input image that requires a pull secret, you can link the pull secret to the service account used by the build. By default, builds use the builder service account. The pull secret is automatically added to the build if the secret contains a credential that matches the repository hosting the input image. To link a pull secret to the service account used by the build, run: $ oc secrets link builder dockerhub Note This feature is not supported for builds using the custom strategy. Images on mirrored registries that require pull secrets When using an input image from a mirrored registry, if you get a build error: failed to pull image message, you can resolve the error by using either of the following methods: Create an input secret that contains the authentication credentials for the builder image's repository and all known mirrors. In this case, create a pull secret for credentials to the image registry and its mirrors. Use the input secret as the pull secret on the BuildConfig object.
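As a sketch of the second method, assuming a hypothetical mirror host mirror.registry.example.com and hypothetical credentials, you could create a docker-registry type secret and reference it from the pullSecret field of the build strategy, which is used when pulling the builder image:

# Create a pull secret covering the mirror registry (hypothetical host and credentials)
$ oc create secret docker-registry mirror-pull-secret \
    --docker-server=mirror.registry.example.com \
    --docker-username=<user_name> \
    --docker-password=<password>

# BuildConfig fragment referencing the secret for the builder image pull
strategy:
  sourceStrategy:
    from:
      kind: "DockerImage"
      name: "mirror.registry.example.com/mynamespace/builder:latest"
    pullSecret:
      name: "mirror-pull-secret"

If credentials are needed for the original registry and several mirrors, the individual registry entries can instead be combined into a single secret of type kubernetes.io/dockerconfigjson so that all known mirrors are covered.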
3.4. Git source When specified, source code is fetched from the supplied location. If you supply an inline Dockerfile, it overwrites the Dockerfile in the contextDir of the Git repository. The source definition is part of the spec section in the BuildConfig : source: git: 1 uri: "https://github.com/openshift/ruby-hello-world" ref: "master" contextDir: "app/dir" 2 dockerfile: "FROM openshift/ruby-22-centos7\nUSER example" 3 1 The git field contains the Uniform Resource Identifier (URI) to the remote Git repository of the source code. You must specify the value of the ref field to check out a specific Git reference. A valid ref can be a SHA1 tag or a branch name. The default value of the ref field is master . 2 The contextDir field allows you to override the default location inside the source code repository where the build looks for the application source code. If your application exists inside a sub-directory, you can override the default location (the root folder) using this field. 3 If the optional dockerfile field is provided, it should be a string containing a Dockerfile that overwrites any Dockerfile that may exist in the source repository. If the ref field denotes a pull request, the system uses a git fetch operation and then checkout FETCH_HEAD . When no ref value is provided, OpenShift Container Platform performs a shallow clone ( --depth=1 ). In this case, only the files associated with the most recent commit on the default branch (typically master ) are downloaded. This results in repositories downloading faster, but without the full commit history. To perform a full git clone of the default branch of a specified repository, set ref to the name of the default branch (for example main ). Warning Git clone operations that go through a proxy that is performing man in the middle (MITM) TLS hijacking or reencrypting of the proxied connection do not work. 3.4.1. Using a proxy If your Git repository can only be accessed using a proxy, you can define the proxy to use in the source section of the build configuration. You can configure both an HTTP and HTTPS proxy to use. Both fields are optional. Domains for which no proxying should be performed can also be specified in the noProxy field. Note Your source URI must use the HTTP or HTTPS protocol for this to work. source: git: uri: "https://github.com/openshift/ruby-hello-world" ref: "master" httpProxy: http://proxy.example.com httpsProxy: https://proxy.example.com noProxy: somedomain.com, otherdomain.com
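The same fields can also be set from the command line on an existing build configuration; a minimal sketch with oc patch , assuming a build configuration named sample-build and the example proxy endpoints above:

# Merge the proxy settings into the Git source of an existing build configuration
$ oc patch bc/sample-build --type=merge -p \
    '{"spec":{"source":{"git":{"httpProxy":"http://proxy.example.com","httpsProxy":"https://proxy.example.com","noProxy":"somedomain.com, otherdomain.com"}}}}'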
Note For Pipeline strategy builds, given the current restrictions with the Git plugin for Jenkins, any Git operations through the Git plugin do not leverage the HTTP or HTTPS proxy defined in the BuildConfig . The Git plugin only uses the proxy configured in the Jenkins UI at the Plugin Manager panel. This proxy is then used for all git interactions within Jenkins, across all jobs. Additional resources You can find instructions on how to configure proxies through the Jenkins UI at JenkinsBehindProxy . 3.4.2. Source Clone Secrets Builder pods require access to any Git repositories defined as source for a build. Source clone secrets are used to provide the builder pod with access it would not normally have, such as private repositories or repositories with self-signed or untrusted SSL certificates. The following source clone secret configurations are supported: .gitconfig File Basic Authentication SSH Key Authentication Trusted Certificate Authorities Note You can also use combinations of these configurations to meet your specific needs. 3.4.2.1. Automatically adding a source clone secret to a build configuration When a BuildConfig is created, OpenShift Container Platform can automatically populate its source clone secret reference. This behavior allows the resulting builds to automatically use the credentials stored in the referenced secret to authenticate to a remote Git repository, without requiring further configuration. To use this functionality, a secret containing the Git repository credentials must exist in the namespace in which the BuildConfig is later created. This secret must include one or more annotations prefixed with build.openshift.io/source-secret-match-uri- . The value of each of these annotations is a Uniform Resource Identifier (URI) pattern, which is defined as follows. When a BuildConfig is created without a source clone secret reference and its Git source URI matches a URI pattern in a secret annotation, OpenShift Container Platform automatically inserts a reference to that secret in the BuildConfig . Prerequisites A URI pattern must consist of: A valid scheme: *:// , git:// , http:// , https:// or ssh:// A host: * or a valid hostname or IP address optionally preceded by *. A path: /* or / followed by any characters optionally including * characters In all of the above, a * character is interpreted as a wildcard. Important URI patterns must match Git source URIs which are conformant to RFC3986 . Do not include a username (or password) component in a URI pattern. For example, if you use ssh://[email protected]:7999/ATLASSIAN jira.git for a git repository URL, the source secret must be specified as ssh://bitbucket.atlassian.com:7999/* (and not ssh://[email protected]:7999/* ). $ oc annotate secret mysecret \ 'build.openshift.io/source-secret-match-uri-1=ssh://bitbucket.atlassian.com:7999/*' Procedure If multiple secrets match the Git URI of a particular BuildConfig , OpenShift Container Platform selects the secret with the longest match. This allows for basic overriding, as in the following example. The following fragment shows two partial source clone secrets, the first matching any server in the domain mycorp.com accessed by HTTPS, and the second overriding access to servers mydev1.mycorp.com and mydev2.mycorp.com : kind: Secret apiVersion: v1 metadata: name: matches-all-corporate-servers-https-only annotations: build.openshift.io/source-secret-match-uri-1: https://*.mycorp.com/* data: ... --- kind: Secret apiVersion: v1 metadata: name: override-for-my-dev-servers-https-only annotations: build.openshift.io/source-secret-match-uri-1: https://mydev1.mycorp.com/* build.openshift.io/source-secret-match-uri-2: https://mydev2.mycorp.com/* data: ... Add a build.openshift.io/source-secret-match-uri- annotation to a pre-existing secret using: $ oc annotate secret mysecret \ 'build.openshift.io/source-secret-match-uri-1=https://*.mycorp.com/*' 3.4.2.2. Manually adding a source clone secret Source clone secrets can be added manually to a build configuration by adding a sourceSecret field to the source section inside the BuildConfig and setting it to the name of the secret that you created. In this example, it is the basicsecret . apiVersion: "build.openshift.io/v1" kind: "BuildConfig" metadata: name: "sample-build" spec: output: to: kind: "ImageStreamTag" name: "sample-image:latest" source: git: uri: "https://github.com/user/app.git" sourceSecret: name: "basicsecret" strategy: sourceStrategy: from: kind: "ImageStreamTag" name: "python-33-centos7:latest" Procedure You can also use the oc set build-secret command to set the source clone secret on an existing build configuration. To set the source clone secret on an existing build configuration, enter the following command: $ oc set build-secret --source bc/sample-build basicsecret
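Either way, you can confirm which secret is set by reading the field back; a sketch against the sample-build example above, which should print basicsecret :

# Print the name of the source clone secret set on the build configuration
$ oc get bc/sample-build -o jsonpath='{.spec.source.sourceSecret.name}'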
Creating a secret from a .gitconfig file If the cloning of your application is dependent on a .gitconfig file, then you can create a secret that contains it. Add it to the builder service account and then your BuildConfig . Procedure To create a secret from a .gitconfig file: USD oc create secret generic <secret_name> --from-file=<path/to/.gitconfig> Note SSL verification can be turned off if sslVerify=false is set for the http section in your .gitconfig file: [http] sslVerify=false 3.4.2.4. Creating a secret from a .gitconfig file for secured Git If your Git server is secured with two-way SSL and a username and password, you must add the certificate files to your source build and add references to the certificate files in the .gitconfig file. Prerequisites You must have Git credentials. Procedure Add the certificate files to your source build and add references to the certificate files in the .gitconfig file. Add the client.crt , cacert.crt , and client.key files to the /var/run/secrets/openshift.io/source/ folder in the application source code. In the .gitconfig file for the server, add the [http] section shown in the following example: # cat .gitconfig Example output [user] name = <name> email = <email> [http] sslVerify = false sslCert = /var/run/secrets/openshift.io/source/client.crt sslKey = /var/run/secrets/openshift.io/source/client.key sslCaInfo = /var/run/secrets/openshift.io/source/cacert.crt Create the secret: USD oc create secret generic <secret_name> \ --from-literal=username=<user_name> \ 1 --from-literal=password=<password> \ 2 --from-file=.gitconfig=.gitconfig \ --from-file=client.crt=/var/run/secrets/openshift.io/source/client.crt \ --from-file=cacert.crt=/var/run/secrets/openshift.io/source/cacert.crt \ --from-file=client.key=/var/run/secrets/openshift.io/source/client.key 1 The user's Git user name. 2 The password for this user. Important To avoid having to enter your password again, be sure to specify the source-to-image (S2I) image in your builds. However, if you cannot clone the repository, you must still specify your user name and password to promote the build. Additional resources /var/run/secrets/openshift.io/source/ folder in the application source code. 3.4.2.5. Creating a secret from source code basic authentication Basic authentication requires either a combination of --username and --password , or a token to authenticate against the software configuration management (SCM) server. Prerequisites User name and password to access the private repository. Procedure Create the secret before using the --username and --password to access the private repository: USD oc create secret generic <secret_name> \ --from-literal=username=<user_name> \ --from-literal=password=<password> \ --type=kubernetes.io/basic-auth Create a basic authentication secret with a token: USD oc create secret generic <secret_name> \ --from-literal=password=<token> \ --type=kubernetes.io/basic-auth 3.4.2.6. Creating a secret from source code SSH key authentication SSH key based authentication requires a private SSH key. The repository keys are usually located in the USDHOME/.ssh/ directory, and are named id_dsa.pub , id_ecdsa.pub , id_ed25519.pub , or id_rsa.pub by default. Procedure Generate SSH key credentials: USD ssh-keygen -t ed25519 -C "your_email@example.com" Note Creating a passphrase for the SSH key prevents OpenShift Container Platform from building. When prompted for a passphrase, leave it blank.
Two files are created: the public key and a corresponding private key (one of id_dsa , id_ecdsa , id_ed25519 , or id_rsa ). With both of these in place, consult your source control management (SCM) system's manual on how to upload the public key. The private key is used to access your private repository. Before using the SSH key to access the private repository, create the secret: USD oc create secret generic <secret_name> \ --from-file=ssh-privatekey=<path/to/ssh/private/key> \ --from-file=<path/to/known_hosts> \ 1 --type=kubernetes.io/ssh-auth 1 Optional: Adding this field enables strict server host key checking. Warning Skipping the known_hosts file while creating the secret makes the build vulnerable to a potential man-in-the-middle (MITM) attack. Note Ensure that the known_hosts file includes an entry for the host of your source code. 3.4.2.7. Creating a secret from source code trusted certificate authorities The set of Transport Layer Security (TLS) certificate authorities (CA) that are trusted during a Git clone operation are built into the OpenShift Container Platform infrastructure images. If your Git server uses a self-signed certificate or one signed by an authority not trusted by the image, you can create a secret that contains the certificate or disable TLS verification. If you create a secret for the CA certificate, OpenShift Container Platform uses it to access your Git server during the Git clone operation. Using this method is significantly more secure than disabling Git SSL verification, which accepts any TLS certificate that is presented. Procedure Create a secret with a CA certificate file. If your CA uses Intermediate Certificate Authorities, combine the certificates for all CAs in a ca.crt file. Enter the following command: USD cat intermediateCA.crt intermediateCA.crt rootCA.crt > ca.crt Create the secret: USD oc create secret generic mycert --from-file=ca.crt=</path/to/file> 1 1 You must use the key name ca.crt . 3.4.2.8. Source secret combinations You can combine the different methods for creating source clone secrets for your specific needs. 3.4.2.8.1. Creating an SSH-based authentication secret with a .gitconfig file You can combine the different methods for creating source clone secrets for your specific needs, such as an SSH-based authentication secret with a .gitconfig file. Prerequisites SSH authentication .gitconfig file Procedure To create an SSH-based authentication secret with a .gitconfig file, run: USD oc create secret generic <secret_name> \ --from-file=ssh-privatekey=<path/to/ssh/private/key> \ --from-file=<path/to/.gitconfig> \ --type=kubernetes.io/ssh-auth 3.4.2.8.2. Creating a secret that combines a .gitconfig file and CA certificate You can combine the different methods for creating source clone secrets for your specific needs, such as a secret that combines a .gitconfig file and certificate authority (CA) certificate. Prerequisites .gitconfig file CA certificate Procedure To create a secret that combines a .gitconfig file and CA certificate, run: USD oc create secret generic <secret_name> \ --from-file=ca.crt=<path/to/certificate> \ --from-file=<path/to/.gitconfig> 3.4.2.8.3. Creating a basic authentication secret with a CA certificate You can combine the different methods for creating source clone secrets for your specific needs, such as a secret that combines basic authentication and a certificate authority (CA) certificate.
Prerequisites Basic authentication credentials CA certificate Procedure To create a basic authentication secret with a CA certificate, run: USD oc create secret generic <secret_name> \ --from-literal=username=<user_name> \ --from-literal=password=<password> \ --from-file=ca-cert=</path/to/file> \ --type=kubernetes.io/basic-auth 3.4.2.8.4. Creating a basic authentication secret with a .gitconfig file You can combine the different methods for creating source clone secrets for your specific needs, such as a secret that combines basic authentication and a .gitconfig file. Prerequisites Basic authentication credentials .gitconfig file Procedure To create a basic authentication secret with a .gitconfig file, run: USD oc create secret generic <secret_name> \ --from-literal=username=<user_name> \ --from-literal=password=<password> \ --from-file=</path/to/.gitconfig> \ --type=kubernetes.io/basic-auth 3.4.2.8.5. Creating a basic authentication secret with a .gitconfig file and CA certificate You can combine the different methods for creating source clone secrets for your specific needs, such as a secret that combines basic authentication, a .gitconfig file, and a certificate authority (CA) certificate. Prerequisites Basic authentication credentials .gitconfig file CA certificate Procedure To create a basic authentication secret with a .gitconfig file and CA certificate, run: USD oc create secret generic <secret_name> \ --from-literal=username=<user_name> \ --from-literal=password=<password> \ --from-file=</path/to/.gitconfig> \ --from-file=ca-cert=</path/to/file> \ --type=kubernetes.io/basic-auth 3.5. Binary (local) source Streaming content from a local file system to the builder is called a Binary type build. The corresponding value of BuildConfig.spec.source.type is Binary for these builds. This source type is unique in that it is leveraged solely based on your use of the oc start-build command. Note Binary type builds require content to be streamed from the local file system, so automatically triggering a binary type build, like an image change trigger, is not possible. This is because the binary files cannot be provided. Similarly, you cannot launch binary type builds from the web console. To use binary builds, invoke oc start-build with one of these options: --from-file : The contents of the file you specify are sent as a binary stream to the builder. You can also specify a URL to a file. Then, the builder stores the data in a file with the same name at the top of the build context. --from-dir and --from-repo : The contents are archived and sent as a binary stream to the builder. Then, the builder extracts the contents of the archive within the build context directory. With --from-dir , you can also specify a URL to an archive, which is extracted. --from-archive : The archive you specify is sent to the builder, where it is extracted within the build context directory. This option behaves the same as --from-dir ; an archive is created on your host first, whenever the argument to these options is a directory. In each of the previously listed cases: If your BuildConfig already has a Binary source type defined, it is effectively ignored and replaced by what the client sends. If your BuildConfig has a Git source type defined, it is dynamically disabled, since Binary and Git are mutually exclusive, and the data in the binary stream provided to the builder takes precedence. Instead of a file name, you can pass a URL with HTTP or HTTPS scheme to --from-file and --from-archive .
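As a quick illustration of the oc start-build options described above (a minimal sketch; the build configuration name sample-build is assumed to exist and the paths are illustrative):
USD oc start-build sample-build --from-dir=. --follow
USD oc start-build sample-build --from-file=./app.jar
In both cases the client streams the content to the builder, and any Binary or Git source already defined on the BuildConfig is overridden as described above.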
When using --from-file with a URL, the name of the file in the builder image is determined by the Content-Disposition header sent by the web server, or the last component of the URL path if the header is not present. No form of authentication is supported, and it is not possible to use a custom TLS certificate or to disable certificate validation. When using oc new-build --binary=true , the command ensures that the restrictions associated with binary builds are enforced. The resulting BuildConfig has a source type of Binary , meaning that the only valid way to run a build for this BuildConfig is to use oc start-build with one of the --from options to provide the requisite binary data. The Dockerfile and contextDir source options have special meaning with binary builds. Dockerfile can be used with any binary build source. If Dockerfile is used and the binary stream is an archive, its contents serve as a replacement Dockerfile to any Dockerfile in the archive. If Dockerfile is used with the --from-file argument, and the file argument is named Dockerfile, the value from Dockerfile replaces the value from the binary stream. In the case of the binary stream encapsulating extracted archive content, the value of the contextDir field is interpreted as a subdirectory within the archive, and, if valid, the builder changes into that subdirectory before executing the build. 3.6. Input secrets and config maps Important To prevent the contents of input secrets and config maps from appearing in build output container images, use build volumes in your Docker build and source-to-image build strategies. In some scenarios, build operations require credentials or other configuration data to access dependent resources, but it is undesirable for that information to be placed in source control. You can define input secrets and input config maps for this purpose. For example, when building a Java application with Maven, you can set up a private mirror of Maven Central or JCenter that is accessed by private keys. To download libraries from that private mirror, you have to supply the following: A settings.xml file configured with the mirror's URL and connection settings. A private key referenced in the settings file, such as ~/.ssh/id_rsa . For security reasons, you do not want to expose your credentials in the application image. This example describes a Java application, but you can use the same approach for adding SSL certificates into the /etc/ssl/certs directory, API keys or tokens, license files, and more. 3.6.1. What is a secret? The Secret object type provides a mechanism to hold sensitive information such as passwords, OpenShift Container Platform client configuration files, dockercfg files, private source repository credentials, and so on. Secrets decouple sensitive content from the pods. You can mount secrets into containers using a volume plugin, or the system can use secrets to perform actions on behalf of a pod. YAML Secret Object Definition apiVersion: v1 kind: Secret metadata: name: test-secret namespace: my-namespace type: Opaque 1 data: 2 username: <username> 3 password: <password> stringData: 4 hostname: myapp.mydomain.com 5 1 Indicates the structure of the secret's key names and values. 2 The allowable format for the keys in the data field must meet the guidelines in the DNS_SUBDOMAIN value in the Kubernetes identifiers glossary. 3 The value associated with keys in the data map must be base64 encoded.
4 Entries in the stringData map are converted to base64 and the entries are then moved to the data map automatically. This field is write-only. The value is only returned by the data field. 5 The value associated with keys in the stringData map is made up of plain text strings. 3.6.1.1. Properties of secrets Key properties include: Secret data can be referenced independently from its definition. Secret data volumes are backed by temporary file-storage facilities (tmpfs) and never come to rest on a node. Secret data can be shared within a namespace. 3.6.1.2. Types of Secrets The value in the type field indicates the structure of the secret's key names and values. The type can be used to enforce the presence of user names and keys in the secret object. If you do not want validation, use the opaque type, which is the default. Specify one of the following types to trigger minimal server-side validation to ensure the presence of specific key names in the secret data: kubernetes.io/service-account-token . Uses a service account token. kubernetes.io/dockercfg . Uses the .dockercfg file for required Docker credentials. kubernetes.io/dockerconfigjson . Uses the .docker/config.json file for required Docker credentials. kubernetes.io/basic-auth . Use with basic authentication. kubernetes.io/ssh-auth . Use with SSH key authentication. kubernetes.io/tls . Use with TLS certificate authorities. Specify type=Opaque if you do not want validation, which means the secret does not claim to conform to any convention for key names or values. An opaque secret allows for unstructured key:value pairs that can contain arbitrary values. Note You can specify other arbitrary types, such as example.com/my-secret-type . These types are not enforced server-side, but indicate that the creator of the secret intended to conform to the key/value requirements of that type. 3.6.1.3. Updates to secrets When you modify the value of a secret, the value used by an already running pod does not dynamically change. To change a secret, you must delete the original pod and create a new pod, in some cases with an identical PodSpec . Updating a secret follows the same workflow as deploying a new container image. You can use the kubectl rolling-update command. The resourceVersion value in a secret is not specified when it is referenced. Therefore, if a secret is updated at the same time as pods are starting, the version of the secret that is used for the pod is not defined. Note Currently, it is not possible to check the resource version of a secret object that was used when a pod was created. It is planned that pods report this information, so that a controller could restart ones using an old resourceVersion . In the interim, do not update the data of existing secrets, but create new ones with distinct names. 3.6.2. Creating secrets You must create a secret before creating the pods that depend on that secret. When creating secrets: Create a secret object with secret data. Update the pod service account to allow the reference to the secret. Create a pod, which consumes the secret as an environment variable or as a file using a secret volume.
Procedure Use the create command to create a secret object from a JSON or YAML file: USD oc create -f <filename> For example, you can create a secret from your local .docker/config.json file: USD oc create secret generic dockerhub \ --from-file=.dockerconfigjson=<path/to/.docker/config.json> \ --type=kubernetes.io/dockerconfigjson This command generates a JSON specification of the secret named dockerhub and creates the object. YAML Opaque Secret Object Definition apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque 1 data: username: <username> password: <password> 1 Specifies an opaque secret. Docker Configuration JSON File Secret Object Definition apiVersion: v1 kind: Secret metadata: name: aregistrykey namespace: myapps type: kubernetes.io/dockerconfigjson 1 data: .dockerconfigjson:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2 1 Specifies that the secret is using a docker configuration JSON file. 2 The base64-encoded content of the docker configuration JSON file. 3.6.3. Using secrets After creating secrets, you can create a pod to reference your secret, get logs, and delete the pod. Procedure Create the pod to reference your secret: USD oc create -f <your_yaml_file>.yaml Get the logs: USD oc logs secret-example-pod Delete the pod: USD oc delete pod secret-example-pod Additional resources Example YAML files with secret data: YAML Secret That Will Create Four Files apiVersion: v1 kind: Secret metadata: name: test-secret data: username: <username> 1 password: <password> 2 stringData: hostname: myapp.mydomain.com 3 secret.properties: |- 4 property1=valueA property2=valueB 1 File contains decoded values. 2 File contains decoded values. 3 File contains the provided string. 4 File contains the provided data. YAML of a pod populating files in a volume with secret data apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: containers: - name: secret-test-container image: busybox command: [ "/bin/sh", "-c", "cat /etc/secret-volume/*" ] volumeMounts: # name must match the volume name below - name: secret-volume mountPath: /etc/secret-volume readOnly: true volumes: - name: secret-volume secret: secretName: test-secret restartPolicy: Never YAML of a pod populating environment variables with secret data apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: containers: - name: secret-test-container image: busybox command: [ "/bin/sh", "-c", "export" ] env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: name: test-secret key: username restartPolicy: Never YAML of a Build Config Populating Environment Variables with Secret Data apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: name: test-secret key: username 3.6.4. Adding input secrets and config maps To provide credentials and other configuration data to a build without placing them in source control, you can define input secrets and input config maps. In some scenarios, build operations require credentials or other configuration data to access dependent resources; input secrets and input config maps make that information available without placing it in source control.
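Before wiring a secret into a build, you can verify its contents; because values under data are base64 encoded, decode them with standard tools. A quick sketch, using the test-secret and username key from the examples above:
USD oc get secret test-secret -o jsonpath='{.data.username}' | base64 --decode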
Procedure To add an input secret, a config map, or both to an existing BuildConfig object: Create the ConfigMap object, if it does not exist: USD oc create configmap settings-mvn \ --from-file=settings.xml=<path/to/settings.xml> This creates a new config map named settings-mvn , which contains the plain text content of the settings.xml file. Tip You can alternatively apply the following YAML to create the config map: apiVersion: v1 kind: ConfigMap metadata: name: settings-mvn data: settings.xml: | <settings> ... # Insert maven settings here </settings> Create the Secret object, if it does not exist: USD oc create secret generic secret-mvn \ --from-file=ssh-privatekey=<path/to/.ssh/id_rsa> --type=kubernetes.io/ssh-auth This creates a new secret named secret-mvn , which contains the base64 encoded content of the id_rsa private key. Tip You can alternatively apply the following YAML to create the input secret: apiVersion: v1 kind: Secret metadata: name: secret-mvn type: kubernetes.io/ssh-auth data: ssh-privatekey: | # Insert ssh private key, base64 encoded Add the config map and secret to the source section in the existing BuildConfig object: source: git: uri: https://github.com/wildfly/quickstart.git contextDir: helloworld configMaps: - configMap: name: settings-mvn secrets: - secret: name: secret-mvn To include the secret and config map in a new BuildConfig object, run the following command: USD oc new-build \ openshift/wildfly-101-centos7~https://github.com/wildfly/quickstart.git \ --context-dir helloworld --build-secret "secret-mvn" \ --build-config-map "settings-mvn" During the build, the settings.xml and id_rsa files are copied into the directory where the source code is located. In OpenShift Container Platform S2I builder images, this is the image working directory, which is set using the WORKDIR instruction in the Dockerfile . If you want to specify another directory, add a destinationDir to the definition: source: git: uri: https://github.com/wildfly/quickstart.git contextDir: helloworld configMaps: - configMap: name: settings-mvn destinationDir: ".m2" secrets: - secret: name: secret-mvn destinationDir: ".ssh" You can also specify the destination directory when creating a new BuildConfig object: USD oc new-build \ openshift/wildfly-101-centos7~https://github.com/wildfly/quickstart.git \ --context-dir helloworld --build-secret "secret-mvn:.ssh" \ --build-config-map "settings-mvn:.m2" In both cases, the settings.xml file is added to the ./.m2 directory of the build environment, and the id_rsa key is added to the ./.ssh directory. 3.6.5. Source-to-image strategy When using a Source strategy, all defined input secrets are copied to their respective destinationDir . If you left destinationDir empty, then the secrets are placed in the working directory of the builder image. The same rule is used when a destinationDir is a relative path. The secrets are placed in the paths that are relative to the working directory of the image. The final directory in the destinationDir path is created if it does not exist in the builder image. All preceding directories in the destinationDir must exist, or an error will occur. Note Input secrets are added as world-writable, have 0666 permissions, and are truncated to size zero after executing the assemble script. This means that the secret files exist in the resulting image, but they are empty for security reasons. Input config maps are not truncated after the assemble script completes. 3.6.6.
Docker strategy When using a docker strategy, you can add all defined input secrets into your container image using the ADD and COPY instructions in your Dockerfile. If you do not specify the destinationDir for a secret, then the files are copied into the same directory in which the Dockerfile is located. If you specify a relative path as destinationDir , then the secrets are copied into that directory, relative to your Dockerfile location. This makes the secret files available to the Docker build operation as part of the context directory used during the build. Example of a Dockerfile referencing secret and config map data Important Users normally remove their input secrets from the final application image so that the secrets are not present in the container running from that image. However, the secrets still exist in the image itself in the layer where they were added. This removal is part of the Dockerfile itself. To prevent the contents of input secrets and config maps from appearing in the build output container images and avoid this removal process altogether, use build volumes in your Docker build strategy instead. 3.6.7. Custom strategy When using a Custom strategy, all the defined input secrets and config maps are available in the builder container in the /var/run/secrets/openshift.io/build directory. The custom build image must use these secrets and config maps appropriately. With the Custom strategy, you can define secrets as described in Custom strategy options. There is no technical difference between existing strategy secrets and the input secrets. However, your builder image can distinguish between them and use them differently, based on your build use case. The input secrets are always mounted into the /var/run/secrets/openshift.io/build directory, or your builder can parse the USDBUILD environment variable, which includes the full build object. Important If a pull secret for the registry exists in both the namespace and the node, builds default to using the pull secret in the namespace. 3.7. External artifacts It is not recommended to store binary files in a source repository. Therefore, you must define a build which pulls additional files, such as Java .jar dependencies, during the build process. How this is done depends on the build strategy you are using. For a Source build strategy, you must put appropriate shell commands into the assemble script: .s2i/bin/assemble File #!/bin/sh APP_VERSION=1.0 wget http://repository.example.com/app/app-USDAPP_VERSION.jar -O app.jar .s2i/bin/run File #!/bin/sh exec java -jar app.jar For a Docker build strategy, you must modify the Dockerfile and invoke shell commands with the RUN instruction: Excerpt of Dockerfile FROM jboss/base-jdk:8 ENV APP_VERSION 1.0 RUN wget http://repository.example.com/app/app-USDAPP_VERSION.jar -O app.jar EXPOSE 8080 CMD [ "java", "-jar", "app.jar" ] In practice, you may want to use an environment variable for the file location so that the specific file to be downloaded can be customized using an environment variable defined on the BuildConfig , rather than updating the Dockerfile or assemble script. You can choose between different methods of defining environment variables: Using the .s2i/environment file (only for a Source build strategy) Setting in BuildConfig Providing explicitly using oc start-build --env (only for builds that are triggered manually)
3.8. Using docker credentials for private registries You can supply builds with a .docker/config.json file with valid credentials for private container registries. This allows you to push the output image into a private container image registry or pull a builder image from the private container image registry that requires authentication. You can supply credentials for multiple repositories within the same registry, each with credentials specific to that registry path. Note For the OpenShift Container Platform container image registry, this is not required because secrets are generated automatically for you by OpenShift Container Platform. The .docker/config.json file is found in your home directory by default and has the following format: auths: index.docker.io/v1/: 1 auth: "YWRfbGzhcGU6R2labnRib21ifTE=" 2 email: "user@example.com" 3 docker.io/my-namespace/my-user/my-image: 4 auth: "GzhYWRGU6R2fbclabnRgbkSp=" email: "user@example.com" docker.io/my-namespace: 5 auth: "GzhYWRGU6R2deesfrRgbkSp=" email: "user@example.com" 1 URL of the registry. 2 Encrypted password. 3 Email address for the login. 4 URL and credentials for a specific image in a namespace. 5 URL and credentials for a registry namespace. You can define multiple container image registries or define multiple repositories in the same registry. Alternatively, you can also add authentication entries to this file by running the docker login command. The file will be created if it does not exist. Kubernetes provides Secret objects, which can be used to store configuration and passwords. Prerequisites You must have a .docker/config.json file. Procedure Create the secret from your local .docker/config.json file: USD oc create secret generic dockerhub \ --from-file=.dockerconfigjson=<path/to/.docker/config.json> \ --type=kubernetes.io/dockerconfigjson This generates a JSON specification of the secret named dockerhub and creates the object. Add a pushSecret field into the output section of the BuildConfig and set it to the name of the secret that you created, which in the example is dockerhub : spec: output: to: kind: "DockerImage" name: "private.registry.com/org/private-image:latest" pushSecret: name: "dockerhub" You can use the oc set build-secret command to set the push secret on the build configuration: USD oc set build-secret --push bc/sample-build dockerhub You can also link the push secret to the service account used by the build instead of specifying the pushSecret field. By default, builds use the builder service account. The push secret is automatically added to the build if the secret contains a credential that matches the repository hosting the build's output image. USD oc secrets link builder dockerhub Pull the builder container image from a private container image registry by specifying the pullSecret field, which is part of the build strategy definition: strategy: sourceStrategy: from: kind: "DockerImage" name: "docker.io/user/private_repository" pullSecret: name: "dockerhub" You can use the oc set build-secret command to set the pull secret on the build configuration: USD oc set build-secret --pull bc/sample-build dockerhub Note This example uses pullSecret in a Source build, but it is also applicable in Docker and Custom builds. You can also link the pull secret to the service account used by the build instead of specifying the pullSecret field. By default, builds use the builder service account. The pull secret is automatically added to the build if the secret contains a credential that matches the repository hosting the build's input image.
To link the pull secret to the service account used by the build instead of specifying the pullSecret field, run: USD oc secrets link builder dockerhub Note You must specify a from image in the BuildConfig spec to take advantage of this feature. Docker strategy builds generated by oc new-build or oc new-app may not do this in some situations. 3.9. Build environments As with pod environment variables, build environment variables can be defined in terms of references to other resources or variables using the Downward API. There are some exceptions, which are noted. You can also manage environment variables defined in the BuildConfig with the oc set env command. Note Referencing container resources using valueFrom in build environment variables is not supported as the references are resolved before the container is created. 3.9.1. Using build fields as environment variables You can inject information about the build object by setting the fieldPath environment variable source to the JsonPath of the field from which you are interested in obtaining the value. Note Jenkins Pipeline strategy does not support valueFrom syntax for environment variables. Procedure Set the fieldPath environment variable source to the JsonPath of the field from which you are interested in obtaining the value: env: - name: FIELDREF_ENV valueFrom: fieldRef: fieldPath: metadata.name 3.9.2. Using secrets as environment variables You can make key values from secrets available as environment variables using the valueFrom syntax. Important This method shows the secrets as plain text in the output of the build pod console. To avoid this, use input secrets and config maps instead. Procedure To use a secret as an environment variable, set the valueFrom syntax: apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: MYVAL valueFrom: secretKeyRef: key: myval name: mysecret Additional resources Input secrets and config maps 3.10. Service serving certificate secrets Service serving certificate secrets are intended to support complex middleware applications that need out-of-the-box certificates. It has the same settings as the server certificates generated by the administrator tooling for nodes and masters. Procedure To secure communication to your service, have the cluster generate a signed serving certificate/key pair into a secret in your namespace. Set the service.beta.openshift.io/serving-cert-secret-name annotation on your service with the value set to the name you want to use for your secret. Then, your PodSpec can mount that secret. When it is available, your pod runs. The certificate is good for the internal service DNS name, <service.name>.<service.namespace>.svc . The certificate and key are in PEM format, stored in tls.crt and tls.key respectively. The certificate/key pair is automatically replaced when it gets close to expiration. View the expiration date in the service.beta.openshift.io/expiry annotation on the secret, which is in RFC3339 format. Note In most cases, the service DNS name <service.name>.<service.namespace>.svc is not externally routable. The primary use of <service.name>.<service.namespace>.svc is for intracluster or intraservice communication, and with re-encrypt routes. Other pods can trust cluster-created certificates, which are only signed for internal DNS names, by using the certificate authority (CA) bundle in the /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt file that is automatically mounted in their pod. 
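For example, a minimal sketch of requesting a serving certificate for a service named my-service (the service and secret names are illustrative):
USD oc annotate service my-service \
    service.beta.openshift.io/serving-cert-secret-name=my-service-tls
Once the cluster generates the my-service-tls secret, a PodSpec in the same namespace can mount it to obtain the tls.crt and tls.key files described above.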
The signature algorithm for this feature is x509.SHA256WithRSA . To manually rotate, delete the generated secret. A new certificate is created. 3.11. Secrets restrictions To use a secret, a pod needs to reference the secret. A secret can be used with a pod in three ways: To populate environment variables for containers. As files in a volume mounted on one or more of its containers. By kubelet when pulling images for the pod. Volume type secrets write data into the container as a file using the volume mechanism. imagePullSecrets use service accounts for the automatic injection of the secret into all pods in a namespace. Note To create secrets that store image pull information for use with the imagePullSecrets object, you cannot use the {serviceaccount-name}-dockercfg pattern. When this pattern is used, the openshift-controller-manager does not create a token or pull secret for that service account. When a template contains a secret definition, the only way for the template to use the provided secret is to ensure that the secret volume sources are validated and that the specified object reference actually points to an object of type Secret . Therefore, a secret needs to be created before any pods that depend on it. The most effective way to ensure this is to have it injected automatically through the use of a service account. Secret API objects reside in a namespace. They can only be referenced by pods in that same namespace. Individual secrets are limited to 1MB in size. This is to discourage the creation of large secrets that would exhaust apiserver and kubelet memory. However, creation of a number of smaller secrets could also exhaust memory.
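To illustrate the image pull usage listed above, a sketch of a pod that references an image pull secret (the pod, image, and secret names are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: private-image-pod
spec:
  containers:
  - name: app
    image: private.registry.com/org/private-image:latest
  imagePullSecrets:
  - name: dockerhub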
[ "source: git: uri: https://github.com/openshift/ruby-hello-world.git 1 ref: \"master\" images: - from: kind: ImageStreamTag name: myinputimage:latest namespace: mynamespace paths: - destinationDir: app/dir/injected/dir 2 sourcePath: /usr/lib/somefile.jar contextDir: \"app/dir\" 3 dockerfile: \"FROM centos:7\\nRUN yum install -y httpd\" 4", "source: dockerfile: \"FROM centos:7\\nRUN yum install -y httpd\" 1", "source: git: uri: https://github.com/openshift/ruby-hello-world.git ref: \"master\" images: 1 - from: 2 kind: ImageStreamTag name: myinputimage:latest namespace: mynamespace paths: 3 - destinationDir: injected/dir 4 sourcePath: /usr/lib/somefile.jar 5 - from: kind: ImageStreamTag name: myotherinputimage:latest namespace: myothernamespace pullSecret: mysecret 6 paths: - destinationDir: injected/dir sourcePath: /usr/lib/somefile.jar", "oc secrets link builder dockerhub", "source: git: 1 uri: \"https://github.com/openshift/ruby-hello-world\" ref: \"master\" contextDir: \"app/dir\" 2 dockerfile: \"FROM openshift/ruby-22-centos7\\nUSER example\" 3", "source: git: uri: \"https://github.com/openshift/ruby-hello-world\" ref: \"master\" httpProxy: http://proxy.example.com httpsProxy: https://proxy.example.com noProxy: somedomain.com, otherdomain.com", "oc annotate secret mysecret 'build.openshift.io/source-secret-match-uri-1=ssh://bitbucket.atlassian.com:7999/*'", "kind: Secret apiVersion: v1 metadata: name: matches-all-corporate-servers-https-only annotations: build.openshift.io/source-secret-match-uri-1: https://*.mycorp.com/* data: --- kind: Secret apiVersion: v1 metadata: name: override-for-my-dev-servers-https-only annotations: build.openshift.io/source-secret-match-uri-1: https://mydev1.mycorp.com/* build.openshift.io/source-secret-match-uri-2: https://mydev2.mycorp.com/* data:", "oc annotate secret mysecret 'build.openshift.io/source-secret-match-uri-1=https://*.mycorp.com/*'", "apiVersion: \"build.openshift.io/v1\" kind: \"BuildConfig\" metadata: name: \"sample-build\" spec: output: to: kind: \"ImageStreamTag\" name: \"sample-image:latest\" source: git: uri: \"https://github.com/user/app.git\" sourceSecret: name: \"basicsecret\" strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"python-33-centos7:latest\"", "oc set build-secret --source bc/sample-build basicsecret", "oc create secret generic <secret_name> --from-file=<path/to/.gitconfig>", "[http] sslVerify=false", "cat .gitconfig", "[user] name = <name> email = <email> [http] sslVerify = false sslCert = /var/run/secrets/openshift.io/source/client.crt sslKey = /var/run/secrets/openshift.io/source/client.key sslCaInfo = /var/run/secrets/openshift.io/source/cacert.crt", "oc create secret generic <secret_name> --from-literal=username=<user_name> \\ 1 --from-literal=password=<password> \\ 2 --from-file=.gitconfig=.gitconfig --from-file=client.crt=/var/run/secrets/openshift.io/source/client.crt --from-file=cacert.crt=/var/run/secrets/openshift.io/source/cacert.crt --from-file=client.key=/var/run/secrets/openshift.io/source/client.key", "oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --type=kubernetes.io/basic-auth", "oc create secret generic <secret_name> --from-literal=password=<token> --type=kubernetes.io/basic-auth", "ssh-keygen -t ed25519 -C \"[email protected]\"", "oc create secret generic <secret_name> --from-file=ssh-privatekey=<path/to/ssh/private/key> --from-file=<path/to/known_hosts> \\ 1 --type=kubernetes.io/ssh-auth", "cat intermediateCA.crt 
intermediateCA.crt rootCA.crt > ca.crt", "oc create secret generic mycert --from-file=ca.crt=</path/to/file> 1", "oc create secret generic <secret_name> --from-file=ssh-privatekey=<path/to/ssh/private/key> --from-file=<path/to/.gitconfig> --type=kubernetes.io/ssh-auth", "oc create secret generic <secret_name> --from-file=ca.crt=<path/to/certificate> --from-file=<path/to/.gitconfig>", "oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --from-file=ca-cert=</path/to/file> --type=kubernetes.io/basic-auth", "oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --from-file=</path/to/.gitconfig> --type=kubernetes.io/basic-auth", "oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --from-file=</path/to/.gitconfig> --from-file=ca-cert=</path/to/file> --type=kubernetes.io/basic-auth", "apiVersion: v1 kind: Secret metadata: name: test-secret namespace: my-namespace type: Opaque 1 data: 2 username: <username> 3 password: <password> stringData: 4 hostname: myapp.mydomain.com 5", "oc create -f <filename>", "oc create secret generic dockerhub --from-file=.dockerconfigjson=<path/to/.docker/config.json> --type=kubernetes.io/dockerconfigjson", "apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque 1 data: username: <username> password: <password>", "apiVersion: v1 kind: Secret metadata: name: aregistrykey namespace: myapps type: kubernetes.io/dockerconfigjson 1 data: .dockerconfigjson:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2", "oc create -f <your_yaml_file>.yaml", "oc logs secret-example-pod", "oc delete pod secret-example-pod", "apiVersion: v1 kind: Secret metadata: name: test-secret data: username: <username> 1 password: <password> 2 stringData: hostname: myapp.mydomain.com 3 secret.properties: |- 4 property1=valueA property2=valueB", "apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"cat /etc/secret-volume/*\" ] volumeMounts: # name must match the volume name below - name: secret-volume mountPath: /etc/secret-volume readOnly: true volumes: - name: secret-volume secret: secretName: test-secret restartPolicy: Never", "apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"export\" ] env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: name: test-secret key: username restartPolicy: Never", "apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: name: test-secret key: username", "oc create configmap settings-mvn --from-file=settings.xml=<path/to/settings.xml>", "apiVersion: core/v1 kind: ConfigMap metadata: name: settings-mvn data: settings.xml: | <settings> ... 
# Insert maven settings here </settings>", "oc create secret generic secret-mvn --from-file=ssh-privatekey=<path/to/.ssh/id_rsa> --type=kubernetes.io/ssh-auth", "apiVersion: core/v1 kind: Secret metadata: name: secret-mvn type: kubernetes.io/ssh-auth data: ssh-privatekey: | # Insert ssh private key, base64 encoded", "source: git: uri: https://github.com/wildfly/quickstart.git contextDir: helloworld configMaps: - configMap: name: settings-mvn secrets: - secret: name: secret-mvn", "oc new-build openshift/wildfly-101-centos7~https://github.com/wildfly/quickstart.git --context-dir helloworld --build-secret \"secret-mvn\" --build-config-map \"settings-mvn\"", "source: git: uri: https://github.com/wildfly/quickstart.git contextDir: helloworld configMaps: - configMap: name: settings-mvn destinationDir: \".m2\" secrets: - secret: name: secret-mvn destinationDir: \".ssh\"", "oc new-build openshift/wildfly-101-centos7~https://github.com/wildfly/quickstart.git --context-dir helloworld --build-secret \"secret-mvn:.ssh\" --build-config-map \"settings-mvn:.m2\"", "FROM centos/ruby-22-centos7 USER root COPY ./secret-dir /secrets COPY ./config / Create a shell script that will output secrets and ConfigMaps when the image is run RUN echo '#!/bin/sh' > /input_report.sh RUN echo '(test -f /secrets/secret1 && echo -n \"secret1=\" && cat /secrets/secret1)' >> /input_report.sh RUN echo '(test -f /config && echo -n \"relative-configMap=\" && cat /config)' >> /input_report.sh RUN chmod 755 /input_report.sh CMD [\"/bin/sh\", \"-c\", \"/input_report.sh\"]", "#!/bin/sh APP_VERSION=1.0 wget http://repository.example.com/app/app-USDAPP_VERSION.jar -O app.jar", "#!/bin/sh exec java -jar app.jar", "FROM jboss/base-jdk:8 ENV APP_VERSION 1.0 RUN wget http://repository.example.com/app/app-USDAPP_VERSION.jar -O app.jar EXPOSE 8080 CMD [ \"java\", \"-jar\", \"app.jar\" ]", "auths: index.docker.io/v1/: 1 auth: \"YWRfbGzhcGU6R2labnRib21ifTE=\" 2 email: \"[email protected]\" 3 docker.io/my-namespace/my-user/my-image: 4 auth: \"GzhYWRGU6R2fbclabnRgbkSp=\"\" email: \"[email protected]\" docker.io/my-namespace: 5 auth: \"GzhYWRGU6R2deesfrRgbkSp=\"\" email: \"[email protected]\"", "oc create secret generic dockerhub --from-file=.dockerconfigjson=<path/to/.docker/config.json> --type=kubernetes.io/dockerconfigjson", "spec: output: to: kind: \"DockerImage\" name: \"private.registry.com/org/private-image:latest\" pushSecret: name: \"dockerhub\"", "oc set build-secret --push bc/sample-build dockerhub", "oc secrets link builder dockerhub", "strategy: sourceStrategy: from: kind: \"DockerImage\" name: \"docker.io/user/private_repository\" pullSecret: name: \"dockerhub\"", "oc set build-secret --pull bc/sample-build dockerhub", "oc secrets link builder dockerhub", "env: - name: FIELDREF_ENV valueFrom: fieldRef: fieldPath: metadata.name", "apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: MYVAL valueFrom: secretKeyRef: key: myval name: mysecret" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/builds_using_buildconfig/creating-build-inputs
1.4. SysV Init Runlevels
1.4. SysV Init Runlevels The SysV init runlevel system provides a standard process for controlling which programs init launches or halts when initializing a runlevel. SysV init was chosen because it is easier to use and more flexible than the traditional BSD-style init process. The configuration files for SysV init are located in the /etc/rc.d/ directory. Within this directory are the rc , rc.local , rc.sysinit , and, optionally, the rc.serial scripts, as well as the following directories: The init.d/ directory contains the scripts used by the /sbin/init command when controlling services. The numbered directories represent the six runlevels configured by default under Red Hat Enterprise Linux. 1.4.1. Runlevels SysV init runlevels revolve around the idea that different systems can be used in different ways. For example, a server runs more efficiently without the drag on system resources created by the X Window System. There may also be times when a system administrator needs to operate the system at a lower runlevel to perform diagnostic tasks, like fixing disk corruption in runlevel 1. The characteristics of a given runlevel determine which services are halted and started by init . For instance, runlevel 1 (single user mode) halts any network services, while runlevel 3 starts these services. By assigning specific services to be halted or started on a given runlevel, init can quickly change the mode of the machine without the user manually stopping and starting services. The following runlevels are defined by default under Red Hat Enterprise Linux: 0 - Halt 1 - Single-user text mode 2 - Not used (user-definable) 3 - Full multi-user text mode 4 - Not used (user-definable) 5 - Full multi-user graphical mode (with an X-based login screen) 6 - Reboot In general, users operate Red Hat Enterprise Linux at runlevel 3 or runlevel 5 - both full multi-user modes. Users sometimes customize runlevels 2 and 4 to meet specific needs, since they are not used. The default runlevel for the system is listed in /etc/inittab . To find out the default runlevel for a system, look for a line similar to the following near the top of /etc/inittab : The default runlevel listed in this example is five, as the number after the first colon indicates. To change it, edit /etc/inittab as root. Warning Be very careful when editing /etc/inittab . Simple typos can cause the system to become unbootable. If this happens, either use a boot diskette, enter single-user mode, or enter rescue mode to boot the computer and repair the file. For more information on single-user and rescue mode, refer to the chapter titled Basic System Recovery in the System Administrators Guide . It is possible to change the default runlevel at boot time by modifying the arguments passed by the boot loader to the kernel. For information on changing the runlevel at boot time, refer to Section 2.8, "Changing Runlevels at Boot Time" .
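As a brief illustration of the concepts above (a sketch; the reported values vary between systems), the runlevel command prints the previous and current runlevel, and telinit switches to another one as root:
/sbin/runlevel
N 5
/sbin/telinit 3
Here N means there is no previous runlevel, 5 is the current runlevel, and the telinit command moves the system to runlevel 3.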
[ "init.d/ rc0.d/ rc1.d/ rc2.d/ rc3.d/ rc4.d/ rc5.d/ rc6.d/", "id:5:initdefault:" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-boot-init-shutdown-sysv
Chapter 5. Directory Entry Schema Reference
Chapter 5. Directory Entry Schema Reference 5.1. About Directory Server Schema This chapter provides an overview of some of the basic concepts of the directory schema and lists the files in which the schema is described. It describes object classes, attributes, and object identifiers (OIDs) and briefly discusses extending server schema and schema checking. 5.1.1. Schema Definitions The directory schema is a set of rules that defines how data can be stored in the directory. Directory information is stored in discrete entries, and each entry is comprised of a set of attributes and their values. The kind of identity being described in the entry is defined in the entry's object classes. An object class specifies the kind of object the entry describes through the defined set of attributes for the object class. Basically, the schema files are lists of the kinds of entries that can be created (the object classes ) and the ways that those entries can be described (the attributes ). The schema defines what the object classes and attributes are. The schema also defines the format of the attribute values (the attribute's syntax ) and whether there can only be a single instance of that attribute. Additional schema files can be added to the Directory Server configuration and loaded in the server, so the schema is customizable and can be extended as required. For more detailed information about object classes, attributes, and how the Directory Server uses the schema, see the Deployment Guide . Warning The Directory Server fails to start if the schema definitions contain too few or too many characters. Use exactly one space in those places where the LDAP standards allow the use of zero or many spaces; for example, the place between the NAME keyword and the name of an attribute type. 5.1.1.1. Object Classes In LDAP, an object class defines the set of attributes that can be used to define an entry. The LDAP standard provides object classes for many common types of entries, such as people ( person and inetOrgPerson ), groups ( groupOfUniqueNames ), locations ( locality ), organizations and divisions ( organization and organizationalUnit ), and equipment ( device ). In a schema file, an object class is identified by the objectclasses line, followed by its OID, name, a description, its direct superior object class (an object class which is required to be used in conjunction with the object class and which shares its attributes with this object class), and the list of required ( MUST ) and allowed ( MAY ) attributes. This is shown in Example 5.1, "person Object Class Schema Entry" . Example 5.1. person Object Class Schema Entry 5.1.1.1.1. Required and Allowed Attributes Every object class defines a number of required attributes and allowed attributes. Required attributes must be present in entries using the specified object class, while allowed attributes are permissible and available for the entry to use, but are not required for the entry to be valid. As in Example 5.1, "person Object Class Schema Entry" , the person object class requires the cn , sn , and objectClass attributes and allows the description , seeAlso , telephoneNumber , and userPassword attributes. Note All entries require the objectClass attribute, which lists the object classes assigned to the entry. 5.1.1.1.2. Object Class Inheritance An entry can have more than one object class.
For example, the entry for a person is defined by the person object class, but the same person may also be described by attributes in the inetOrgPerson and organizationalPerson object classes. Additionally, object classes can be hierarchical. An object class can inherit attributes from another class, in addition to its own required and allowed attributes. The second object class is the superior object class of the first. The server's object class structure determines the list of required and allowed attributes for a particular entry. For example, a user's entry has to have the inetOrgPerson object class. In that case, the entry must also include the superior object class for inetOrgPerson , organizationalPerson , and the superior object class for organizationalPerson , which is person : When the inetOrgPerson object class is assigned to an entry, the entry automatically inherits the required and allowed attributes from the superior object classes. 5.1.1.2. Attributes Directory entries are composed of attributes and their values. These pairs are called attribute-value assertions or AVAs. Any piece of information in the directory is associated with a descriptive attribute. For instance, the cn attribute is used to store a person's full name, such as cn: John Smith . Additional attributes can supply additional information about John Smith: In a schema file, an attribute is identified by the attributetypes line, followed by its OID, name, a description, syntax (allowed format for its value), optionally whether the attribute is single- or multi-valued, and where the attribute is defined. This is shown in Example 5.2, "description Attribute Schema Entry" . Example 5.2. description Attribute Schema Entry Some attributes can be abbreviated. These abbreviations are listed as part of the attribute definition: 5.1.1.2.1. Directory Server Attribute Syntaxes The attribute's syntax defines the format of the values which the attribute allows; as with other schema elements, the syntax is defined for an attribute using the syntax's OID in the schema file entry. In the Directory Server Console, the syntax is referenced by its friendly name. The Directory Server uses the attribute's syntax to perform sorting and pattern matching on entries. For more information about LDAP attribute syntaxes, see RFC 4517 . Table 5.1. Supported LDAP Attribute Syntaxes Name OID Definition Binary 1.3.6.1.4.1.1466.115.121.1.5 Deprecated. Use Octet string instead. Bit String 1.3.6.1.4.1.1466.115.121.1.6 For values which are bitstrings, such as '0101111101'B . Boolean 1.3.6.1.4.1.1466.115.121.1.7 For attributes with only two allowed values, TRUE or FALSE. Country String 1.3.6.1.4.1.1466.115.121.1.11 For values which are limited to exactly two printable string characters; for example, US for the United States. DN 1.3.6.1.4.1.1466.115.121.1.12 For values which are distinguished names (DNs). Delivery Method 1.3.6.1.4.1.1466.115.121.1.14 For values which contain a preferred method of delivering information or contacting an entity. The different values are separated by a dollar sign (USD). For example: telephone USD physical Directory String 1.3.6.1.4.1.1466.115.121.1.15 For values which are valid UTF-8 strings. These values are not necessarily case-insensitive. Both case-sensitive and case-insensitive matching rules are available for Directory String and related syntaxes.
Enhanced Guide 1.3.6.1.4.1.1466.115.121.1.21 For values which contain complex search parameters based on attributes and filters. Facsimile 1.3.6.1.4.1.1466.115.121.1.22 For values which contain fax numbers. Fax 1.3.6.1.4.1.1466.115.121.1.23 For values which contain the images of transmitted faxes. Generalized Time 1.3.6.1.4.1.1466.115.121.1.24 For values which are encoded as printable strings. The time zone must be specified. It is strongly recommended to use GMT time. Guide 1.3.6.1.4.1.1466.115.121.1.25 Obsolete. For values which contain complex search parameters based on attributes and filters. IA5 String 1.3.6.1.4.1.1466.115.121.1.26 For values which are valid strings. These values are not necessarily case-insensitive. Both case-sensitive and case-insensitive matching rules are available for IA5 String and related syntaxes. Integer 1.3.6.1.4.1.1466.115.121.1.27 For values which are whole numbers. JPEG 1.3.6.1.4.1.1466.115.121.1.28 For values which contain image data. Name and Optional UID 1.3.6.1.4.1.1466.115.121.1.34 For values which contain a combination value of a DN and (optional) unique ID. Numeric String 1.3.6.1.4.1.1466.115.121.1.36 For values which contain a string of both numerals and spaces. OctetString 1.3.6.1.4.1.1466.115.121.1.40 For values which are binary; this replaces the binary syntax. Object Class Description 1.3.6.1.4.1.1466.115.121.1.37 For values which contain object class definitions. OID 1.3.6.1.4.1.1466.115.121.1.38 For values which contain OID definitions. Postal Address 1.3.6.1.4.1.1466.115.121.1.41 For values which are encoded in the format postal-address = dstring * ("USD" dstring ) . For example: 1234 Main St.USDRaleigh, NC 12345USDUSA Each dstring component is encoded as a DirectoryString value. Backslashes and dollar characters, if they occur, are quoted, so that they will not be mistaken for line delimiters. Many servers limit the postal address to 6 lines of up to thirty characters. Printable String 1.3.6.1.4.1.1466.115.121.1.44 For values which contain printable strings. Space-Insensitive String 2.16.840.1.113730.3.7.1 For values which contain space-insensitive strings. TelephoneNumber 1.3.6.1.4.1.1466.115.121.1.50 For values which are in the form of telephone numbers. It is recommended to use telephone numbers in international form. Teletex Terminal Identifier 1.3.6.1.4.1.1466.115.121.1.51 For values which contain an international telephone number. Telex Number 1.3.6.1.4.1.1466.115.121.1.52 For values which contain a telex number, country code, and answerback code of a telex terminal. URI For values in the form of a URL, introduced by a string such as http:// , https:// , ftp:// , ldap:// , and ldaps:// . The URI has the same behavior as IA5 String. See RFC 4517 for more information on this syntax. 5.1.1.2.2. Single- and Multi-Valued Attributes By default, most attributes are multi-valued. This means that an entry can contain the same attribute multiple times, with different values. For example: The cn , tel , and objectclass attributes, for example, all can have more than one value. Attributes that are single-valued - that is, only one instance of the attribute can be specified - are specified in the schema as only allowing a single value. For example, uidNumber can only have one possible value, so its schema entry has the term SINGLE-VALUE . If the attribute is multi-valued, there is no value expression. 5.1.2.
5.1.2. Default Directory Server Schema Files Template schema definitions for Directory Server are stored in the /etc/dirsrv/schema directory. These default schema files are used to generate the schema files for new Directory Server instances. Each server instance has its own instance-specific schema directory in /etc/dirsrv/slapd-instance/schema. The schema files in the instance directory are used only by that instance. To modify the directory schema, create new attributes and new object classes in the instance-specific schema directory. Because the default schema is used for creating new instances and each individual instance has its own schema files, it is possible to have slightly different schema for each instance, matching the use of each instance. Any custom attributes added using the Directory Server Console or LDAP commands are stored in the 99user.ldif file; other custom schema files can be added to the /etc/dirsrv/slapd-instance/schema directory for each instance. Do not make any modifications to the standard files that come with Red Hat Directory Server. For more information about how the Directory Server stores information and suggestions for planning directory schema, see the Deployment Guide. Table 5.2. Schema Files Schema File Purpose 00core.ldif Recommended core schema from the X.500 and LDAP standards (RFCs). This schema is used by the Directory Server itself for the instance configuration and to start the server instance. 01core389.ldif Recommended core schema from the X.500 and LDAP standards (RFCs). This schema is used by the Directory Server itself for the instance configuration and to start the server instance. 02common.ldif Standard-related schema from RFC 2256, LDAPv3, and standard schema defined by Directory Server which is used to configure entries. 05rfc2927.ldif Schema from RFC 2927, "MIME Directory Profile for LDAP Schema." 05rfc4523.ldif Schema definitions for X.509 certificates. 05rfc4524.ldif Cosine LDAP/X.500 schema. 06inetorgperson.ldif inetOrgPerson schema elements from RFC 2798, RFC 2079, and part of RFC 1274. 10rfc2307.ldif Schema from RFC 2307, "An Approach for Using LDAP as a Network Information Service." 20subscriber.ldif Common schema element for Directory Server-Nortel subscriber interoperability. 25java-object.ldif Schema from RFC 2713, "Schema for Representing Java Objects in an LDAP Directory." 28pilot.ldif Schema from the pilot RFCs, especially RFC 1274, that are no longer recommended for use in new deployments. 30ns-common.ldif Common schema. 50ns-admin.ldif Schemas used by the Administration Server. 50ns-certificate.ldif Schemas used by Red Hat Certificate System. 50ns-directory.ldif Schema used by legacy Directory Server 4.x servers. 50ns-mail.ldif Schema for mail servers. 50ns-value.ldif Schema for value items in Directory Server. 50ns-web.ldif Schema for web servers. 60autofs.ldif Object classes for automount configuration; this is one of several schema files used for NIS servers. 60eduperson.ldif Schema elements for education-related people and organization entries. 60mozilla.ldif Schema elements for Mozilla-related user profiles. 60nss-ldap.ldif Schema elements for GSS-API service names. 60pam-plugin.ldif Schema elements for integrating directory services with PAM modules. 60pureftpd.ldif Schema elements for defining FTP user accounts. 60rfc2739.ldif Schema elements for calendars and vCard properties. 60rfc3712.ldif Schema elements for configuring printers. 60sabayon.ldif Schema elements for defining sabayon user entries.
60sudo.ldif Schema elements for defining sudo users and roles. 60trust.ldif Schema elements for defining trust relationships for NSS or PAM. 99user.ldif Custom schema elements added through the Directory Server Console. 5.1.3. Object Identifiers (OIDs) All schema elements have object identifiers (OIDs) assigned to them, including attributes and object classes. An OID is a sequence of integers, usually written as a dot-separated string. All custom attributes and classes must conform to the X.500 and LDAP standards. Warning If an OID is not specified for a schema element, Directory Server automatically uses ObjectClass_name-oid and attribute_name-oid. However, using text OIDs instead of numeric OIDs can lead to problems with clients, server interoperability, and server behavior, so assigning a numeric OID is strongly recommended. OIDs can be built on. The base OID is a root number which is used for every schema element for an organization, and schema elements are then incremented from there. For example, a base OID could be 1. The company then uses 1.1 for attributes, so every new attribute has an OID of 1.1.x. It uses 1.2 for object classes, so every new object class has an OID of 1.2.x. For Directory Server-defined schema elements, the base OIDs are as follows: The Netscape base OID is 2.16.840.1.113730. The Directory Server base OID is 2.16.840.1.113730.3. All Netscape-defined attributes have the base OID 2.16.840.1.113730.3.1. All Netscape-defined object classes have the base OID 2.16.840.1.113730.3.2. For more information about OIDs or to request a prefix, go to the Internet Assigned Number Authority (IANA) website at http://www.iana.org/. 5.1.4. Extending the Schema The Directory Server schema includes hundreds of object classes and attributes that can be used to meet most directory requirements. This schema can be extended with new object classes and attributes that meet evolving requirements for the directory service in the enterprise by creating custom schema files. When adding new attributes to the schema, a new object class should be created to contain them. Adding a new attribute to an existing object class can compromise the Directory Server's compatibility with existing LDAP clients that rely on the standard LDAP schema and may cause difficulties when upgrading the server. For more information about extending server schema, see the Deployment Guide.
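As a sketch of what such an extension might look like, the following hypothetical definitions use an invented attribute (exampleContact), an invented object class (examplePerson), and an invented private base OID (1.3.6.1.4.1.99999); a real deployment would substitute its own registered OIDs:

# Custom attribute under the organization's private base OID.
attributeTypes: ( 1.3.6.1.4.1.99999.1.1 NAME 'exampleContact'
  DESC 'Internal contact alias for example.com staff'
  SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 SINGLE-VALUE
  X-ORIGIN 'user defined' )
# New auxiliary object class to carry the attribute, instead of
# modifying a standard class such as inetOrgPerson.
objectClasses: ( 1.3.6.1.4.1.99999.2.1 NAME 'examplePerson'
  DESC 'Auxiliary class for example.com-specific attributes'
  SUP top AUXILIARY MAY ( exampleContact )
  X-ORIGIN 'user defined' )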
5.1.5. Schema Checking Schema checking means that the Directory Server checks every entry when it is created, modified, or imported into a database using LDIF, to make sure that it complies with the schema definitions in the schema files. Schema checking verifies three things: Object classes and attributes used in the entry are defined in the directory schema. Attributes required for an object class are contained in the entry. Only attributes allowed by the object class are contained in the entry. You should run Directory Server with schema checking turned on. For information on enabling schema checking, see the Administration Guide. 5.1.6. Syntax Validation Syntax validation means that the Directory Server checks that the value of an attribute matches the required syntax for that attribute. For example, syntax validation will confirm that a new telephoneNumber attribute actually has a valid telephone number for its value. With its basic configuration, syntax validation (like schema checking) will check any directory modification to make sure the attribute value matches the required syntax and will reject any modifications that violate the syntax. Optionally, syntax validation can be configured to log warning messages about syntax violations and then either reject the change or allow the modification process to succeed. Attribute syntaxes are validated against RFC 4517, with the exception of DNs. By default, DNs are validated against RFC 1779 or RFC 2253, which are less strict than RFC 4514; strict validation for DNs has to be explicitly configured. This feature checks all attribute syntaxes listed in Table 5.1, "Supported LDAP Attribute Syntaxes", with the exception of binary syntaxes (which cannot be verified) and non-standard syntaxes, which do not have a defined required format. The unvalidated syntaxes are as follows: Fax (binary) OctetString (binary) JPEG (binary) Binary (non-standard) Space-Insensitive String (non-standard) URI (non-standard) When syntax validation is enabled, new attribute values are checked whenever an attribute is added to or modified on an entry. (This does not include replication changes, since the syntax would have been checked on the supplier server.) It is also possible to check existing attribute values for syntax violations by running the syntax-validation.pl script. For information on options for syntax validation, see the Administration Guide.
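As a rough sketch of how this is commonly switched on, assuming the nsslapd-syntaxcheck and nsslapd-syntaxlogging configuration attributes on cn=config (see the Administration Guide for the authoritative procedure and values):

# LDIF for ldapmodify: turn on syntax validation and also log
# warnings about values that violate their attribute syntax.
dn: cn=config
changetype: modify
replace: nsslapd-syntaxcheck
nsslapd-syntaxcheck: on
-
replace: nsslapd-syntaxlogging
nsslapd-syntaxlogging: on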
5.2. Entry Attribute Reference The attributes listed in this reference are manually assigned to or available on directory entries. The attributes are listed in alphabetical order with their definition, syntax, and OID. 5.2.1. abstract The abstract attribute contains an abstract for a document entry. OID 0.9.2342.19200300.102.1.9 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Internet White Pages Pilot 5.2.2. accessTo This attribute defines what specific hosts or servers a user is allowed to access. OID 5.3.6.1.1.1.1.1 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in nss_ldap/pam_ldap 5.2.3. accountInactivityLimit The accountInactivityLimit attribute sets the time period, in seconds, from the last login time of an account before that account is locked for inactivity. OID 1.3.6.1.4.1.11.1.3.2.1.3 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Directory Server 5.2.4. acctPolicySubentry The acctPolicySubentry attribute identifies any entry which belongs to an account policy (specifically, an account lockout policy). The value of this attribute points to the account policy which is applied to the entry. This can be set on an individual user entry or on a CoS template entry or role entry. OID 1.3.6.1.4.1.11.1.3.2.1.2 Syntax DN Multi- or Single-Valued Single-valued Defined in Directory Server 5.2.5. administratorContactInfo This attribute contains the contact information for the LDAP or server administrator. OID 2.16.840.1.113730.3.1.74 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services 5.2.6. adminRole This attribute contains the role assigned to the user identified in the entry. OID 2.16.840.1.113730.3.1.601 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape Administration Services 5.2.7. adminUrl This attribute contains the URL of the Administration Server. OID 2.16.840.1.113730.3.1.75 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services 5.2.8. aliasedObjectName The aliasedObjectName attribute is used by the Directory Server to identify alias entries. This attribute contains the DN (distinguished name) of the entry for which this entry is the alias. For example: OID 2.5.4.1 Syntax DN Multi- or Single-Valued Single-valued Defined in RFC 2256 5.2.9. associatedDomain The associatedDomain attribute contains the DNS domain associated with the entry in the directory tree. For example, the entry with the distinguished name c=US,o=Example Corporation has the associated domain of EC.US. These domains should be represented in RFC 822 order. OID 0.9.2342.19200300.100.1.37 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 1274 5.2.10. associatedName The associatedName attribute identifies an organizational directory tree entry associated with a DNS domain. For example: OID 0.9.2342.19200300.100.1.38 Syntax DN Multi- or Single-Valued Multi-valued Defined in RFC 1274 5.2.11. attributeTypes This attribute is used in a schema file to identify an attribute defined within the subschema. OID 2.5.21.5 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2252 5.2.12. audio The audio attribute contains a sound file in a binary format. This attribute uses u-law encoded sound data. For example: OID 0.9.2342.19200300.100.1.55 Syntax Binary Multi- or Single-Valued Multi-valued Defined in RFC 1274 5.2.13. authorCn The authorCn attribute contains the common name of the document's author. For example: OID 0.9.2342.19200300.102.1.11 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Internet White Pages Pilot 5.2.14. authorityRevocationList The authorityRevocationList attribute contains a list of revoked CA certificates. This attribute should be requested and stored in a binary format, like authorityRevocationList;binary. For example: OID 2.5.4.38 Syntax Binary Multi- or Single-Valued Multi-valued Defined in RFC 2256 5.2.15. authorSn The authorSn attribute contains the last name or family name of the author of a document entry. For example: OID 0.9.2342.19200300.102.1.12 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Internet White Pages Pilot 5.2.16. automountInformation This attribute contains information used by the autofs automounter. Note The automountInformation attribute is defined in 60autofs.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 60autofs.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd-instance/schema directory. OID 1.3.6.1.1.1.1.33 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in RFC 2307 5.2.17. bootFile This attribute contains the boot image file name. Note The bootFile attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd-instance/schema directory. OID 1.3.6.1.1.1.1.24 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in RFC 2307 5.2.18. bootParameter This attribute contains the value for rpc.bootparamd. Note The bootParameter attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd-instance/schema directory.
OID 1.3.6.1.1.1.1.23 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in RFC 2307 5.2.19. buildingName The buildingName attribute contains the building name associated with the entry. For example: OID 0.9.2342.19200300.100.1.48 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 1274 5.2.20. businessCategory The businessCategory attribute identifies the type of business in which the entry is engaged. The attribute value should be a broad generalization, such as a corporate division level. For example: OID 2.5.4.15 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 5.2.21. c (countryName) The countryName, or c, attribute contains the two-character country code which represents the country name. The country codes are defined by the ISO. For example: OID 2.5.4.6 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in RFC 2256 5.2.22. cACertificate The cACertificate attribute contains a CA certificate. The attribute should be requested and stored in binary format, such as cACertificate;binary. For example: OID 2.5.4.37 Syntax Binary Multi- or Single-Valued Multi-valued Defined in RFC 2256 5.2.23. carLicense The carLicense attribute contains an entry's automobile license plate number. For example: OID 2.16.840.1.113730.3.1.1 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2798 5.2.24. certificateRevocationList The certificateRevocationList attribute contains a list of revoked user certificates. The attribute value is to be requested and stored in binary form, as certificateRevocationList;binary. For example: OID 2.5.4.39 Syntax Binary Multi- or Single-Valued Multi-valued Defined in RFC 2256 5.2.25. cn (commonName) The commonName attribute contains the name of an entry. For user entries, the cn attribute is typically the person's full name. For example: With the LDAPReplica or LDAPServer object classes, the cn attribute value has the following format: OID 2.5.4.3 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 5.2.26. co (friendlyCountryName) The friendlyCountryName attribute contains a country name; this can be any string. Often, the c attribute is used with the ISO-designated two-letter country code, while the co attribute contains a readable country name. For example: OID 0.9.2342.19200300.100.1.43 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 1274 5.2.27. cosAttribute The cosAttribute contains the name of the attribute for which to generate a value for the CoS. There can be more than one cosAttribute value specified. This attribute is used by all types of CoS definition entries. OID 2.16.840.1.113730.3.1.550 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server 5.2.28. cosIndirectSpecifier The cosIndirectSpecifier specifies the attribute values used by an indirect CoS to identify the template entry. OID 2.16.840.1.113730.3.1.577 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Directory Server 5.2.29. cosPriority The cosPriority attribute specifies which template provides the attribute value when CoS templates compete to provide an attribute value. This attribute represents the global priority of a template. A priority of zero is the highest priority. OID 2.16.840.1.113730.3.1.569 Syntax Integer Multi- or Single-Valued Single-valued Defined in Directory Server
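To show how the CoS attributes described in this and the following entries work together, here is an illustrative classic CoS definition and template; all DNs and values are hypothetical, and the exact object classes to use are documented in the Administration Guide:

# Classic CoS definition: generate postalCode for entries under
# ou=People, picking the template named by each entry's departmentNumber.
dn: cn=postalCoS,ou=People,dc=example,dc=com
objectClass: top
objectClass: cosSuperDefinition
objectClass: cosClassicDefinition
cosTemplateDn: ou=Templates,ou=People,dc=example,dc=com
cosSpecifier: departmentNumber
cosAttribute: postalCode

# Matching template entry; cosPriority resolves competition between
# templates, with zero as the highest priority.
dn: cn=12345,ou=Templates,ou=People,dc=example,dc=com
objectClass: top
objectClass: extensibleObject
objectClass: cosTemplate
cosPriority: 0
postalCode: 27601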
5.2.30. cosSpecifier The cosSpecifier attribute contains the attribute value used by a classic CoS, which, along with the template entry's DN, identifies the template entry. OID 2.16.840.1.113730.3.1.551 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Directory Server 5.2.31. cosTargetTree The cosTargetTree attribute defines the subtrees to which the CoS schema applies. The target trees for this CoS schema and for other CoS schemas may overlap arbitrarily. OID 2.16.840.1.113730.3.1.552 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Directory Server 5.2.32. cosTemplateDn The cosTemplateDn attribute contains the DN of the template entry which contains a list of the shared attribute values. Changes to the template entry attribute values are automatically applied to all the entries within the scope of the CoS. A single CoS might have more than one template entry associated with it. OID 2.16.840.1.113730.3.1.553 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Directory Server 5.2.33. crossCertificatePair The value for the crossCertificatePair attribute must be requested and stored in binary format, such as crossCertificatePair;binary. For example: OID 2.5.4.40 Syntax Binary Multi- or Single-Valued Multi-valued Defined in RFC 2256 5.2.34. dc (domainComponent) The dc attribute contains one component of a domain name. For example: OID 0.9.2342.19200300.100.1.25 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in RFC 2247 5.2.35. deltaRevocationList The deltaRevocationList attribute contains a certificate revocation list (CRL). The attribute value is requested and stored in binary format, such as deltaRevocationList;binary. OID 2.5.4.53 Syntax Binary Multi- or Single-Valued Multi-valued Defined in RFC 2256 5.2.36. departmentNumber The departmentNumber attribute contains an entry's department number. For example: OID 2.16.840.1.113730.3.1.2 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2798 5.2.37. description The description attribute provides a human-readable description for an entry. For person or organization object classes, this can be used for the entry's role or work assignment. For example: OID 2.5.4.13 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 5.2.38. destinationIndicator The destinationIndicator attribute contains the city and country associated with the entry. This attribute was once required to provide public telegram service and is generally used in conjunction with the registeredAddress attribute. For example: OID 2.5.4.27 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 5.2.39. displayName The displayName attribute contains the preferred name of a person to use when displaying that person's entry. This is especially useful for showing the preferred name for an entry in a one-line summary list. Since other attribute types, such as cn, are multi-valued, they cannot be used to display a preferred name. For example: OID 2.16.840.1.113730.3.1.241 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in RFC 2798 5.2.40. dITRedirect The dITRedirect attribute indicates that the object described by one entry now has a newer entry in the directory tree. This attribute may be used when an individual's place of work changes, and the individual acquires a new organizational DN. OID 0.9.2342.19200300.100.1.54 Syntax DN Defined in RFC 1274
5.2.41. dmdName The dmdName attribute value specifies a directory management domain (DMD), the administrative authority that operates the Directory Server. OID 2.5.4.54 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in RFC 2256 5.2.42. dn (distinguishedName) The dn attribute contains an entry's distinguished name. For example: OID 2.5.4.49 Syntax DN Defined in RFC 2256 5.2.43. dNSRecord The dNSRecord attribute contains DNS resource records, including type A (Address), type MX (Mail Exchange), type NS (Name Server), and type SOA (Start of Authority) resource records. For example: OID 0.9.2342.19200300.100.1.26 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in Internet Directory Pilot 5.2.44. documentAuthor The documentAuthor attribute contains the DN of the author of a document entry. For example: OID 0.9.2342.19200300.100.1.14 Syntax DN Multi- or Single-Valued Multi-valued Defined in RFC 1274 5.2.45. documentIdentifier The documentIdentifier attribute contains a unique identifier for a document. For example: OID 0.9.2342.19200300.100.1.11 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 1274 5.2.46. documentLocation The documentLocation attribute contains the location of the original version of a document. For example: OID 0.9.2342.19200300.100.1.15 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 1274 5.2.47. documentPublisher The documentPublisher attribute contains the person or organization who published a document. For example: OID 0.9.2342.19200300.100.1.56 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in RFC 1274 5.2.48. documentStore The documentStore attribute contains information on where the document is stored. OID 0.9.2342.19200300.102.1.10 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Internet White Pages Pilot 5.2.49. documentTitle The documentTitle attribute contains a document's title. For example: OID 0.9.2342.19200300.100.1.12 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 1274 5.2.50. documentVersion The documentVersion attribute contains the current version number for the document. For example: OID 0.9.2342.19200300.100.1.13 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 1274 5.2.51. drink (favouriteDrink) The favouriteDrink attribute contains a person's favorite beverage. This can be shortened to drink. For example: OID 0.9.2342.19200300.100.1.5 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 1274 5.2.52. dSAQuality The dSAQuality attribute contains the rating of the directory system agent's (DSA) quality. This attribute allows a DSA manager to indicate the expected level of availability of the DSA. For example: OID 0.9.2342.19200300.100.1.49 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in RFC 1274 5.2.53. employeeNumber The employeeNumber attribute contains the employee number for the person. For example: OID 2.16.840.1.113730.3.1.3 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in RFC 2798 5.2.54. employeeType The employeeType attribute contains the employment type for the person. For example: OID 2.16.840.1.113730.3.1.4 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2798 5.2.55. enhancedSearchGuide The enhancedSearchGuide attribute contains information used by an X.500 client to construct search filters.
For example: OID 2.5.4.47 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 5.2.56. fax (facsimileTelephoneNumber) The facsimileTelephoneNumber attribute contains the entry's facsimile number; this attribute can be abbreviated as fax. For example: OID 2.5.4.23 Syntax TelephoneNumber Multi- or Single-Valued Multi-valued Defined in RFC 2256 5.2.57. gecos The gecos attribute is used to determine the GECOS field for the user. This is comparable to the cn attribute, although using a gecos attribute allows additional information to be embedded in the GECOS field aside from the common name. This field is also useful if the common name stored in the directory is not the user's full name. Note The gecos attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd-instance/schema directory. OID 1.3.6.1.1.1.1.2 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in RFC 2307 5.2.58. generationQualifier The generationQualifier attribute contains the generation qualifier for a person's name, which is usually appended as a suffix to the name. For example: OID 2.5.4.44 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 5.2.59. gidNumber The gidNumber attribute contains a unique numeric identifier for a group entry or identifies the group for a user entry. This is analogous to the group number in Unix. Note The gidNumber attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd-instance/schema directory. OID 1.3.6.1.1.1.1.1 Syntax Integer Multi- or Single-Valued Single-valued Defined in RFC 2307 5.2.60. givenName The givenName attribute contains an entry's given name, which is usually the first name. For example: OID 2.5.4.42 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 5.2.61. homeDirectory The homeDirectory attribute contains the path to the user's home directory. Note The homeDirectory attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd-instance/schema directory. OID 1.3.6.1.1.1.1.3 Syntax IA5String Multi- or Single-Valued Single-valued Defined in RFC 2307 5.2.62. homePhone The homePhone attribute contains the entry's residential phone number. For example: Note Although RFC 1274 defines both homeTelephoneNumber and homePhone as names for the residential phone number attribute, Directory Server only implements the homePhone name. OID 0.9.2342.19200300.100.1.20 Syntax TelephoneNumber Multi- or Single-Valued Multi-valued Defined in RFC 1274 5.2.63. homePostalAddress The homePostalAddress attribute contains an entry's home mailing address. Since this attribute generally spans multiple lines, each line break has to be represented by a dollar sign ($). To represent an actual dollar sign ($) or backslash (\) in the attribute value, use the escaped hex values \24 and \5c, respectively.
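For example, to represent a three-line address whose middle line contains a literal dollar sign and backslash (the address itself is invented):

# Intended lines: "The dollar ($) value" / "is in the c:\cost file" / "Raleigh, NC"
homePostalAddress: The dollar (\24) value$is in the c:\5ccost file$Raleigh, NC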
OID 0.9.2342.19200300.100.1.39 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 1274 5.2.64. host The host attribute contains the host name of a computer. For example: OID 0.9.2342.19200300.100.1.9 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 1274 5.2.65. houseIdentifier The houseIdentifier attribute contains an identifier for a specific building at a location. For example: OID 2.5.4.51 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 5.2.66. inetDomainBaseDN This attribute identifies the base DN of the user subtree for a DNS domain. OID 2.16.840.1.113730.3.1.690 Syntax DN Multi- or Single-Valued Single-valued Defined in Subscriber interoperability 5.2.67. inetDomainStatus This attribute shows the current status of the domain. A domain has a status of active, inactive, or deleted. OID 2.16.840.1.113730.3.1.691 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Subscriber interoperability 5.2.68. inetSubscriberAccountId This attribute contains a unique value used to link the user entry for the subscriber to a billing system. OID 2.16.840.1.113730.3.1.694 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Subscriber interoperability 5.2.69. inetSubscriberChallenge The inetSubscriberChallenge attribute contains a question or prompt, the challenge phrase, which is used to confirm the identity of the user in the subscriberIdentity attribute. This attribute is used in conjunction with the inetSubscriberResponse attribute, which contains the response to the challenge. OID 2.16.840.1.113730.3.1.695 Syntax IA5String Multi- or Single-Valued Single-valued Defined in Subscriber interoperability 5.2.70. inetSubscriberResponse The inetSubscriberResponse attribute contains the answer to the challenge question in the inetSubscriberChallenge attribute to verify the user in the subscriberIdentity attribute. OID 2.16.840.1.113730.3.1.696 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in Subscriber interoperability 5.2.71. inetUserHttpURL This attribute contains the web addresses associated with the user. OID 2.16.840.1.113730.3.1.693 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in Subscriber interoperability 5.2.72. inetUserStatus This attribute shows the current status of the user (subscriber). A user has a status of active, inactive, or deleted. OID 2.16.840.1.113730.3.1.692 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Subscriber interoperability 5.2.73. info The info attribute contains any general information about an object. Avoid using this attribute for specific information and rely instead on specific, possibly custom, attribute types. For example: OID 0.9.2342.19200300.100.1.4 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 1274 5.2.74. initials The initials attribute contains a person's initials; it does not contain the entry's surname. For example: Directory Server and Active Directory handle the initials attribute differently. The Directory Server allows a practically unlimited number of characters, while Active Directory has a restriction of six characters. If an entry is synced with a Windows peer and the value of the initials attribute is longer than six characters, then the value is automatically truncated to six characters when it is synchronized.
No information is written to the error log to indicate that synchronization changed the attribute value. OID 2.5.4.43 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 5.2.75. installationTimeStamp This contains the time that the server instance was installed. OID 2.16.840.1.113730.3.1.73 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services 5.2.76. internationalISDNNumber The internationalISDNNumber attribute contains the ISDN number of a document entry. This attribute uses the internationally recognized format for ISDN addresses given in CCITT Rec. E.164. OID 2.5.4.25 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in RFC 2256 5.2.77. ipHostNumber This contains the IP address for a server. Note The ipHostNumber attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd-instance/schema directory. OID 1.3.6.1.1.1.1.19 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2307 5.2.78. ipNetmaskNumber This contains the IP netmask for the server. Note The ipNetmaskNumber attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd-instance/schema directory. OID 1.3.6.1.1.1.1.21 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2307 5.2.79. ipNetworkNumber This identifies the IP network. Note The ipNetworkNumber attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd-instance/schema directory. OID 1.3.6.1.1.1.1.20 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in RFC 2307 5.2.80. ipProtocolNumber This attribute identifies the number of an IP protocol. Note The ipProtocolNumber attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd-instance/schema directory. OID 1.3.6.1.1.1.1.17 Syntax Integer Multi- or Single-Valued Single-valued Defined in RFC 2307 5.2.81. ipServicePort This attribute gives the port used by the IP service. Note The ipServicePort attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd-instance/schema directory. OID 1.3.6.1.1.1.1.15 Syntax Integer Multi- or Single-Valued Single-valued Defined in RFC 2307 5.2.82. ipServiceProtocol This identifies the protocol used by the IP service. Note The ipServiceProtocol attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd-instance/schema directory. OID 1.3.6.1.1.1.1.16 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2307
5.2.83. janetMailbox The janetMailbox attribute contains a JANET email address, usually for users located in the United Kingdom who do not use RFC 822 email addresses. Entries with this attribute must also contain the rfc822Mailbox attribute. OID 0.9.2342.19200300.100.1.46 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 1274 5.2.84. jpegPhoto The jpegPhoto attribute contains a JPEG photo, a binary value. For example: OID 0.9.2342.19200300.100.1.60 Syntax Binary Multi- or Single-Valued Multi-valued Defined in RFC 2798 5.2.85. keyWords The keyWords attribute contains keywords associated with the entry. For example: OID 0.9.2342.19200300.102.1.7 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Internet White Pages Pilot 5.2.86. knowledgeInformation This attribute is no longer used. OID 2.5.4.2 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 5.2.87. l (localityName) The localityName, or l, attribute contains the county, city, or other geographical designation associated with the entry. For example: OID 2.5.4.7 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 5.2.88. labeledURI The labeledURI attribute contains a Uniform Resource Identifier (URI) which is related, in some way, to the entry. Values placed in the attribute should consist of a URI (currently only URLs are supported), optionally followed by one or more space characters and a label. OID 1.3.6.1.4.1.250.1.57 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in RFC 2709 5.2.89. loginShell The loginShell attribute contains the path to a script that is launched automatically when a user logs into the domain. Note The loginShell attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd-instance/schema directory. OID 1.3.6.1.1.1.1.4 Syntax IA5String Multi- or Single-Valued Single-valued Defined in RFC 2307 5.2.90. macAddress This attribute gives the MAC address for a server or piece of equipment. Note The macAddress attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd-instance/schema directory. OID 1.3.6.1.1.1.1.22 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2307 5.2.91. mail The mail attribute contains a user's primary email address. This attribute value is retrieved and displayed by whitepage applications. For example: OID 0.9.2342.19200300.100.1.3 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in RFC 1274 5.2.92. mailAccessDomain This attribute lists the domain which a user can use to access the messaging server. OID 2.16.840.1.113730.3.1.12 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 5.2.93. mailAlternateAddress The mailAlternateAddress attribute contains additional email addresses for a user. This attribute does not reflect the default or primary email address; that email address is set by the mail attribute. For example: OID 2.16.840.1.113730.3.1.13 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 1274
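An illustrative fragment of a user entry (addresses invented) shows how the two attributes divide the work:

# mail holds the primary address; mailAlternateAddress holds the others.
mail: jsmith@example.com
mailAlternateAddress: john.smith@example.com
mailAlternateAddress: jsmith@mail.example.com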
5.2.94. mailAutoReplyMode This attribute sets whether automatic replies are enabled for the messaging server. OID 2.16.840.1.113730.3.1.14 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 5.2.95. mailAutoReplyText This attribute stores the text to be used in an auto-reply email. OID 2.16.840.1.113730.3.1.15 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 5.2.96. mailDeliveryOption This attribute defines the mail delivery mechanism to use for the mail user. OID 2.16.840.1.113730.3.1.16 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 5.2.97. mailEnhancedUniqueMember This attribute contains the DN of a unique member of a mail group. OID 2.16.840.1.113730.3.1.31 Syntax DN Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 5.2.98. mailForwardingAddress This attribute contains an email address to which to forward a user's email. OID 2.16.840.1.113730.3.1.17 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 5.2.99. mailHost The mailHost attribute contains the host name of a mail server. For example: OID 2.16.840.1.113730.3.1.18 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 5.2.100. mailMessageStore This identifies the location of a user's email box. OID 2.16.840.1.113730.3.1.19 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 5.2.101. mailPreferenceOption The mailPreferenceOption defines whether a user should be included on a mailing list, both electronic and physical. There are three options: 0, the user does not appear in mailing lists; 1, the user may be added to any mailing list; 2, the user may be added only to mailing lists which the provider views as relevant to the user's interests. If the attribute is absent, the default is to assume that the user is not included on any mailing list. Anyone using the directory to derive mailing lists should interpret this attribute and respect its value. For example: OID 0.9.2342.19200300.100.1.47 Syntax Integer Multi- or Single-Valued Single-valued Defined in RFC 1274 5.2.102. mailProgramDeliveryInfo This attribute contains any commands to use for programmed mail delivery. OID 2.16.840.1.113730.3.1.20 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 5.2.103. mailQuota This attribute sets the amount of disk space allowed for a user's mail box. OID 2.16.840.1.113730.3.1.21 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 5.2.104. mailRoutingAddress This attribute contains the routing address to use when forwarding emails received by the user to another messaging server. OID 2.16.840.1.113730.3.1.24 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 5.2.105. manager The manager attribute contains the distinguished name (DN) of the manager for the person. For example: OID 0.9.2342.19200300.100.1.10 Syntax DN Multi- or Single-Valued Multi-valued Defined in RFC 1274 5.2.106. member The member attribute contains the distinguished names (DNs) of each member of a group. For example: OID 2.5.4.31 Syntax DN Multi- or Single-Valued Multi-valued Defined in RFC 2256 5.2.107. memberCertificateDescription This attribute is a multi-valued attribute where each value is a description, a pattern, or a filter matching the subject DN of a certificate, usually a certificate used for TLS client authentication.
memberCertificateDescription matches any certificate that contains a subject DN with the same attribute-value assertions (AVAs) as the description. The description may contain multiple ou AVAs. A matching DN must contain those same ou AVAs, in the same order, although it may be interspersed with other AVAs, including other ou AVAs. For any other attribute type (not ou), there should be at most one AVA of that type in the description. If there are several, all but the last are ignored. A matching DN must contain that same AVA but no other AVA of the same type nearer the root (later, syntactically). AVAs are considered the same if they contain the same attribute description (case-insensitive comparison) and the same attribute value (case-insensitive comparison, leading and trailing whitespace ignored, and consecutive whitespace characters treated as a single space). To be considered a member of a group with the following memberCertificateDescription value, a certificate needs to include ou=x, ou=A, and dc=example, but not dc=company. To match the group's requirements, a certificate's subject DN must contain the same ou attribute types in the same order as defined in the memberCertificateDescription attribute. OID 2.16.840.1.113730.3.1.199 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in Directory Server 5.2.108. memberNisNetgroup This attribute merges the attribute values of another netgroup into the current one by listing the name of the merging netgroup. Note The memberNisNetgroup attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd-instance/schema directory. OID 1.3.6.1.1.1.1.13 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in RFC 2307 5.2.109. memberOf This attribute contains the name of a group of which the user is a member. memberOf is the default attribute generated by the MemberOf Plug-in on the user entry of a group member. This attribute is automatically synchronized to the listed member attributes in a group entry, so that displaying group membership for entries is managed by Directory Server. Note This attribute is only synchronized between group entries and the corresponding members' user entries if the MemberOf Plug-in is enabled and is configured to use this attribute. OID 1.2.840.113556.1.2.102 Syntax DN Multi- or Single-Valued Multi-valued Defined in Netscape Delegated Administrator 5.2.110. memberUid The memberUid attribute contains the login name of a member of a group; this can be different from the DN identified in the member attribute. Note The memberUid attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd-instance/schema directory. OID 1.3.6.1.1.1.1.12 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in RFC 2307 5.2.111. memberURL This attribute identifies a URL associated with each member of a group. Any type of labeled URL can be used. OID 2.16.840.1.113730.3.1.198 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in Directory Server
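As an illustrative sketch of the member/memberOf relationship described above (the DNs are invented), when the MemberOf Plug-in is enabled:

# Group entry maintained by the administrator.
dn: cn=Engineering,ou=Groups,dc=example,dc=com
objectClass: top
objectClass: groupOfNames
cn: Engineering
member: uid=jsmith,ou=People,dc=example,dc=com

# Corresponding value written automatically on the member's entry.
dn: uid=jsmith,ou=People,dc=example,dc=com
memberOf: cn=Engineering,ou=Groups,dc=example,dc=com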
5.2.112. mepManagedBy This attribute contains a pointer in an automatically-generated entry that points back to the DN of the originating entry. This attribute is set by the Managed Entries Plug-in and cannot be modified manually. OID 2.16.840.1.113730.3.1.2086 Syntax DN Multi- or Single-Valued Single-valued Defined in Directory Server 5.2.113. mepManagedEntry This attribute contains a pointer to an automatically-generated entry which corresponds to the current entry. This attribute is set by the Managed Entries Plug-in and cannot be modified manually. OID 2.16.840.1.113730.3.1.2087 Syntax DN Multi- or Single-Valued Single-valued Defined in Directory Server 5.2.114. mepMappedAttr This attribute sets an attribute in the Managed Entries template entry which must exist in the generated entry. The mapping means that some value of the originating entry is used to supply the given attribute. The values of these attributes are tokens in the form attribute: $attr. For example: As long as the syntax of the expanded token does not violate the required attribute syntax, other terms and strings can be used in the attribute. For example: OID 2.16.840.1.113730.3.1.2089 Syntax OctetString Multi- or Single-Valued Multi-valued Defined in Directory Server 5.2.115. mepRDNAttr This attribute sets which attribute to use as the naming attribute in the automatically-generated entry created by the Managed Entries Plug-in. Whatever attribute type is given in the naming attribute should be present in the Managed Entries template entry as a mepMappedAttr. OID 2.16.840.1.113730.3.1.2090 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Directory Server 5.2.116. mepStaticAttr This attribute sets an attribute with a defined value that must be added to the automatically-generated entry managed by the Managed Entries Plug-in. This value will be used for every entry generated by that instance of the Managed Entries Plug-in. OID 2.16.840.1.113730.3.1.2088 Syntax OctetString Multi- or Single-Valued Multi-valued Defined in Directory Server 5.2.117. mgrpAddHeader This attribute contains information about the header in the messages. OID 2.16.840.1.113730.3.1.781 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 5.2.118. mgrpAllowedBroadcaster This attribute sets whether to allow the user to send broadcast messages. OID 2.16.840.1.113730.3.1.22 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 5.2.119. mgrpAllowedDomain This attribute sets the domains for the mail group. OID 2.16.840.1.113730.3.1.23 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 5.2.120. mgrpApprovePassword This attribute sets whether a user must approve a password used to access their email. OID mgrpApprovePassword-oid Syntax IA5String Multi- or Single-Valued Single-valued Defined in Netscape Messaging Server 5.2.121. mgrpBroadcasterPolicy This attribute defines the policy for broadcasting emails. OID 2.16.840.1.113730.3.1.788 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 5.2.122. mgrpDeliverTo This attribute contains information about the delivery destination for email. OID 2.16.840.1.113730.3.1.25 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 5.2.123. mgrpErrorsTo This attribute contains information about where to deliver error messages for the messaging server. OID 2.16.840.1.113730.3.1.26 Syntax IA5String Multi- or Single-Valued Single-valued Defined in Netscape Messaging Server
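Pulling the mep* attributes above together, the following is an illustrative Managed Entries template; the DN, mappings, and values are hypothetical, and the plug-in configuration entry that points at the template is omitted:

# Hypothetical template: for each matching user entry, generate a group
# whose cn mirrors the user's uid and whose gidNumber copies uidNumber.
dn: cn=UPG Template,ou=Templates,dc=example,dc=com
objectClass: top
objectClass: mepTemplateEntry
cn: UPG Template
mepRDNAttr: cn
mepStaticAttr: objectclass: posixGroup
mepMappedAttr: cn: $uid
mepMappedAttr: gidNumber: $uidNumber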
5.2.124. mgrpModerator This attribute contains the contact name for the mailing list moderator. OID 2.16.840.1.113730.3.1.33 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 5.2.125. mgrpMsgMaxSize This attribute sets the maximum size allowed for email messages. OID 2.16.840.1.113730.3.1.32 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape Messaging Server 5.2.126. mgrpMsgRejectAction This attribute defines what actions the messaging server should take for rejected messages. OID 2.16.840.1.113730.3.1.28 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 5.2.127. mgrpMsgRejectText This attribute sets the text to use for rejection notifications. OID 2.16.840.1.113730.3.1.29 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 5.2.128. mgrpNoDuplicateChecks This attribute defines whether the messaging server checks for duplicate emails. OID 2.16.840.1.113730.3.1.789 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape Messaging Server 5.2.129. mgrpRemoveHeader This attribute sets whether the header is removed in reply messages. OID 2.16.840.1.113730.3.1.801 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 5.2.130. mgrpRFC822MailMember This attribute identifies the member of a mail group. OID 2.16.840.1.113730.3.1.30 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 5.2.131. mobile The mobile, or mobileTelephoneNumber, attribute contains the entry's mobile or cellular phone number. For example: OID 0.9.2342.19200300.100.1.41 Syntax TelephoneNumber Multi- or Single-Valued Multi-valued Defined in RFC 1274 5.2.132. mozillaCustom1 This attribute is used by Mozilla Thunderbird to manage a shared address book. OID 1.3.6.1.4.1.13769.4.1 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Mozilla Address Book 5.2.133. mozillaCustom2 This attribute is used by Mozilla Thunderbird to manage a shared address book. OID 1.3.6.1.4.1.13769.4.2 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Mozilla Address Book 5.2.134. mozillaCustom3 This attribute is used by Mozilla Thunderbird to manage a shared address book. OID 1.3.6.1.4.1.13769.4.3 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Mozilla Address Book 5.2.135. mozillaCustom4 This attribute is used by Mozilla Thunderbird to manage a shared address book. OID 1.3.6.1.4.1.13769.4.4 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Mozilla Address Book 5.2.136. mozillaHomeCountryName This attribute sets the country used by Mozilla Thunderbird in a shared address book. OID 1.3.6.1.4.1.13769.3.6 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Mozilla Address Book 5.2.137. mozillaHomeLocalityName This attribute sets the city used by Mozilla Thunderbird in a shared address book. OID 1.3.6.1.4.1.13769.3.3 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Mozilla Address Book 5.2.138. mozillaHomePostalCode This attribute sets the postal code used by Mozilla Thunderbird in a shared address book. OID 1.3.6.1.4.1.13769.3.5 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Mozilla Address Book 5.2.139. mozillaHomeState This attribute sets the state or province used by Mozilla Thunderbird in a shared address book.
OID 1.3.6.1.4.1.13769.3.4 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Mozilla Address Book 5.2.140. mozillaHomeStreet This attribute sets the street address used by Mozilla Thunderbird in a shared address book. OID 1.3.6.1.4.1.13769.3.1 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Mozilla Address Book 5.2.141. mozillaHomeStreet2 This attribute contains the second line of a street address used by Mozilla Thunderbird in a shared address book. OID 1.3.6.1.4.1.13769.3.2 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Mozilla Address Book 5.2.142. mozillaHomeUrl This attribute contains a URL used by Mozilla Thunderbird in a shared address book. OID 1.3.6.1.4.1.13769.3.7 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Mozilla Address Book 5.2.143. mozillaNickname (xmozillanickname) This attribute contains a nickname used by Mozilla Thunderbird for a shared address book. OID 1.3.6.1.4.1.13769.2.1 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Mozilla Address Book 5.2.144. mozillaSecondEmail (xmozillasecondemail) This attribute contains an alternate or secondary email address for an entry in a shared address book for Mozilla Thunderbird. OID 1.3.6.1.4.1.13769.2.2 Syntax IA5String Multi- or Single-Valued Single-valued Defined in Mozilla Address Book 5.2.145. mozillaUseHtmlMail (xmozillausehtmlmail) This attribute sets an email type preference for an entry in a shared address book in Mozilla Thunderbird. OID 1.3.6.1.4.1.13769.2.3 Syntax Boolean Multi- or Single-Valued Single-valued Defined in Mozilla Address Book 5.2.146. mozillaWorkStreet2 This attribute contains a street address for a workplace or office for an entry in Mozilla Thunderbird's shared address book. OID 1.3.6.1.4.1.13769.3.8 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Mozilla Address Book 5.2.147. mozillaWorkUrl This attribute contains a URL for a work site in an entry in a shared address book in Mozilla Thunderbird. OID 1.3.6.1.4.1.13769.3.9 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Mozilla Address Book 5.2.148. multiLineDescription This attribute contains a description of an entry which spans multiple lines in the LDIF file. OID 1.3.6.1.4.1.250.1.2 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Internet White Pages Pilot 5.2.149. name The name attribute identifies the attribute supertype which can be used to form string attribute types for naming. It is unlikely that values of this type will occur in an entry. LDAP server implementations that do not support attribute subtyping do not need to recognize this attribute in requests. Client implementations should not assume that LDAP servers are capable of performing attribute subtyping. OID 2.5.4.41 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 5.2.150. netscapeReversiblePassword This attribute contains the password for HTTP Digest/MD5 authentication. OID 2.16.840.1.113730.3.1.812 Syntax OctetString Multi- or Single-Valued Multi-valued Defined in Netscape Web Server 5.2.151. nisMapEntry This attribute contains the information for a NIS map to be used by Network Information Services. Note This attribute is defined in 10rfc2307.ldif in the Directory Server.
To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd-instance/schema directory. OID 1.3.6.1.1.1.1.27 Syntax IA5String Multi- or Single-Valued Single-valued Defined in RFC 2307 5.2.152. nisMapName This attribute contains the name of a mapping used by a NIS server. OID 1.3.6.1.1.1.1.26 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2307 5.2.153. nisNetgroupTriple This attribute contains information on a netgroup used by a NIS server. Note This attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd-instance/schema directory. OID 1.3.6.1.1.1.1.14 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in RFC 2307 5.2.154. nsAccessLog This attribute identifies the access log used by a server. OID nsAccessLog-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256 5.2.155. nsAdminAccessAddresses This attribute contains the IP address of the Administration Server used by the instance. OID nsAdminAccessAddresses-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services 5.2.156. nsAdminAccessHosts This attribute contains the host name of the Administration Server. OID nsAdminAccessHosts-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services 5.2.157. nsAdminAccountInfo This attribute contains other information about the Administration Server account. OID nsAdminAccountInfo-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services 5.2.158. nsAdminCacheLifetime This sets the length of time to store the cache used by the Directory Server. OID nsAdminCacheLifetime-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services 5.2.159. nsAdminCgiWaitPid This attribute defines the wait time for Administration Server CGI process IDs. OID nsAdminCgiWaitPid-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services 5.2.160. nsAdminDomainName This attribute contains the name of the administration domain containing the Directory Server instance. OID nsAdminDomainName-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services 5.2.161. nsAdminEnableEnduser This attribute sets whether to allow end user access to admin services. OID nsAdminEnableEnduser-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services 5.2.162. nsAdminEndUserHTMLIndex This attribute sets whether to allow end users to access the HTML index of admin services. OID nsAdminEndUserHTMLIndex-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services 5.2.163. nsAdminGroupName This attribute gives the name of the admin group. OID nsAdminGroupName-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services 5.2.164. nsAdminOneACLDir This attribute gives the path to the directory containing access control lists for the Administration Server.
OID nsAdminOneACLDir-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services
5.2.165. nsAdminSIEDN This attribute contains the DN of the server instance entry (SIE) for the Administration Server. OID nsAdminSIEDN-oid Syntax DN Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services
5.2.166. nsAdminUsers This attribute gives the path and name of the file which contains the information for the Administration Server admin user. OID nsAdminUsers-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services
5.2.167. nsAIMid This attribute contains the AOL Instant Messaging user ID for the user. OID 2.16.840.1.113730.3.2.300 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server
5.2.168. nsBaseDN This contains the base DN used in the Directory Server's server instance definition entry. OID nsBaseDN-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server
5.2.169. nsBindDN This attribute contains the bind DN defined in the Directory Server SIE. OID nsBindDN-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server
5.2.170. nsBindPassword This attribute contains the password used by the bind DN defined in nsBindDN . OID nsBindPassword-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server
5.2.171. nsBuildNumber This defines, in the Directory Server SIE, the build number of the server instance. OID nsBuildNumber-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.172. nsBuildSecurity This defines, in the Directory Server SIE, the build security level. OID nsBuildSecurity-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.173. nsCertConfig This attribute defines the configuration for the Red Hat Certificate System. OID nsCertConfig-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Certificate System
5.2.174. nsClassname OID nsClassname-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.175. nsConfigRoot This attribute contains the root DN of the configuration directory. OID nsConfigRoot-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.176. nscpAIMScreenname This attribute gives the AIM screen name of a user. OID 1.3.6.1.4.1.13769.2.4 Syntax TelephoneString Multi- or Single-Valued Multi-valued Defined in Mozilla Address Book
5.2.177. nsDefaultAcceptLanguage This attribute contains the language codes which are accepted for HTML clients. OID nsDefaultAcceptLanguage-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.178. nsDefaultObjectClass This attribute stores object class information in a container entry. OID nsDefaultObjectClass-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services
5.2.179. nsDeleteclassname OID nsDeleteclassname-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services
5.2.180. nsDirectoryFailoverList This attribute contains a list of Directory Servers to use for failover. OID nsDirectoryFailoverList-oid Syntax IA5String Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.181. nsDirectoryInfoRef This attribute refers to a DN of an entry with information about the server.
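Since the attribute holds a DN, a value might look like the following LDIF line; the entry named here is purely hypothetical:
nsDirectoryInfoRef: cn=Server Info, cn=Server Group, cn=host.example.com, o=NetscapeRoot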
OID nsDirectoryInfoRef-oid Syntax DN Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.182. nsDirectoryURL This attribute contains the Directory Server URL. OID nsDirectoryURL-oid Syntax IA5String Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.183. nsDisplayName This attribute contains a display name. OID nsDisplayName-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services
5.2.184. nsErrorLog This attribute identifies the error log used by the server. OID nsErrorLog-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.185. nsExecRef This attribute contains the path or location of an executable which can be used to perform server tasks. OID nsExecRef-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.186. nsExpirationDate This attribute contains the expiration date of an application. OID nsExpirationDate-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.187. nsGroupRDNComponent This attribute defines the attribute to use for the RDN of a group entry. OID nsGroupRDNComponent-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.188. nsHardwarePlatform This attribute indicates the hardware on which the server is running. The value of this attribute is the same as the output from uname -m . For example: x86_64 OID nsHardwarePlatform-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.189. nsHelpRef This attribute contains a reference to an online help file. OID nsHelpRef-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.190. nsHostLocation This attribute contains information about the server host. OID nsHostLocation-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.191. nsICQid This attribute contains an ICQ ID for the user. OID 2.16.840.1.113730.3.1.2014 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server
5.2.192. nsInstalledLocation This attribute contains the installation directory for Directory Servers which are version 7.1 or older. OID nsInstalledLocation-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.193. nsJarfilename This attribute gives the jar file name used by the Console. OID nsJarfilename-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.194. nsLdapSchemaVersion This gives the version number of the LDAP directory schema. OID nsLdapSchemaVersion-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.195. nsLicensedFor The nsLicensedFor attribute identifies the server the user is licensed to use. Administration Server expects each nsLicenseUser entry to contain zero or more instances of this attribute. Valid keywords for this attribute include the following: slapd for a licensed Directory Server client. mail for a licensed mail server client. news for a licensed news server client. cal for a licensed calendar server client. For example: nsLicensedFor: slapd OID 2.16.840.1.113730.3.1.36 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Administration Server
5.2.196. nsLicenseEndTime Reserved for future use. OID 2.16.840.1.113730.3.1.38 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Administration Server
5.2.197. nsLicenseStartTime Reserved for future use.
OID 2.16.840.1.113730.3.1.37 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Administration Server
5.2.198. nsLogSuppress This attribute sets whether to suppress server logging. OID nsLogSuppress-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape
5.2.199. nsmsgDisallowAccess This attribute defines access to a messaging server. OID nsmsgDisallowAccess-oid Syntax IA5String Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server
5.2.200. nsmsgNumMsgQuota This attribute sets a quota for the number of messages which will be kept by the messaging server. OID nsmsgNumMsgQuota-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server
5.2.201. nsMSNid This attribute contains the MSN instant messaging ID for the user. OID 2.16.840.1.113730.3.1.2016 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server
5.2.202. nsNickName This attribute gives a nickname for an application. OID nsNickName-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape
5.2.203. nsNYR OID nsNYR-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Administration Services
5.2.204. nsOsVersion This attribute contains the version number of the operating system for the host on which the server is running. OID nsOsVersion-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape
5.2.205. nsPidLog OID nsPidLog-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape
5.2.206. nsPreference This attribute stores the Console preference settings. OID nsPreference-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services
5.2.207. nsProductName This contains the name of the product, such as Red Hat Directory Server or Administration Server. OID nsProductName-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape
5.2.208. nsProductVersion This contains the version number of the Directory Server or Administration Server. OID nsProductVersion-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape
5.2.209. nsRevisionNumber This attribute contains the revision number of the Directory Server or Administration Server. OID nsRevisionNumber-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape
5.2.210. nsSecureServerPort This attribute contains the TLS port for the Directory Server. Note This attribute does not configure the TLS port for the Directory Server. This is configured in the nsslapd-secureport configuration attribute in the Directory Server's dse.ldif file. Configuration attributes are described in the Configuration, Command, and File Reference . OID nsSecureServerPort-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server
5.2.211. nsSerialNumber This attribute contains a serial number or tracking number assigned to a specific server application, such as Red Hat Directory Server or Administration Server. OID nsSerialNumber-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape
5.2.212. nsServerAddress This attribute contains the IP address of the server host on which the Directory Server is running. OID nsServerAddress-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape
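As an illustration of how several of these SIE attributes fit together, the following hypothetical LDIF fragment sketches part of a Directory Server SIE; the DN layout and every value are invented for the example:
dn: cn=slapd-example, cn=Server Group, cn=host.example.com, o=NetscapeRoot
nsServerID: slapd-example
nsServerAddress: 192.0.2.12
nsServerPort: 389
nsSecureServerPort: 636
nsBindDN: cn=Directory Manager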
5.2.213. nsServerCreationClassname This attribute gives the class name to use when creating a server. OID nsServerCreationClassname-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape
5.2.214. nsServerID This contains the server's instance name. For example: slapd-example OID nsServerID-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape
5.2.215. nsServerMigrationClassname This attribute contains the name of the class to use when migrating a server. OID nsServerMigrationClassname-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape
5.2.216. nsServerPort This attribute contains the standard LDAP port for the Directory Server. Note This attribute does not configure the standard port for the Directory Server. This is configured in the nsslapd-port configuration attribute in the Directory Server's dse.ldif file. Configuration attributes are described in the Configuration, Command, and File Reference . OID nsServerPort-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape
5.2.217. nsServerSecurity This shows whether the Directory Server requires a secure TLS or SSL connection. OID nsServerSecurity-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape
5.2.218. nsSNMPContact This attribute contains the contact information provided by the SNMP. OID 2.16.840.1.113730.3.1.235 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server
5.2.219. nsSNMPDescription This contains a description of the SNMP service. OID 2.16.840.1.113730.3.1.236 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server
5.2.220. nsSNMPEnabled This attribute shows whether SNMP is enabled for the server. OID 2.16.840.1.113730.3.1.232 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server
5.2.221. nsSNMPLocation This attribute shows the location provided by the SNMP service. OID 2.16.840.1.113730.3.1.234 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server
5.2.222. nsSNMPMasterHost This attribute shows the host name for the SNMP master agent. OID 2.16.840.1.113730.3.1.237 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server
5.2.223. nsSNMPMasterPort This attribute shows the port number for the SNMP subagent. OID 2.16.840.1.113730.3.1.238 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server
5.2.224. nsSNMPOrganization This attribute contains the organization information provided by SNMP. OID 2.16.840.1.113730.3.1.233 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server
5.2.225. nsSuiteSpotUser This attribute has been obsoleted. This attribute identifies the Unix user who installed the server. OID nsSuiteSpotUser-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape
5.2.226. nsTaskLabel OID nsTaskLabel-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape
5.2.227. nsUniqueAttribute This sets a unique attribute for the server preferences. OID nsUniqueAttribute-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services
5.2.228. nsUserIDFormat This attribute sets the format to use to generate the uid attribute from the givenname and sn attributes.
OID nsUserIDFormat-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services 5.2.229. nsUserRDNComponent This attribute sets the attribute type to set the RDN for user entries. OID nsUserRDNComponent-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services 5.2.230. nsValueBin OID 2.16.840.1.113730.3.1.247 Syntax Binary Multi- or Single-Valued Multi-valued Defined in Netscape servers - value item 5.2.231. nsValueCES OID 2.16.840.1.113730.3.1.244 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in Netscape servers - value item 5.2.232. nsValueCIS OID 2.16.840.1.113730.3.1.243 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape servers - value item 5.2.233. nsValueDefault OID 2.16.840.1.113730.3.1.250 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape servers - value item 5.2.234. nsValueDescription OID 2.16.840.1.113730.3.1.252 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape servers - value item 5.2.235. nsValueDN OID 2.16.840.1.113730.3.1.248 Syntax DN Multi- or Single-Valued Multi-valued Defined in Netscape servers - value item 5.2.236. nsValueFlags OID 2.16.840.1.113730.3.1.251 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape servers - value item 5.2.237. nsValueHelpURL OID 2.16.840.1.113730.3.1.254 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in Netscape servers - value item 5.2.238. nsValueInt OID 2.16.840.1.113730.3.1.246 Syntax Integer Multi- or Single-Valued Multi-valued Defined in Netscape servers - value item 5.2.239. nsValueSyntax OID 2.16.840.1.113730.3.1.253 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape servers - value item 5.2.240. nsValueTel OID 2.16.840.1.113730.3.1.245 Syntax TelephoneString Multi- or Single-Valued Multi-valued Defined in Netscape servers - value item 5.2.241. nsValueType OID 2.16.840.1.113730.3.1.249 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape servers - value item 5.2.242. nsVendor This contains the name of the server vendor. OID nsVendor-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape 5.2.243. nsViewConfiguration This attribute stores the view configuration used by Console. OID nsViewConfiguration-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services 5.2.244. nsViewFilter This attribute sets the attribute-value pair which is used to identify entries belonging to the view. OID 2.16.840.1.113730.3.1.3023 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in Directory Server 5.2.245. nsWellKnownJarfiles OID nsWellKnownJarfiles-oid Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services 5.2.246. nswmExtendedUserPrefs This attribute is used to store user preferences for accounts in a messaging server. OID 2.16.840.1.113730.3.1.520 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server 5.2.247. nsYIMid This attribute contains the Yahoo instant messaging user name for the user. OID 2.16.840.1.113730.3.1.2015 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Directory Server 5.2.248. ntGroupAttributes This attribute points to a binary file which contains information about the group. 
For example: OID 2.16.840.1.113730.3.1.536 Syntax Binary Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization
5.2.249. ntGroupCreateNewGroup The ntGroupCreateNewGroup attribute is used by Windows Sync to determine whether the Directory Server should create a new group entry when a new group is created on a Windows server. true creates the new entry; false ignores the Windows entry. OID 2.16.840.1.113730.3.1.45 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization
5.2.250. ntGroupDeleteGroup The ntGroupDeleteGroup attribute is used by Windows Sync to determine whether the Directory Server should delete a group entry when the group is deleted on a Windows sync peer server. true means the account is deleted; false ignores the deletion. OID 2.16.840.1.113730.3.1.46 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization
5.2.251. ntGroupDomainId The ntGroupDomainID attribute contains the domain ID string for a group. OID 2.16.840.1.113730.3.1.44 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization
5.2.252. ntGroupId The ntGroupId attribute points to a binary file which identifies the group. For example: OID 2.16.840.1.113730.3.1.110 Syntax Binary Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization
5.2.253. ntGroupType In Active Directory, there are two major types of groups: security and distribution. Security groups are most similar to groups in Directory Server, since security groups can have policies configured for access controls, resource restrictions, and other permissions. Distribution groups are for mailing distribution. These are further broken down into global and local groups. The Directory Server ntGroupType supports all four group types. The ntGroupType attribute identifies the type of Windows group. The valid values are as follows: -2147483646 for global/security -2147483644 for domain local/security 2 for global/distribution 4 for domain local/distribution This value is set automatically when Windows groups are synchronized. For a group created in the Directory Server, the group type must be configured manually when the group is created. By default, Directory Server groups do not have this attribute and are synchronized as global/security groups. OID 2.16.840.1.113730.3.1.47 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization
5.2.254. ntUniqueId The ntUniqueId attribute contains a generated number used for internal server identification and operation. For example: OID 2.16.840.1.113730.3.1.111 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization
5.2.255. ntUserAcctExpires This attribute indicates when the entry's Windows account will expire. This value is stored as a string in GMT format. For example: OID 2.16.840.1.113730.3.1.528 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization
5.2.256. ntUserAuthFlags This attribute contains authorization flags set for the Windows account. OID 2.16.840.1.113730.3.1.60 Syntax Binary Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization
5.2.257. ntUserBadPwCount This attribute sets the number of bad password failures allowed before an account is locked. OID 2.16.840.1.113730.3.1.531 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization
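To show how several of the ntUser* synchronization attributes described in this section might appear together on one entry, here is a hypothetical LDIF fragment for a synchronized user; every value is invented for the example:
dn: uid=jsmith,ou=People,dc=example,dc=com
objectClass: ntUser
ntUserDomainId: jsmith
ntUserCreateNewAccount: true
ntUserDeleteAccount: true
ntUserComment: Engineering staff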
5.2.258. ntUserCodePage The ntUserCodePage attribute contains the code page for the user's language of choice. For example: OID 2.16.840.1.113730.3.1.533 Syntax Binary Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization
5.2.259. ntUserComment This attribute contains a text description or note about the user entry. OID 2.16.840.1.113730.3.1.522 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization
5.2.260. ntUserCountryCode This attribute contains the two-character country code for the country where the user is located. OID 2.16.840.1.113730.3.1.532 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization
5.2.261. ntUserCreateNewAccount The ntUserCreateNewAccount attribute is used by Windows Sync to determine whether the Directory Server should create a new user entry when a new user is created on a Windows server. true creates the new entry; false ignores the Windows entry. OID 2.16.840.1.113730.3.1.42 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization
5.2.262. ntUserDeleteAccount The ntUserDeleteAccount attribute is used by Windows Sync to determine whether a Directory Server entry will be automatically deleted when the user is deleted from the Windows sync peer server. true means the user entry is deleted; false ignores the deletion. OID 2.16.840.1.113730.3.1.43 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization
5.2.263. ntUserDomainId The ntUserDomainId attribute contains the Windows domain login ID. For example: ntUserDomainId: jsmith OID 2.16.840.1.113730.3.1.41 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization
5.2.264. ntUserFlags This attribute contains additional flags set for the Windows account. OID 2.16.840.1.113730.3.1.523 Syntax Binary Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization
5.2.265. ntUserHomeDir The ntUserHomeDir attribute contains an ASCII string representing the Windows user's home directory. This attribute can be null. For example: OID 2.16.840.1.113730.3.1.521 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization
5.2.266. ntUserHomeDirDrive This attribute contains information about the drive on which the user's home directory is stored. OID 2.16.840.1.113730.3.1.535 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization
5.2.267. ntUserLastLogoff The ntUserLastLogoff attribute contains the time of the last logoff. This value is stored as a string in GMT format. If security logging is turned on, then this attribute is updated on synchronization only if some other aspect of the user's entry has changed. OID 2.16.840.1.113730.3.1.527 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization
5.2.268. ntUserLastLogon The ntUserLastLogon attribute contains the time that the user last logged into the Windows domain. This value is stored as a string in GMT format. If security logging is turned on, then this attribute is updated on synchronization only if some other aspect of the user's entry has changed. OID 2.16.840.1.113730.3.1.526 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization
5.2.269. ntUserLogonHours The ntUserLogonHours attribute contains the time periods that a user is allowed to log onto the Active Directory domain.
This attribute corresponds to the logonHours attribute in Active Directory. OID 2.16.840.1.113730.3.1.530 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization
5.2.270. ntUserLogonServer The ntUserLogonServer attribute defines the Active Directory server to which the user's logon request is forwarded. OID 2.16.840.1.113730.3.1.65 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization
5.2.271. ntUserMaxStorage The ntUserMaxStorage attribute contains the maximum amount of disk space available for the user. OID 2.16.840.1.113730.3.1.529 Syntax Binary Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization
5.2.272. ntUserNumLogons This attribute shows the number of successful logons to the Active Directory domain for the user. OID 2.16.840.1.113730.3.1.64 Syntax Binary Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization
5.2.273. ntUserParms The ntUserParms attribute contains a Unicode string reserved for use by applications. OID 2.16.840.1.113730.3.1.62 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization
5.2.274. ntUserPasswordExpired This attribute shows whether the password for the Active Directory account has expired. OID 2.16.840.1.113730.3.1.68 Syntax Binary Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization
5.2.275. ntUserPrimaryGroupId The ntUserPrimaryGroupId attribute contains the group ID of the primary group to which the user belongs. OID 2.16.840.1.113730.3.1.534 Syntax Binary Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization
5.2.276. ntUserPriv This attribute shows the type of privileges allowed for the user. OID 2.16.840.1.113730.3.1.59 Syntax Binary Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization
5.2.277. ntUserProfile The ntUserProfile attribute contains the path to a user's profile. For example: OID 2.16.840.1.113730.3.1.67 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization
5.2.278. ntUserScriptPath The ntUserScriptPath attribute contains the path to an ASCII script used by the user to log into the domain. OID 2.16.840.1.113730.3.1.524 Syntax Binary Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization
5.2.279. ntUserUniqueId The ntUserUniqueId attribute contains a unique numeric ID for the Windows user. OID 2.16.840.1.113730.3.1.66 Syntax Binary Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization
5.2.280. ntUserUnitsPerWeek The ntUserUnitsPerWeek attribute contains the total amount of time that the user has spent logged into the Active Directory domain. OID 2.16.840.1.113730.3.1.63 Syntax Binary Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization
5.2.281. ntUserUsrComment The ntUserUsrComment attribute contains additional comments about the user. OID 2.16.840.1.113730.3.1.61 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization
5.2.282. ntUserWorkstations The ntUserWorkstations attribute contains a list of names, in ASCII strings, of workstations which the user is allowed to log in to. There can be up to eight workstations listed, separated by commas. Specify null to permit users to log on from any workstation.
For example: ntUserWorkstations: eng1,eng2,eng3 OID 2.16.840.1.113730.3.1.525 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape NT Synchronization
5.2.283. o (organizationName) The organizationName , or o , attribute contains the organization name. For example: o: Example Corporation OID 2.5.4.10 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.284. objectClass The objectClass attribute identifies the object classes used for an entry. For example: objectClass: person OID 2.5.4.0 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.285. objectClasses This attribute is used in a schema file to identify an object class allowed by the subschema definition. OID 2.5.21.6 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2252
5.2.286. obsoletedByDocument The obsoletedByDocument attribute contains the distinguished name of a document which obsoletes the current document entry. OID 0.9.2342.19200300.102.1.4 Syntax DN Multi- or Single-Valued Multi-valued Defined in Internet White Pages Pilot
5.2.287. obsoletesDocument The obsoletesDocument attribute contains the distinguished name of a document which is obsoleted by the current document entry. OID 0.9.2342.19200300.102.1.3 Syntax DN Multi- or Single-Valued Multi-valued Defined in Internet White Pages Pilot
5.2.288. oncRpcNumber The oncRpcNumber attribute contains part of the RPC map and stores the RPC number for UNIX RPCs. Note The oncRpcNumber attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. OID 1.3.6.1.1.1.1.18 Syntax Integer Multi- or Single-Valued Single-valued Defined in RFC 2307
5.2.289. organizationalStatus The organizationalStatus attribute identifies the person's category within an organization. OID 0.9.2342.19200300.100.1.45 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 1274
5.2.290. otherMailbox The otherMailbox attribute contains values for email types other than X.400 and RFC 822. OID 0.9.2342.19200300.100.1.22 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 1274
5.2.291. ou (organizationalUnitName) The organizationalUnitName , or ou , contains the name of an organizational division or a subtree within the directory hierarchy. OID 2.5.4.11 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.292. owner The owner attribute contains the DN of the person responsible for an entry. For example: owner: cn=John Smith,ou=People,dc=example,dc=com OID 2.5.4.32 Syntax DN Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.293. pager The pagerTelephoneNumber , or pager , attribute contains a person's pager phone number. OID 0.9.2342.19200300.100.1.42 Syntax TelephoneNumber Multi- or Single-Valued Multi-valued Defined in RFC 1274
5.2.294. parentOrganization The parentOrganization attribute identifies the parent organization of an organization or organizational unit. OID 1.3.6.1.4.1.1466.101.120.41 Syntax DN Multi- or Single-Valued Single-valued Defined in Netscape
5.2.295. personalSignature The personalSignature attribute contains the entry's signature file, in binary format. OID 0.9.2342.19200300.100.1.53 Syntax Binary Multi- or Single-Valued Multi-valued Defined in RFC 1274
5.2.296. personalTitle The personalTitle attribute contains a person's honorific, such as Ms. , Dr. , Prof. , and Rev.
OID 0.9.2342.19200300.100.1.40 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 1274
5.2.297. photo The photo attribute contains a photo file, in a binary format. OID 0.9.2342.19200300.100.1.7 Syntax Binary Multi- or Single-Valued Multi-valued Defined in RFC 1274
5.2.298. physicalDeliveryOfficeName The physicalDeliveryOfficeName attribute contains the city or town in which a physical postal delivery office is located. OID 2.5.4.19 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.299. postalAddress The postalAddress attribute identifies the entry's mailing address. This field is intended to include multiple lines. When represented in LDIF format, each line should be separated by a dollar sign ($). To represent an actual dollar sign ($) or backslash (\) within the entry text, use the escaped hex values \24 and \5c respectively. For example, to represent the string: The dollar ($) value can be found in the c:\cost file. provide the string: The dollar (\24) value can be found in the c:\5ccost file. OID 2.5.4.16 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.300. postalCode The postalCode contains the zip code for an entry located within the United States. OID 2.5.4.17 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.301. postOfficeBox The postOfficeBox attribute contains the postal address number or post office box number for an entry's physical mailing address. OID 2.5.4.18 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.302. preferredDeliveryMethod The preferredDeliveryMethod contains an entry's preferred contact or delivery method. For example: telephone OID 2.5.4.28 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.303. preferredLanguage The preferredLanguage attribute contains a person's preferred written or spoken language. The value should conform to the syntax for HTTP Accept-Language header values. OID 2.16.840.1.113730.3.1.39 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in RFC 2798
5.2.304. preferredLocale A locale identifies language-specific information about how users of a specific region, culture, or custom expect data to be presented, including how data of a given language is interpreted and how data is to be sorted. Directory Server supports three locales for American English, Japanese, and German. The preferredLocale attribute sets which locale is preferred by a user. OID 1.3.6.1.4.1.1466.101.120.42 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape
5.2.305. preferredTimeZone The preferredTimeZone attribute sets the time zone to use for the user entry. OID 1.3.6.1.4.1.1466.101.120.43 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in Netscape
5.2.306. presentationAddress The presentationAddress attribute contains the OSI presentation address for an entry. This attribute includes the OSI Network Address and up to three selectors, one each for use by the transport, session, and presentation entities. For example: OID 2.5.4.29 Syntax IA5String Multi- or Single-Valued Single-valued Defined in RFC 2256
5.2.307. protocolInformation The protocolInformation attribute, used together with the presentationAddress attribute, provides additional information about the OSI network service. OID 2.5.4.48 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.308. pwdReset When an administrator changes the password of a user, Directory Server sets the pwdReset operational attribute in the user's entry to true . Applications can use this attribute to identify whether a user's password has been reset by an administrator. Note The pwdReset attribute is an operational attribute and, therefore, users cannot edit it. OID 1.3.6.1.4.1.1466.115.121.1.7 Syntax Boolean Multi- or Single-Valued Single-valued Defined in RFC draft-behera-ldap-password-policy
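For example, an application could check the flag with a search along these lines; operational attributes must be requested by name, and the bind DN, suffix, and user shown here are hypothetical:
ldapsearch -D "cn=Directory Manager" -W -b "dc=example,dc=com" "(uid=jsmith)" pwdReset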
5.2.309. ref The ref attribute is used to support LDAPv3 smart referrals. The value of this attribute is an LDAP URL of the form ldap://hostname:port/dn . The port number is optional. For example: ref: ldap://server.example.com/ou=People,dc=example,dc=com OID 2.16.840.1.113730.3.1.34 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in LDAPv3 Referrals Internet Draft
5.2.310. registeredAddress This attribute contains a postal address for receiving telegrams or expedited documents. The recipient's signature is usually required on delivery. OID 2.5.4.26 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.311. roleOccupant This attribute contains the distinguished name of the person acting in the role defined in the organizationalRole entry. OID 2.5.4.33 Syntax DN Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.312. roomNumber This attribute specifies the room number of an object. The cn attribute should be used for naming room objects. OID 0.9.2342.19200300.100.1.6 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 1274
5.2.313. searchGuide The searchGuide attribute specifies information for suggested search criteria when using the entry as the base object in the directory tree for a search operation. When constructing search filters, use the enhancedSearchGuide attribute instead. OID 2.5.4.14 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.314. secretary The secretary attribute identifies an entry's secretary or administrative assistant. OID 0.9.2342.19200300.100.1.21 Syntax DN Multi- or Single-Valued Multi-valued Defined in RFC 1274
5.2.315. seeAlso The seeAlso attribute identifies another Directory Server entry that may contain information related to this entry. OID 2.5.4.34 Syntax DN Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.316. serialNumber The serialNumber attribute contains the serial number of a device. OID 2.5.4.5 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.317. serverHostName The serverHostName attribute contains the host name of the server on which the Directory Server is running. OID 2.16.840.1.113730.3.1.76 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Red Hat Administration Services
5.2.318. serverProductName The serverProductName attribute contains the name of the server product. OID 2.16.840.1.113730.3.1.71 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Red Hat Administration Services
5.2.319. serverRoot This attribute is obsolete. This attribute shows the installation directory (server root) of Directory Servers version 7.1 or older. OID 2.16.840.1.113730.3.1.70 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Administration Services
5.2.320. serverVersionNumber The serverVersionNumber attribute contains the server version number. OID 2.16.840.1.113730.3.1.72 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Red Hat Administration Services
5.2.321. shadowExpire The shadowExpire attribute contains the date that the shadow account expires. The format of the date is the number of days since the epoch (January 1, 1970), in UTC.
To calculate this on the system, run a command like the following, using -d to supply the date and -u to specify UTC: echo $(( $(date -u -d "2010-01-08" +%s) / 86400 )) The result ( 14617 in the example) is then the value of shadowExpire . Note The shadowExpire attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. OID 1.3.6.1.1.1.1.10 Syntax Integer Multi- or Single-Valued Single-valued Defined in RFC 2307
5.2.322. shadowFlag The shadowFlag attribute identifies what area in the shadow map stores the flag values. Note The shadowFlag attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. OID 1.3.6.1.1.1.1.11 Syntax Integer Multi- or Single-Valued Single-valued Defined in RFC 2307
5.2.323. shadowInactive The shadowInactive attribute sets how long, in days, the shadow account can be inactive. Note The shadowInactive attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. OID 1.3.6.1.1.1.1.9 Syntax Integer Multi- or Single-Valued Single-valued Defined in RFC 2307
5.2.324. shadowLastChange The shadowLastChange attribute contains the number of days between January 1, 1970 and the day when the user password was last set. For example, if an account's password was last set on Nov 4, 2016, the shadowLastChange attribute is set to 17109 . The following exceptions exist: When the passwordMustChange parameter is enabled in the cn=config entry, new accounts have 0 set in the shadowLastChange attribute. When you create an account without a password, the shadowLastChange attribute is not added. The shadowLastChange attribute is automatically updated for accounts synchronized from Active Directory. Note The shadowLastChange attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. OID 1.3.6.1.1.1.1.5 Syntax Integer Multi- or Single-Valued Single-valued Defined in RFC 2307
5.2.325. shadowMax The shadowMax attribute sets the maximum number of days that a shadow password is valid. Note The shadowMax attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. OID 1.3.6.1.1.1.1.7 Syntax Integer Multi- or Single-Valued Single-valued Defined in RFC 2307
5.2.326. shadowMin The shadowMin attribute sets the minimum number of days that must pass between changing the shadow password. Note The shadowMin attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. OID 1.3.6.1.1.1.1.6 Syntax Integer Multi- or Single-Valued Single-valued Defined in RFC 2307
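As a concrete sketch of the schema update described in the notes above, the commands might look like the following; the instance name example is hypothetical, and this assumes a systemd-managed instance that is restarted for the change to take effect:
rm /etc/dirsrv/slapd-example/schema/10rfc2307.ldif
cp /usr/share/dirsrv/data/10rfc2307bis.ldif /etc/dirsrv/slapd-example/schema/
systemctl restart dirsrv@example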
5.2.327. shadowWarning The shadowWarning attribute sets how many days in advance of password expiration to send a warning to the user. Note The shadowWarning attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. OID 1.3.6.1.1.1.1.8 Syntax Integer Multi- or Single-Valued Single-valued Defined in RFC 2307
5.2.328. singleLevelQuality The singleLevelQuality specifies the purported data quality at the level immediately below in the directory tree. OID 0.9.2342.19200300.100.1.50 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in RFC 1274
5.2.329. sn (surname) The surname , or sn , attribute contains an entry's surname , also called a last name or family name. OID 2.5.4.4 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.330. st (stateOrProvinceName) The stateOrProvinceName , or st , attribute contains the entry's state or province. OID 2.5.4.8 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.331. street The streetAddress , or street , attribute contains an entry's street name and residential address. OID 2.5.4.9 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.332. subject The subject attribute contains information about the subject matter of the document entry. OID 0.9.2342.19200300.102.1.8 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Internet White Pages Pilot
5.2.333. subtreeMaximumQuality The subtreeMaximumQuality attribute specifies the purported maximum data quality for a directory subtree. OID 0.9.2342.19200300.100.1.52 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in RFC 1274
5.2.334. subtreeMinimumQuality The subtreeMinimumQuality specifies the purported minimum data quality for a directory subtree. OID 0.9.2342.19200300.100.1.51 Syntax DirectoryString Multi- or Single-Valued Single-valued Defined in RFC 1274
5.2.335. supportedAlgorithms The supportedAlgorithms attribute contains algorithms which are requested and stored in a binary form, such as supportedAlgorithms;binary . OID 2.5.4.52 Syntax Binary Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.336. supportedApplicationContext This attribute contains the identifiers of OSI application contexts. OID 2.5.4.30 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.337. telephoneNumber The telephoneNumber contains an entry's phone number. For example: telephoneNumber: +1 408 555 4798 OID 2.5.4.20 Syntax TelephoneNumber Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.338. teletexTerminalIdentifier The teletexTerminalIdentifier attribute contains an entry's teletex terminal identifier. The first printable string in the example is the encoding of the first portion of the teletex terminal identifier to be encoded, and the subsequent 0 or more octet strings are subsequent portions of the teletex terminal identifier: OID 2.5.4.22 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.339. telexNumber This attribute defines the telex number of the entry. The format of the telex number is as follows: actual-number $ country $ answerback actual-number is the syntactic representation of the number portion of the telex number being encoded.
country is the TELEX country code. answerback is the answerback code of a TELEX terminal. OID 2.5.4.21 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.340. title The title attribute contains a person's title within the organization. OID 2.5.4.12 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.341. ttl (TimeToLive) The TimeToLive , or ttl , attribute contains the time, in seconds, that cached information about an entry should be considered valid. Once the specified time has elapsed, the information is considered out of date. A value of zero ( 0 ) indicates that the entry should not be cached. OID 1.3.6.1.4.250.1.60 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in LDAP Caching Internet Draft
5.2.342. uid (userID) The userID , more commonly uid , attribute contains the entry's unique user name. OID 0.9.2342.19200300.100.1.1 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 1274
5.2.343. uidNumber The uidNumber attribute contains a unique numeric identifier for a user entry. This is analogous to the user number in Unix. Note The uidNumber attribute is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. OID 1.3.6.1.1.1.1.0 Syntax Integer Multi- or Single-Valued Single-valued Defined in RFC 2307
5.2.344. uniqueIdentifier This attribute identifies a specific item used to distinguish between two entries when a distinguished name has been reused. This attribute is intended to detect any instance of a reference to a distinguished name that has been deleted. This attribute is assigned by the server. OID 0.9.2342.19200300.100.1.44 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 1274
5.2.345. uniqueMember The uniqueMember attribute identifies a group of names associated with an entry where each name was given a uniqueIdentifier to ensure its uniqueness. A value for the uniqueMember attribute is a DN followed by the uniqueIdentifier . OID 2.5.4.50 Syntax DN Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.346. updatedByDocument The updatedByDocument attribute contains the distinguished name of a document that is an updated version of the document entry. OID 0.9.2342.19200300.102.1.6 Syntax DN Multi- or Single-Valued Multi-valued Defined in Internet White Pages Pilot
5.2.347. updatesDocument The updatesDocument attribute contains the distinguished name of a document for which this document is an updated version. OID 0.9.2342.19200300.102.1.5 Syntax DN Multi- or Single-Valued Multi-valued Defined in Internet White Pages Pilot
5.2.348. userCertificate This attribute is stored and requested in the binary form, as userCertificate;binary . OID 2.5.4.36 Syntax Binary Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.349. userClass This attribute specifies a category of computer user. The semantics of this attribute are arbitrary. The organizationalStatus attribute makes no distinction between computer users and other types of users and may be more applicable. OID 0.9.2342.19200300.100.1.8 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in RFC 1274
5.2.350. userPassword This attribute identifies the entry's password and encryption method in the format {encryption method}encrypted password .
For example: userPassword: {SSHA}hashed_password Transferring cleartext passwords is strongly discouraged where the underlying transport service cannot guarantee confidentiality. Transferring in cleartext may result in disclosure of the password to unauthorized parties. OID 2.5.4.35 Syntax Binary Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.351. userPKCS12 This attribute provides a format for the exchange of personal identity information. The attribute is stored and requested in binary form, as userPKCS12;binary . The attribute values are PFX PDUs stored as binary data. OID 2.16.840.1.113730.3.1.216 Syntax Binary Multi- or Single-Valued Multi-valued Defined in RFC 2798
5.2.352. userSMIMECertificate The userSMIMECertificate attribute contains certificates which can be used by mail clients for S/MIME. This attribute requests and stores data in a binary format. For example: OID 2.16.840.1.113730.3.1.40 Syntax Binary Multi- or Single-Valued Multi-valued Defined in RFC 2798
5.2.353. vacationEndDate This attribute shows the ending date of the user's vacation period. OID 2.16.840.1.113730.3.1.708 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server
5.2.354. vacationStartDate This attribute shows the start date of the user's vacation period. OID 2.16.840.1.113730.3.1.707 Syntax DirectoryString Multi- or Single-Valued Multi-valued Defined in Netscape Messaging Server
5.2.355. x121Address The x121Address attribute contains a user's X.121 address. OID 2.5.4.24 Syntax IA5String Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.2.356. x500UniqueIdentifier Reserved for future use. An X.500 identifier is a binary method of identification useful for differentiating objects when a distinguished name has been reused. OID 2.5.4.45 Syntax Binary Multi- or Single-Valued Multi-valued Defined in RFC 2256
5.3. Entry Object Class Reference This reference is an alphabetical list of the object classes accepted by the default schema. It gives a definition of each object class and lists its required and allowed attributes. The object classes listed are available to support entry information. The required attributes listed for an object class must be present in the entry when that object class is added to the directory's ldif file. If an object class has a superior object class, both of these object classes with all required attributes must be present in the entry. If required attributes are not listed in the ldif file, then the server will not restart. Note The LDAP RFCs and X.500 standards allow for an object class to have more than one superior object class. This behavior is not currently supported by Directory Server.
5.3.1. account The account object class defines entries for computer accounts. This object class is defined in RFC 1274 . Superior Class top OID 0.9.2342.19200300.100.4.5 Table 5.3. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes for the entry. Section 5.2.342, "uid (userID)" Gives the defined account's user ID. Table 5.4. Allowed Attributes Attribute Definition Section 5.2.37, "description" Gives a text description of the entry. Section 5.2.64, "host" Gives the host name for the machine on which the account resides. Section 5.2.87, "l (localityName)" Gives the city or geographical location of the entry. Section 5.2.283, "o (organizationName)" Gives the organization to which the account belongs. Section 5.2.291, "ou (organizationalUnitName)" Gives the organizational unit or division to which the account belongs. Section 5.2.315, "seeAlso" Contains a URL to another entry or site with related information.
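A minimal entry using this object class might look like the following hypothetical LDIF; every value is invented for the example:
dn: uid=buildbot,ou=Accounts,dc=example,dc=com
objectClass: top
objectClass: account
uid: buildbot
host: build01.example.com
description: Automated build account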
5.3.2. accountpolicy The accountpolicy object class defines entries for account inactivation or expiration policies. This is used for a user directory configuration entry, which works in conjunction with the Account Policy Plug-in configuration. Superior Class top OID 1.3.6.1.4.1.11.1.3.2.2.1 Table 5.5. Allowed Attributes Attribute Definition Section 5.2.3, "accountInactivityLimit" Sets the period, in seconds, from the last login time of an account before that account is locked for inactivity.
5.3.3. alias The alias object class points to other directory entries. This object class is defined in RFC 2256 . Note Aliasing entries is not supported in Red Hat Directory Server. Superior Class top OID 2.5.6.1 Table 5.6. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Defines the object classes for the entry. Section 5.2.8, "aliasedObjectName" Gives the distinguished name of the entry for which this entry is an alias.
5.3.4. bootableDevice The bootableDevice object class points to a device with boot parameters. This object class is defined in RFC 2307 . Note This object class is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. Superior Class top OID 1.3.6.1.1.1.2.12 Table 5.7. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Defines the object classes for the entry. Section 5.2.25, "cn (commonName)" Gives the common name of the device. Table 5.8. Allowed Attributes Attribute Definition Section 5.2.17, "bootFile" Gives the boot image file. Section 5.2.18, "bootParameter" Gives the parameters used by the boot process for the device. Section 5.2.37, "description" Gives a text description of the entry. Section 5.2.87, "l (localityName)" Gives the city or geographical location of the entry. Section 5.2.283, "o (organizationName)" Gives the organization to which the device belongs. Section 5.2.291, "ou (organizationalUnitName)" Gives the organizational unit or division to which the device belongs. Section 5.2.292, "owner" Gives the DN (distinguished name) of the person responsible for the device. Section 5.2.315, "seeAlso" Contains a URL to another entry or site with related information. Section 5.2.316, "serialNumber" Contains the serial number of the device.
5.3.5. cacheObject The cacheObject is an object that contains the time to live ( ttl ) attribute type. This object class is defined in the LDAP Caching Internet Draft. Superior Class top OID 1.3.6.1.4.1.250.3.18 Table 5.9. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Defines the object classes for the entry. Table 5.10. Allowed Attributes Attribute Definition Section 5.2.341, "ttl (TimeToLive)" The time that the object remains (lives) in the cache.
5.3.6. cosClassicDefinition The cosClassicDefinition object class defines a class of service template entry using the entry's DN (distinguished name), given in the Section 5.2.32, "cosTemplateDn" attribute, and the value of one of the target attributes, specified in the Section 5.2.30, "cosSpecifier" attribute. This object class is defined by Directory Server. Superior Class cosSuperDefinition OID 2.16.840.1.113730.3.2.100 Table 5.11. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.27, "cosAttribute" Provides the name of the attribute for which the CoS generates a value. There can be more than one cosAttribute value specified. Table 5.12. Allowed Attributes Attribute Definition Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Section 5.2.30, "cosSpecifier" Specifies the attribute value used by a classic CoS, which, along with the template entry's DN, identifies the template entry. Section 5.2.32, "cosTemplateDn" Provides the DN of the template entry which is associated with the CoS definition. Section 5.2.37, "description" Gives a text description of the entry.
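To illustrate how the classic CoS pieces fit together, here is a hypothetical definition and matching template; the DNs, the cosSpecifier value, and the generated attribute are all invented for the example:
dn: cn=classicCoS,dc=example,dc=com
objectClass: top
objectClass: LDAPsubentry
objectClass: cosSuperDefinition
objectClass: cosClassicDefinition
cosTemplateDn: ou=templates,dc=example,dc=com
cosSpecifier: employeeType
cosAttribute: postalCode
dn: cn=exempt,ou=templates,dc=example,dc=com
objectClass: top
objectClass: extensibleObject
objectClass: cosTemplate
postalCode: 44438
With such a pair in place, an entry whose employeeType value is exempt would have the postalCode value generated from the matching template.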
5.3.7. cosDefinition The cosDefinition object class defines which class of service is being used; this object class provides compatibility with the DS4.1 CoS Plug-in. This object class is defined by Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.84 Table 5.13. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Table 5.14. Allowed Attributes Attribute Definition Section 6.2, "aci" Evaluates what rights are granted or denied when the Directory Server receives an LDAP request from a client. Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Section 5.2.27, "cosAttribute" Provides the name of the attribute for which the CoS generates a value. There can be more than one cosAttribute value specified. Section 5.2.30, "cosSpecifier" Specifies the attribute value used by a classic CoS, which, along with the template entry's DN, identifies the template entry. Section 5.2.31, "cosTargetTree" Defines the subtrees in the directory to which the CoS schema applies. Section 5.2.32, "cosTemplateDn" Provides the DN of the template entry which is associated with the CoS definition. Section 5.2.342, "uid (userID)" Gives the user ID for the entry.
5.3.8. cosIndirectDefinition The cosIndirectDefinition defines the template entry using the value of one of the target entry's attributes. The attribute of the target entry is specified in the Section 5.2.28, "cosIndirectSpecifier" attribute. This object class is defined by Directory Server. Superior Class cosSuperDefinition OID 2.16.840.1.113730.3.2.102 Table 5.15. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.27, "cosAttribute" Provides the name of the attribute for which the CoS generates a value. There can be more than one cosAttribute value specified. Table 5.16. Allowed Attributes Attribute Definition Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Section 5.2.28, "cosIndirectSpecifier" Specifies the attribute value used by an indirect CoS to identify the template entry. Section 5.2.37, "description" Gives a text description of the entry.
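By comparison with the classic definition shown earlier, an indirect CoS definition names the attribute of the target entry that holds the template DN; a hypothetical sketch, with invented names:
dn: cn=managerCoS,dc=example,dc=com
objectClass: top
objectClass: LDAPsubentry
objectClass: cosSuperDefinition
objectClass: cosIndirectDefinition
cosIndirectSpecifier: manager
cosAttribute: departmentNumber
Here each target entry's manager DN is followed, and the departmentNumber value is read from that manager's entry.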
5.3.7. cosDefinition The cosDefinition object class defines which class of service is being used; this object class provides compatibility with the DS4.1 CoS Plug-in. This object class is defined by Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.84 Table 5.13. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Table 5.14. Allowed Attributes Attribute Definition Section 6.2, "aci" Evaluates what rights are granted or denied when the Directory Server receives an LDAP request from a client. Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Section 5.2.27, "cosAttribute" Provides the name of the attribute for which the CoS generates a value. There can be more than one cosAttribute value specified. Section 5.2.30, "cosSpecifier" Specifies the attribute value used by a classic CoS, which, along with the template entry's DN, identifies the template entry. Section 5.2.31, "cosTargetTree" Defines the subtrees in the directory to which the CoS schema applies. Section 5.2.32, "cosTemplateDn" Provides the DN of the template entry which is associated with the CoS definition. Section 5.2.342, "uid (userID)" Gives the user ID for the entry. 5.3.8. cosIndirectDefinition The cosIndirectDefinition object class defines the template entry using the value of one of the target entry's attributes. The attribute of the target entry is specified in the Section 5.2.28, "cosIndirectSpecifier" attribute. This object class is defined by Directory Server. Superior Class cosSuperDefinition OID 2.16.840.1.113730.3.2.102 Table 5.15. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.27, "cosAttribute" Provides the name of the attribute for which the CoS generates a value. There can be more than one cosAttribute value specified. Table 5.16. Allowed Attributes Attribute Definition Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Section 5.2.28, "cosIndirectSpecifier" Specifies the attribute value used by an indirect CoS to identify the template entry. Section 5.2.37, "description" Gives a text description of the entry. 5.3.9. cosPointerDefinition This object class identifies the template entry associated with the CoS definition using the template entry's DN value. The DN of the template entry is specified in the Section 5.2.32, "cosTemplateDn" attribute. This object class is defined by Directory Server. Superior Class cosSuperDefinition OID 2.16.840.1.113730.3.2.101 Table 5.17. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.27, "cosAttribute" Provides the name of the attribute for which the CoS generates a value. There can be more than one cosAttribute value specified. Table 5.18. Allowed Attributes Attribute Definition Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Section 5.2.32, "cosTemplateDn" Provides the DN of the template entry which is associated with the CoS definition. Section 5.2.37, "description" Gives a text description of the entry. 5.3.10. cosSuperDefinition All CoS definition object classes inherit from the cosSuperDefinition object class. This object class is defined by Directory Server. Superior Class LDAPsubentry OID 2.16.840.1.113730.3.2.99 Table 5.19. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.27, "cosAttribute" Provides the name of the attribute for which the CoS generates a value. There can be more than one cosAttribute value specified. Table 5.20. Allowed Attributes Attribute Definition Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Section 5.2.37, "description" Gives a text description of the entry. 5.3.11. cosTemplate The cosTemplate object class contains a list of the shared attribute values for the CoS. This object class is defined by Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.128 Table 5.21. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Table 5.22. Allowed Attributes Attribute Definition Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Section 5.2.29, "cosPriority" Specifies which template provides the attribute value when CoS templates compete to provide an attribute value. 5.3.12. country The country object class defines entries which represent countries. This object class is defined in RFC 2256. Superior Class top OID 2.5.6.2 Table 5.23. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.21, "c (countryName)" Contains the two-character code representing country names, as defined by ISO, in the directory. Table 5.24. Allowed Attributes Attribute Definition Section 5.2.37, "description" Gives a text description of the entry. Section 5.2.313, "searchGuide" Specifies information for suggested search criteria when using the entry as the base object in the directory tree for a search. 5.3.13. dcObject The dcObject object class allows domain components to be defined for an entry. This object class is defined as auxiliary because it is commonly used in combination with another object class, such as o (organization), ou (organizationalUnit), or l (locality). For an example, see the sample entry after this section's table. This object class is defined in RFC 2247. Superior Class top OID 1.3.6.1.4.1.1466.344 Table 5.25. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.34, "dc (domainComponent)" Contains one component of a domain name.
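For example, the following minimal sketch combines dcObject with the organization object class to create a suffix entry; the suffix and organization name are placeholders.

dn: dc=example,dc=com
objectclass: top
objectclass: organization
objectclass: dcObject
dc: example
o: Example Corporation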
5.3.14. device The device object class stores information about network devices, such as printers, in the directory. This object class is defined in RFC 2256. Superior Class top OID 2.5.6.14 Table 5.26. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the device. Section 5.2.25, "cn (commonName)" Gives the common name of the device. Table 5.27. Allowed Attributes Attribute Definition Section 5.2.37, "description" Gives a text description of the entry. Section 5.2.87, "l (localityName)" Gives the city or geographical location of the entry. Section 5.2.283, "o (organizationName)" Gives the organization to which the device belongs. Section 5.2.291, "ou (organizationalUnitName)" Gives the organizational unit or division to which the device belongs. Section 5.2.292, "owner" Gives the DN (distinguished name) of the person responsible for the device. Section 5.2.315, "seeAlso" Contains a URL to another entry or site with related information. Section 5.2.316, "serialNumber" Contains the serial number of the device. 5.3.15. document The document object class defines directory entries that represent documents. This object class is defined in RFC 1274. Superior Class top OID 0.9.2342.19200300.100.4.6 Table 5.28. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.45, "documentIdentifier" Gives the unique ID for the document. Table 5.29. Allowed Attributes Attribute Definition Section 5.2.1, "abstract" Contains the abstract for the document. Section 5.2.12, "audio" Stores a sound file in binary format. Section 5.2.13, "authorCn" Gives the author's common name or given name. Section 5.2.15, "authorSn" Gives the author's surname. Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Section 5.2.37, "description" Gives a text description of the entry. Section 5.2.40, "dITRedirect" Contains the DN (distinguished name) of the entry to use as a redirect for the document entry. Section 5.2.44, "documentAuthor" Contains the DN (distinguished name) of the author. Section 5.2.46, "documentLocation" Gives the location of the original document. Section 5.2.47, "documentPublisher" Identifies the person or organization that published the document. Section 5.2.48, "documentStore" Section 5.2.49, "documentTitle" Contains the title of the document. Section 5.2.50, "documentVersion" Gives the version number of the document. Section 5.2.73, "info" Contains information about the document. Section 5.2.84, "jpegPhoto" Stores a JPG image. Section 5.2.85, "keyWords" Contains keywords related to the document. Section 5.2.87, "l (localityName)" Gives the city or geographical location of the entry. Section 6.13, "lastModifiedBy" Gives the DN (distinguished name) of the last user which modified the document entry. Section 6.14, "lastModifiedTime" Gives the time of the last modification. Section 5.2.105, "manager" Gives the DN (distinguished name) of the entry's manager. Section 5.2.283, "o (organizationName)" Gives the organization to which the document belongs. Section 5.2.286, "obsoletedByDocument" Gives the DN (distinguished name) of another document entry which obsoletes this document. Section 5.2.287, "obsoletesDocument" Gives the DN (distinguished name) of another document entry which is obsoleted by this document. Section 5.2.291, "ou (organizationalUnitName)" Gives the organizational unit or division to which the document belongs. Section 5.2.297, "photo" Stores a photo of the document in binary format. Section 5.2.315, "seeAlso" Contains a URL to another entry or site with related information. Section 5.2.332, "subject" Describes the subject of the document. Section 5.2.344, "uniqueIdentifier" Distinguishes between two entries when a distinguished name has been reused. Section 5.2.346, "updatedByDocument" Gives the DN (distinguished name) of another document entry which updates this document.
Section 5.2.347, "updatesDocument" Gives the DN (distinguished name) of another document entry which is updated by this document. 5.3.16. documentSeries The documentSeries object class defines an entry that represents a series of documents. This object class is defined in RFC 1274. Superior Class top OID 0.9.2342.19200300.100.4.9 Table 5.30. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Table 5.31. Allowed Attributes Attribute Definition Section 5.2.37, "description" Gives a text description of the entry. Section 5.2.87, "l (localityName)" Gives the place where the document series is physically located. Section 5.2.283, "o (organizationName)" Gives the organization to which the document series belongs. Section 5.2.291, "ou (organizationalUnitName)" Gives the organizational unit or division to which the series belongs. Section 5.2.315, "seeAlso" Contains a URL to another entry or site with related information. Section 5.2.337, "telephoneNumber" Gives the telephone number of the person responsible for the document series. 5.3.17. domain The domain object class defines directory entries that represent DNS domains. Use the Section 5.2.34, "dc (domainComponent)" attribute to name entries of this object class. This object class is also used for Internet domain names, such as example.com. The domain object class can only be used for a directory entry which does not correspond to an organization, organizational unit, or any other object for which an object class has been defined. This object class is defined in RFC 2247. Superior Class top OID 0.9.2342.19200300.100.4.13 Table 5.32. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.34, "dc (domainComponent)" Contains one component of a domain name. Table 5.33. Allowed Attributes Attribute Definition Section 5.2.10, "associatedName" Gives the name of an entry within the organizational directory tree which is associated with a DNS domain. Section 5.2.20, "businessCategory" Gives the type of business in which this domain is engaged. Section 5.2.37, "description" Gives a text description of the entry. Section 5.2.38, "destinationIndicator" Gives the country and city associated with the entry; this was once required to provide public telegram service. Section 5.2.56, "fax (facsimileTelephoneNumber)" Gives the fax number for the domain. Section 5.2.76, "internationalISDNNumber" Gives the ISDN number for the domain. Section 5.2.87, "l (localityName)" Gives the city or geographical location of the entry. Section 5.2.283, "o (organizationName)" Gives the organization to which the entry belongs. Section 5.2.298, "physicalDeliveryOfficeName" Gives a location where physical deliveries can be made. Section 5.2.301, "postOfficeBox" Gives the post office box number for the domain. Section 5.2.299, "postalAddress" Contains the mailing address for the domain. Section 5.2.300, "postalCode" Gives the postal code for the domain, such as the zip code in the United States. Section 5.2.302, "preferredDeliveryMethod" Shows the preferred method of contact or message delivery. Section 5.2.310, "registeredAddress" Gives a postal address suitable to receive expedited documents when the recipient must verify delivery.
Section 5.2.313, "searchGuide" Specifies information for suggested search criteria when using the entry as the base object in the directory tree for a search. Section 5.2.315, "seeAlso" Contains a URL to another entry or site with related information. Section 5.2.330, "st (stateOrProvinceName)" Gives the state or province where the domain is located. Section 5.2.331, "street" Gives the street name and address number for the domain's physical location. Section 5.2.337, "telephoneNumber" Gives the phone number for the domain. Section 5.2.338, "teletexTerminalIdentifier" Gives the ID for a domain's teletex terminal. Section 5.2.339, "telexNumber" Gives the telex number for the domain. Section 5.2.350, "userPassword" Stores the password with which the entry can bind to the directory. Section 5.2.355, "x121Address" Gives the X.121 address for the domain. 5.3.18. domainRelatedObject The domainRelatedObject object class defines entries that represent DNS or NRS domains which are equivalent to an X.500 domain, such as an organization or organizational unit. This object class is defined in RFC 1274. Superior Class top OID 0.9.2342.19200300.100.4.17 Table 5.34. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.9, "associatedDomain" Specifies a DNS domain associated with an object in the directory tree. 5.3.19. dSA The dSA object class defines entries that represent DSAs (directory system agents). This object class is defined in RFC 2256. Superior Class top OID 2.5.6.13 Table 5.35. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Section 5.2.306, "presentationAddress" Contains the entry's OSI presentation address. Table 5.36. Allowed Attributes Attribute Definition Section 5.2.37, "description" Gives a text description of the entry. Section 5.2.86, "knowledgeInformation" Section 5.2.87, "l (localityName)" Gives the city or geographical location of the entry. Section 5.2.283, "o (organizationName)" Gives the organization to which the entry belongs. Section 5.2.291, "ou (organizationalUnitName)" Gives the organizational unit or division to which the entry belongs. Section 5.2.315, "seeAlso" Contains a URL to another entry or site with related information. Section 5.2.336, "supportedApplicationContext" Contains the identifiers of OSI application contexts. 5.3.20. extensibleObject When present in an entry, extensibleObject permits the entry to optionally hold any attribute. The allowed attribute list of this class is implicitly the set of all attributes known to the server. This object class is defined in RFC 2252. Superior Class top OID 1.3.6.1.4.1.1466.101.120.111 Table 5.37. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Allowed Attributes All attributes known to the server.
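For example, adding extensibleObject to an entry allows attributes outside the entry's normal schema to be stored on it. The following minimal sketch, with hypothetical entry names, adds a roomNumber value to a device entry, which the device object class alone would not allow:

dn: cn=printer1,dc=example,dc=com
objectclass: top
objectclass: device
objectclass: extensibleObject
cn: printer1
roomNumber: B-204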
5.3.21. friendlyCountry The friendlyCountry object class defines country entries within the directory. This object class allows more friendly names than the country object class. This object class is defined in RFC 1274. Superior Class top OID 0.9.2342.19200300.100.4.18 Table 5.38. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.26, "co (friendlyCountryName)" Stores the human-readable country name. Section 5.2.21, "c (countryName)" Contains the two-character code representing country names, as defined by ISO, in the directory. Table 5.39. Allowed Attributes Attribute Definition Section 5.2.37, "description" Gives a text description of the entry. Section 5.2.313, "searchGuide" Specifies information for suggested search criteria when using the entry as the base object in the directory tree for a search. 5.3.22. groupOfCertificates The groupOfCertificates object class describes a set of X.509 certificates. Any certificate that matches one of the Section 5.2.107, "memberCertificateDescription" values is considered a member of the group. Superior Class top OID 2.16.840.1.113730.3.2.31 Table 5.40. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Table 5.41. Allowed Attributes Attribute Definition Section 5.2.20, "businessCategory" Gives the type of business in which the group is engaged. Section 5.2.37, "description" Gives a text description of the entry. Section 5.2.107, "memberCertificateDescription" Contains the values used to determine if a particular certificate is a member of this group. Section 5.2.283, "o (organizationName)" Gives the organization to which the entry belongs. Section 5.2.291, "ou (organizationalUnitName)" Gives the organizational unit or division to which the entry belongs. Section 5.2.292, "owner" Contains the DN (distinguished name) of the person responsible for the group. Section 5.2.315, "seeAlso" Contains a URL to another entry or site with related information. 5.3.23. groupOfMailEnhancedUniqueNames The groupOfMailEnhancedUniqueNames object class is used for a mail group which must have unique members. This object class is defined for Netscape Messaging Server. Superior Class top OID 2.16.840.1.113730.3.2.5 Table 5.42. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Table 5.43. Allowed Attributes Attribute Definition Section 5.2.20, "businessCategory" Gives the type of business in which the group is engaged. Section 5.2.37, "description" Gives a text description of the entry. Section 5.2.97, "mailEnhancedUniqueMember" Contains a unique DN value to identify a member of the mail group. Section 5.2.283, "o (organizationName)" Gives the organization to which the entry belongs. Section 5.2.291, "ou (organizationalUnitName)" Gives the organizational unit or division to which the entry belongs. Section 5.2.292, "owner" Contains the DN (distinguished name) of the person responsible for the group. Section 5.2.315, "seeAlso" Contains a URL to another entry or site with related information. 5.3.24. groupOfNames The groupOfNames object class contains entries for a group of names. This object class is defined in RFC 2256. Note The definition for this object class in Directory Server differs from the standard definition. In the standard definition, Section 5.2.106, "member" is a required attribute, while in Directory Server it is an allowed attribute. Directory Server, therefore, allows a group to have no members. Superior Class top OID 2.5.6.9 Table 5.44. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Table 5.45.
Allowed Attributes Attribute Definition Section 5.2.20, "businessCategory" Gives the type of business in which the entry is engaged. Section 5.2.37, "description" Gives a text description of the entry. Section 5.2.106, "member" Contains the DN (distinguished name) of a group member. Section 5.2.283, "o (organizationName)" Gives the organization to which the entry belongs. Section 5.2.291, "ou (organizationalUnitName)" Gives the organizational unit or division to which the entry belongs. Section 5.2.292, "owner" Contains the DN (distinguished name) of the person responsible for the group. Section 5.2.315, "seeAlso" Contains a URL to another entry or site with related information. 5.3.25. groupOfUniqueNames The groupOfUniqueNames object class defines a group which contains unique names. Note The definition for this object class in Directory Server differs from the standard definition. In the standard definition, Section 5.2.345, "uniqueMember" is a required attribute, while in Directory Server it is an allowed attribute. Directory Server, therefore, allows a group to have no members. This object class is defined in RFC 2256 . Superior Class top OID 2.5.6.17 Table 5.46. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Table 5.47. Allowed Attributes Attribute Definition Section 5.2.20, "businessCategory" Gives the type of business in which the entry is engaged. Section 5.2.37, "description" Gives a text description of the entry. Section 5.2.283, "o (organizationName)" Gives the organization to which the entry belongs. Section 5.2.291, "ou (organizationalUnitName)" Gives the organizational unit or division to which the entry belongs. Section 5.2.292, "owner" Contains the DN (distinguished name) of the person responsible for the group. Section 5.2.315, "seeAlso" Contains a URL to another entry or site with related information. Section 5.2.345, "uniqueMember" Contains the DN (distinguished name) of a member of the group; this DN must be unique. 5.3.26. groupOfURLs The groupOfURLs object class is an auxiliary object class for the groupOfUniqueNames and groupOfNames object classes. This group consists of a list of labeled URLs. Superior Class top OID 2.16.840.1.113730.3.2.33 Table 5.48. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Table 5.49. Allowed Attributes Attribute Definition Section 5.2.20, "businessCategory" Gives the type of business in which the group is engaged. Section 5.2.37, "description" Gives a text description of the entry. Section 5.2.111, "memberURL" Contains a URL associated with each member of the group. Section 5.2.283, "o (organizationName)" Gives the organization to which the entry belongs. Section 5.2.291, "ou (organizationalUnitName)" Gives the organizational unit or division to which the entry belongs. Section 5.2.292, "owner" Contains the DN (distinguished name) of the person responsible for the group. Section 5.2.315, "seeAlso" Contains a URL to another entry or site with related information. 5.3.27. ieee802Device The ieee802Device object class points to a device with a MAC address. This object class is defined in RFC 2307. Note This object class is defined in 10rfc2307.ldif in the Directory Server. 
To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. Superior Class top OID 1.3.6.1.1.1.2.11 Table 5.50. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Defines the object classes for the entry. Section 5.2.25, "cn (commonName)" Gives the common name of the device. Table 5.51. Allowed Attributes Attribute Definition Section 5.2.37, "description" Gives a text description of the entry. Section 5.2.87, "l (localityName)" Gives the city or geographical location of the entry. Section 5.2.90, "macAddress" Gives the MAC address of the device. Section 5.2.283, "o (organizationName)" Gives the organization to which the device belongs. Section 5.2.291, "ou (organizationalUnitName)" Gives the organizational unit or division to which the device belongs. Section 5.2.292, "owner" Gives the DN (distinguished name) of the person responsible for the device. Section 5.2.315, "seeAlso" Contains a URL to another entry or site with related information. Section 5.2.316, "serialNumber" Contains the serial number of the device. 5.3.28. inetAdmin The inetAdmin object class is a marker for an administrative group or user. This object class is defined for the Netscape Delegated Administrator. Superior Class top OID 2.16.840.1.113730.3.2.112 Table 5.52. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Table 5.53. Allowed Attributes Attribute Definition Section 5.2.6, "adminRole" Identifies a role to which the administrative user belongs. Section 5.2.109, "memberOf" Contains a group name to which the administrative user belongs. This is dynamically managed by the MemberOf Plug-in. 5.3.29. inetDomain The inetDomain object class is an auxiliary class for virtual domain nodes. This object class is defined for the Netscape Delegated Administrator. Superior Class top OID 2.16.840.1.113730.3.2.129 Table 5.54. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Table 5.55. Allowed Attributes Attribute Definition Section 5.2.66, "inetDomainBaseDN" Defines the base DN of the user subtree for a DNS domain. Section 5.2.67, "inetDomainStatus" Gives the status of the domain. The status can be active, inactive, or deleted. 5.3.30. inetOrgPerson The inetOrgPerson object class defines entries representing people in an organization's enterprise network. This object class inherits the Section 5.2.25, "cn (commonName)" and Section 5.2.329, "sn (surname)" attributes from the person object class. This object class is defined in RFC 2798. Superior Class person OID 2.16.840.1.113730.3.2.2 Table 5.56. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Section 5.2.329, "sn (surname)" Gives the person's family name or last name. Table 5.57. Allowed Attributes Attribute Definition Section 5.2.12, "audio" Stores a sound file in binary format. Section 5.2.20, "businessCategory" Gives the type of business in which the entry is engaged. Section 5.2.23, "carLicense" Gives the license plate number of the person's vehicle. Section 5.2.36, "departmentNumber" Gives the department for which the person works. Section 5.2.37, "description" Gives a text description of the entry.
Section 5.2.38, "destinationIndicator" Gives the country and city associated with the entry; this was once required to provide public telegram service. Section 5.2.39, "displayName" Shows the preferred name of a person to use when displaying entries. Section 5.2.53, "employeeNumber" Contains the person's employee number. Section 5.2.54, "employeeType" Shows the person's type of employment (for example, full time). Section 5.2.56, "fax (facsimileTelephoneNumber)" Contains the person's fax number. Section 5.2.60, "givenName" Contains the person's first name. Section 5.2.62, "homePhone" Gives the person's home phone number. Section 5.2.63, "homePostalAddress" Gives the person's home mailing address. Section 5.2.74, "initials" Gives the person's initials. Section 5.2.76, "internationalISDNNumber" Contains the ISDN number for the entry. Section 5.2.84, "jpegPhoto" Stores a JPG image. Section 5.2.87, "l (localityName)" Gives the city or geographical location of the entry. Section 5.2.88, "labeledURI" Contains a URL which is relevant to the entry. Section 5.2.91, "mail" Contains the person's email address. Section 5.2.105, "manager" Contains the DN (distinguished name) of the direct supervisor of the person entry. Section 5.2.131, "mobile" Gives the person's mobile phone number. Section 5.2.283, "o (organizationName)" Gives the organization to which the entry belongs. Section 5.2.291, "ou (organizationalUnitName)" Gives the organizational unit or division to which the entry belongs. Section 5.2.293, "pager" Gives the person's pager number. Section 5.2.297, "photo" Stores a photo of a person, in binary format. Section 5.2.298, "physicalDeliveryOfficeName" Gives a location where physical deliveries can be made. Section 5.2.301, "postOfficeBox" Gives the post office box number for the entry. Section 5.2.299, "postalAddress" Contains the mailing address for the entry. Section 5.2.300, "postalCode" Gives the postal code for the entry, such as the zip code in the United States. Section 5.2.302, "preferredDeliveryMethod" Shows the person's preferred method of contact or message delivery. Section 5.2.303, "preferredLanguage" Gives the person's preferred written or spoken language. Section 5.2.310, "registeredAddress" Gives a postal address suitable to receive expedited documents when the recipient must verify delivery. Section 5.2.312, "roomNumber" Gives the room number where the person is located. Section 5.2.314, "secretary" Contains the DN (distinguished name) of the person's secretary or administrative assistant. Section 5.2.315, "seeAlso" Contains a URL to another entry or site with related information. Section 5.2.330, "st (stateOrProvinceName)" Gives the state or province where the entry is located. Section 5.2.331, "street" Gives the street name and number for the person's physical location. Section 5.2.337, "telephoneNumber" Gives the telephone number for the entry. Section 5.2.338, "teletexTerminalIdentifier" Gives the identifier for the person's teletex terminal. Section 5.2.339, "telexNumber" Gives the telex number associated with the entry. Section 5.2.340, "title" Shows the person's job title. Section 5.2.342, "uid (userID)" Contains the person's user ID (usually his logon ID). Section 5.2.348, "userCertificate" Stores a user's certificate in cleartext (not used). Section 5.2.350, "userPassword" Stores the password with which the entry can bind to the directory. Section 5.2.352, "userSMIMECertificate" Stores the person's certificate in binary form so it can be used by S/MIME clients. 
Section 5.2.355, "x121Address" Gives the X.121 address for the person. Section 5.2.356, "x500UniqueIdentifier" Reserved for future use. 5.3.31. inetSubscriber The inetSubscriber object class is used for general user account management. This object class is defined for the Netscape subscriber interoperability. Superior Class top OID 2.16.840.1.113730.3.2.134 Table 5.58. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Table 5.59. Allowed Attributes Attribute Definition Section 5.2.68, "inetSubscriberAccountId" Contains a unique attribute linking the subscriber to a billing system. Section 5.2.69, "inetSubscriberChallenge" Contains a question or prompt, the challenge phrase, which is used to confirm the identity of the user. Section 5.2.70, "inetSubscriberResponse" Contains the answer to the challenge question. 5.3.32. inetUser The inetUser object class is an auxiliary class which must be present in an entry in order to deliver subscriber services. This object class is defined for the Netscape subscriber interoperability. Superior Class top OID 2.16.840.1.113730.3.2.130 Table 5.60. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Table 5.61. Allowed Attributes Attribute Definition Section 5.2.71, "inetUserHttpURL" Contains web addresses associated with the user. Section 5.2.72, "inetUserStatus" Gives the status of the user. The status can be active, inactive, or deleted. Section 5.2.109, "memberOf" Contains a group name to which the user belongs. This is dynamically managed by the MemberOf Plug-in. Section 5.2.342, "uid (userID)" Contains the person's user ID (usually his logon ID). Section 5.2.350, "userPassword" Stores the password the user can use to access the account. 5.3.33. ipHost The ipHost object class stores IP information about a host. This object class is defined in RFC 2307. Note This object class is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. Superior Class top OID 1.3.6.1.1.1.2.6 Table 5.62. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Defines the object classes for the entry. Section 5.2.25, "cn (commonName)" Gives the common name of the device. Section 5.2.77, "ipHostNumber" Contains the IP address of the device or host. Table 5.63. Allowed Attributes Attribute Definition Section 5.2.37, "description" Gives a text description of the entry. Section 5.2.87, "l (localityName)" Gives the city or geographical location of the entry. Section 5.2.105, "manager" Contains the DN (distinguished name) of the maintainer or supervisor of the entry. Section 5.2.283, "o (organizationName)" Gives the organization to which the device belongs. Section 5.2.291, "ou (organizationalUnitName)" Gives the organizational unit or division to which the device belongs. Section 5.2.292, "owner" Gives the DN (distinguished name) of the person responsible for the device. Section 5.2.315, "seeAlso" Contains a URL to another entry or site with related information. Section 5.2.316, "serialNumber" Contains the serial number of the device. 5.3.34. ipNetwork The ipNetwork object class stores IP information about a network. This object class is defined in RFC 2307.
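For example, a network entry using this object class might look like the following minimal sketch; the DN and values are placeholders, and the attributes used here are described in the tables below.

dn: cn=campusnet,ou=networks,dc=example,dc=com
objectclass: top
objectclass: ipNetwork
cn: campusnet
ipNetworkNumber: 192.168.2.0
ipNetmaskNumber: 255.255.255.0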
Note This object class is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. Superior Class top OID 1.3.6.1.1.1.2.7 Table 5.64. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Defines the object classes for the entry. Section 5.2.25, "cn (commonName)" Gives the common name of the device. Section 5.2.79, "ipNetworkNumber" Contains the IP number for the network. Table 5.65. Allowed Attributes Attribute Definition Section 5.2.37, "description" Gives a text description of the entry. Section 5.2.87, "l (localityName)" Gives the city or geographical location of the entry. Section 5.2.105, "manager" Contains the DN (distinguished name) of the maintainer or supervisor of the entry. Section 5.2.78, "ipNetmaskNumber" Contains the IP netmask for the network. 5.3.35. ipProtocol The ipProtocol object class shows the IP protocol version. This object class is defined in RFC 2307 . Note This object class is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. Superior Class top OID 1.3.6.1.1.1.2.4 Table 5.66. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Defines the object classes for the entry. Section 5.2.25, "cn (commonName)" Gives the common name of the device. Section 5.2.80, "ipProtocolNumber" Contains the IP protocol number for the network. Table 5.67. Allowed Attributes Attribute Definition Section 5.2.37, "description" Gives a text description of the entry. 5.3.36. ipService The ipService object class stores information about the IP service. This object class is defined in RFC 2307 . Note This object class is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. Superior Class top OID 1.3.6.1.1.1.2.3 Table 5.68. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Defines the object classes for the entry. Section 5.2.25, "cn (commonName)" Gives the common name of the device. Section 5.2.81, "ipServicePort" Gives the port number used by the IP service. Section 5.2.82, "ipServiceProtocol" Contains the IP protocol number for the service. Table 5.69. Allowed Attributes Attribute Definition Section 5.2.37, "description" Gives a text description of the entry. 5.3.37. labeledURIObject This object class can be added to existing directory objects to allow URI values to be included. Using this object class does not preclude including the Section 5.2.88, "labeledURI" attribute type directly in other object classes as appropriate. This object class is defined in RFC 2079 . Superior Class top OID 1.3.6.1.4.1.250.3.15 Table 5.70. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Table 5.71. Allowed Attributes Attribute Definition Section 5.2.88, "labeledURI" Gives a URI which is relevant to the entry's object. 5.3.38. locality The locality object class defines entries that represent localities or geographic areas. This object class is defined in RFC 2256 . 
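For example, a locality entry might look like the following minimal sketch; the DN and attribute values are placeholders.

dn: l=Mountain View,dc=example,dc=com
objectclass: top
objectclass: locality
l: Mountain View
st: California
description: Example Corp. west coast campus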
Superior Class top OID 2.5.6.3 Table 5.72. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Table 5.73. Allowed Attributes Attribute Definition Section 5.2.37, "description" Gives a text description of the entry. Section 5.2.87, "l (localityName)" Gives the city or geographical location of the entry. Section 5.2.313, "searchGuide" Specifies information for suggested search criteria when using the entry as the base object in the directory tree for a search. Section 5.2.315, "seeAlso" Contains a URL to another entry or site with related information. Section 5.2.330, "st (stateOrProvinceName)" Gives the state or province associated with the locality. Section 5.2.331, "street" Gives a street and number associated with the locality. 5.3.39. mailGroup The mailGroup object class defines the mail attributes for a group. This object is defined in the schema for the Netscape Messaging Server. Superior Class top OID 2.16.840.1.113730.3.2.4 Table 5.74. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Table 5.75. Allowed Attributes Attribute Definition Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Section 5.2.91, "mail" Stores email addresses for the group. Section 5.2.93, "mailAlternateAddress" Contains secondary email addresses for the group. Section 5.2.99, "mailHost" Contains the host name of the mail server. Section 5.2.292, "owner" Contains the DN (distinguished name) of the person responsible for the group. 5.3.40. mailRecipient The mailRecipient object class defines a mail account for a user. This object is defined in the schema for the Netscape Messaging Server. Superior Class top OID 2.16.840.1.113730.3.2.3 Table 5.76. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Table 5.77. Allowed Attributes Attribute Definition Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Section 5.2.91, "mail" Stores the email address for the user. Section 5.2.92, "mailAccessDomain" Contains the domain from which the user can access the messaging server. Section 5.2.93, "mailAlternateAddress" Contains secondary email addresses for the user. Section 5.2.94, "mailAutoReplyMode" Specifies whether autoreply mode for the account is enabled. Section 5.2.95, "mailAutoReplyText" Contains the text used for automatic reply emails. Section 5.2.96, "mailDeliveryOption" Specifies the mail delivery mechanism to be used for the mail user. Section 5.2.98, "mailForwardingAddress" Contains the address to which the user's mail is forwarded. Section 5.2.99, "mailHost" Contains the host name of the mail server. Section 5.2.100, "mailMessageStore" Specifies the location of the user's mail box. Section 5.2.102, "mailProgramDeliveryInfo" Specifies the commands used for programmed mail delivery. Section 5.2.103, "mailQuota" Specifies the disk space allowed for the user's mail box. Section 5.2.104, "mailRoutingAddress" Contains a routing address to use when forwarding the mail from this entry's account to another messaging server. Section 5.2.148, "multiLineDescription" Contains a text description of the entry which spans more than one line. Section 5.2.342, "uid (userID)" Gives the defined account's user ID. Section 5.2.350, "userPassword" Stores the password with which the entry can access the account. 5.3.41.
mepManagedEntry The mepManagedEntry object class identifies an entry which was generated by an instance of the Managed Entries Plug-in. This object class is defined in Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.319 Table 5.78. Allowed Attributes Attribute Definition Section 5.2.112, "mepManagedBy" Gives the DN of the originating entry which corresponds to the managed entry. 5.3.42. mepOriginEntry The mepOriginEntry object class identifies an originating entry: an entry within a subtree monitored by an instance of the Managed Entries Plug-in for which the plug-in has created a corresponding managed entry. This object class is defined in Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.320 Table 5.79. Allowed Attributes Attribute Definition Section 5.2.113, "mepManagedEntry" Gives the DN of the managed entry which was created by the Managed Entries Plug-in instance and which corresponds to this originating entry. 5.3.43. mepTemplateEntry The mepTemplateEntry object class identifies an entry which is used as a template by an instance of the Managed Entries Plug-in to create the managed entries. This object class is defined in Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.321 Table 5.80. Allowed Attributes Attribute Definition Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Section 5.2.114, "mepMappedAttr" Contains an attribute-token pair that the plug-in uses to create an attribute in the managed entry with a value taken from the originating entry. Section 5.2.115, "mepRDNAttr" Specifies which attribute to use as the naming attribute in the managed entry. Section 5.2.116, "mepStaticAttr" Contains an attribute-value pair that will be used, with that specified value, in the managed entry. 5.3.44. netscapeCertificateServer The netscapeCertificateServer object class stores information about a Netscape certificate server. This object is defined in the schema for the Netscape Certificate Management System. Superior Class top OID 2.16.840.1.113730.3.2.18 Table 5.81. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. 5.3.45. netscapeDirectoryServer The netscapeDirectoryServer object class stores information about a Directory Server instance. This object is defined in the schema for the Netscape Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.23 Table 5.82. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. 5.3.46. NetscapeLinkedOrganization NetscapeLinkedOrganization is an auxiliary object class. This object is defined in the schema for the Netscape server suite. Superior Class top OID 1.3.6.1.4.1.1466.101.120.141 Table 5.83. Allowed Attributes Attribute Definition Section 5.2.294, "parentOrganization" Identifies the parent organization for the linked organization defined for the server suite. 5.3.47. netscapeMachineData The netscapeMachineData object class distinguishes between machine data and non-machine data. This object is defined in the schema for the Netscape Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.32 5.3.48. NetscapePreferences NetscapePreferences is an auxiliary object class which stores the user preferences. This object is defined by Netscape. Superior Class top OID 1.3.6.1.4.1.1466.101.120.142 Table 5.84.
Required Attributes Attribute Definition Section 5.2.303, "preferredLanguage" Gives the person's preferred written or spoken language. Section 5.2.304, "preferredLocale" Gives the person's preferred locale. A locale setting defines cultural or national settings like date formats and currencies. Section 5.2.305, "preferredTimeZone" Gives the person's preferred time zone. 5.3.49. netscapeReversiblePasswordObject netscapeReversiblePasswordObject is an auxiliary object class to store a password. This object is defined in the schema for the Netscape Web Server. Superior Class top OID 2.16.840.1.113730.3.2.154 Table 5.85. Allowed Attributes Attribute Definition Section 5.2.150, "netscapeReversiblePassword" Contains a password used for HTTP Digest/MD5 authentication. 5.3.50. netscapeServer The netscapeServer object class contains instance-specific information about a Netscape server and its installation. Superior Class top OID 2.16.840.1.113730.3.2.10 Table 5.86. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Table 5.87. Allowed Attributes Attribute Definition Section 5.2.5, "administratorContactInfo" Contains the contact information for the server administrator. Section 5.2.7, "adminUrl" Contains the URL for the Administration Server used by the instance. Section 5.2.37, "description" Gives a text description of the entry. Section 5.2.75, "installationTimeStamp" Contains the time that the server instance was installed. Section 5.2.317, "serverHostName" Contains the host name of the server on which the Directory Server instance is running. Section 5.2.318, "serverProductName" Contains the product name of the server type. Section 5.2.319, "serverRoot" Specifies the top directory where the server product is installed. Section 5.2.320, "serverVersionNumber" Contains the product version number. Section 5.2.350, "userPassword" Stores the password with which the entry can bind to the directory. 5.3.51. netscapeWebServer The netscapeWebServer object class identifies an installed Netscape Web Server. Superior Class top OID 2.16.840.1.113730.3.2.29 Table 5.88. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Section 5.2.214, "nsServerID" Contains the server's name or ID. Table 5.89. Allowed Attributes Attribute Definition Section 5.2.37, "description" Gives a text description of the entry. Section 5.2.216, "nsServerPort" Contains the server's port number. 5.3.52. newPilotPerson The newPilotPerson object class is a subclass of the person object class that allows additional attributes to be assigned to person entries. This object class inherits the Section 5.2.25, "cn (commonName)" and Section 5.2.329, "sn (surname)" attributes from the person object class. This object class is defined in the Internet White Pages Pilot. Superior Class person OID 0.9.2342.19200300.100.4.4 Table 5.90. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Section 5.2.329, "sn (surname)" Gives the person's family name or last name. Table 5.91. Allowed Attributes Attribute Definition Section 5.2.20, "businessCategory" Gives the type of business in which the entry is engaged.
Section 5.2.37, "description" Gives a text description of the entry. Section 5.2.51, "drink (favouriteDrink)" Gives the person's favorite drink. Section 5.2.62, "homePhone" Gives the person's home phone number. Section 5.2.63, "homePostalAddress" Gives the person's home mailing address. Section 5.2.83, "janetMailbox" Gives the person's email address; this is primarily for use in Great Britain or organizations which do not use RFC 822 mail addresses. Section 5.2.91, "mail" Contains the person's email address. Section 5.2.101, "mailPreferenceOption" Indicates the user's preference for including his name on mailing lists (electronic or physical). Section 5.2.131, "mobile" Gives the person's mobile phone number. Section 5.2.289, "organizationalStatus" Gives the common job category for a person's function. Section 5.2.290, "otherMailbox" Contains values for electronic mailbox types other than X.400 and RFC 822. Section 5.2.293, "pager" Gives the person's pager number. Section 5.2.295, "personalSignature" Contains the person's signature file. Section 5.2.296, "personalTitle" Gives the person's honorific. Section 5.2.302, "preferredDeliveryMethod" Shows the person's preferred method of contact or message delivery. Section 5.2.312, "roomNumber" Gives the room number where the person is located. Section 5.2.314, "secretary" Contains the DN (distinguished name) of the person's secretary or administrative assistant. Section 5.2.315, "seeAlso" Contains a URL to another entry or site with related information. Section 5.2.337, "telephoneNumber" Gives the telephone number for the entry. Section 5.2.342, "uid (userID)" Contains the person's user ID (usually his logon ID). Section 5.2.349, "userClass" Describes the type of computer user this entry is. Section 5.2.350, "userPassword" Stores the password with which the entry can bind to the directory. 5.3.53. nisMap This object class points to a NIS map. This object class is defined in RFC 2307, which defines object classes and attributes to use LDAP as a network information service. Note This object class is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. Superior Class top OID 1.3.6.1.1.1.2.13 Table 5.92. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.152, "nisMapName" Contains the NIS map name. Table 5.93. Allowed Attributes Attribute Definition Section 5.2.37, "description" Gives a text description of the entry. 5.3.54. nisNetgroup This object class contains a netgroup used within a NIS domain. Adding this object class allows administrators to use netgroups to control login and service authentication in NIS. This object class is defined in RFC 2307, which defines object classes and attributes to use LDAP as a network information service. Note This object class is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. Superior Class top OID 1.3.6.1.1.1.2.8 Table 5.94. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Table 5.95.
Allowed Attributes Attribute Definition Section 5.2.37, "description" Gives a text description of the entry. Section 5.2.108, "memberNisNetgroup" Merges the attribute values of another netgroup into the current one by listing the name of the merging netgroup. Section 5.2.153, "nisNetgroupTriple" Contains a user name ( ,bobby,example.com ) or a machine name ( shellserver1,,example.com ).
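For example, a netgroup entry containing the triples shown above might look like the following minimal sketch; the DN, names, and nested netgroup are placeholders.

dn: cn=engineering,ou=netgroup,dc=example,dc=com
objectclass: top
objectclass: nisNetgroup
cn: engineering
nisNetgroupTriple: (,bobby,example.com)
nisNetgroupTriple: (shellserver1,,example.com)
memberNisNetgroup: qa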
5.3.55. nisObject This object class contains information about an object in a NIS domain. This object class is defined in RFC 2307, which defines object classes and attributes to use LDAP as a network information service. Note This object class is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. Superior Class top OID 1.3.6.1.1.1.2.10 Table 5.96. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Section 5.2.151, "NisMapEntry" Identifies the NIS map entry. Section 5.2.152, "nisMapName" Contains the name of the NIS map. Table 5.97. Allowed Attributes Attribute Definition Section 5.2.37, "description" Gives a text description of the entry. 5.3.56. nsAdminConfig This object class stores the configuration parameters for the Administration Server. This object is defined for the Administration Services. Superior Class nsConfig OID nsAdminConfig-oid Table 5.98. Allowed Attributes Attribute Definition Section 5.2.155, "nsAdminAccessAddresses" Identifies the Administration Server IP addresses. Section 5.2.156, "nsAdminAccessHosts" Contains the Administration Server host name or a list of Administration Server host names. Section 5.2.158, "nsAdminCacheLifetime" Notes the length of the cache timeout period. Section 5.2.159, "nsAdminCgiWaitPid" Contains the PID of the CGI process the server is waiting for. Section 5.2.161, "nsAdminEnableEnduser" Sets whether to allow or disallow end user access to the Administration Server web services pages. Section 5.2.164, "nsAdminOneACLDir" Contains the path of the local ACL directory for the Administration Server. Section 5.2.166, "nsAdminUsers" Points to the file which contains the admin user info. 5.3.57. nsAdminConsoleUser This object class stores the configuration parameters for the Administration Server. This object is defined for the Administration Services. Superior Class top OID nsAdminConsoleUser-oid Table 5.99. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Table 5.100. Allowed Attributes Attribute Definition Section 5.2.206, "nsPreference" Stores preference information for console settings. 5.3.58. nsAdminDomain This object class stores user information to access Admin Console. This object is defined for the Administration Services. Superior Class organizationalUnit OID nsAdminDomain-oid Table 5.101. Allowed Attributes Attribute Definition Section 5.2.160, "nsAdminDomainName" Identifies the administration domain for the servers. 5.3.59. nsAdminGlobalParameters This object class stores the configuration parameters for the Administration Server. This object is defined for the Administration Services. Superior Class top OID nsAdminGlobalParameters-oid Table 5.102. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Table 5.103. Allowed Attributes Attribute Definition Section 5.2.162, "nsAdminEndUserHTMLIndex" Sets whether to allow or disallow end-user access to the HTML index pages. Section 5.2.202, "nsNickName" Gives the nickname for the application. 5.3.60. nsAdminGroup This object class stores group information for administrator users in the Administration Server. This object is defined for the Administration Services. Superior Class top OID nsAdminGroup-oid Table 5.104. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Table 5.105. Allowed Attributes Attribute Definition Section 5.2.37, "description" Gives a text description of the entry. Section 5.2.163, "nsAdminGroupName" Contains the name for the admin group. Section 5.2.165, "nsAdminSIEDN" Shows the DN of the server instance entry (SIE) for the Administration Server instance. Section 5.2.175, "nsConfigRoot" Gives the full path to the Administration Server instance's configuration directory. 5.3.61. nsAdminObject This object class contains information about an object used by Administration Server, such as a task. This object is defined for the Administration Services. Superior Class top OID nsAdminObject-oid Table 5.106. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Table 5.107. Allowed Attributes Attribute Definition Section 5.2.174, "nsClassname" Contains the class name associated with the task or resource editor for the Administration Server. Section 5.2.193, "nsJarfilename" Gives the name of the JAR file used by the Administration Server Console to access the object. 5.3.62. nsAdminResourceEditorExtension This object class contains an extension used by the Console Resource Editor. This object is defined for the Administration Services. Superior Class nsAdminObject OID nsAdminResourceEditorExtension-oid Table 5.108. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Table 5.109. Allowed Attributes Attribute Definition Section 5.2.157, "nsAdminAccountInfo" Contains information about the Administration Server account. Section 5.2.179, "nsDeleteclassname" Contains the name of a class to be deleted. 5.3.63. nsAdminServer This object class defines the Administration Server instance. This object is defined for the Administration Services. Superior Class top OID nsAdminServer-oid Table 5.110. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Section 5.2.214, "nsServerID" Contains the Directory Server ID, such as slapd-example. Table 5.111. Allowed Attributes Attribute Definition Section 5.2.37, "description" Gives a text description of the entry. 5.3.64. nsAIMpresence nsAIMpresence is an auxiliary object class which defines the status of an AOL instant messaging account. This object is defined for the Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.300 Table 5.112.
Allowed Attributes Attribute Definition Section 5.2.167, "nsAIMid" Contains the AIM user ID for the entry. Section 6.23, "nsAIMStatusGraphic" Contains a pointer to the graphic image which indicates the AIM account's status. Section 6.24, "nsAIMStatusText" Contains the text to indicate the AIM account's status. 5.3.65. nsApplication nsApplication defines an application or server entry. This is defined by Netscape. Superior Class top OID nsApplication-oid Table 5.113. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Defines the object classes for the entry. Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Table 5.114. Allowed Attributes Attribute Definition Section 5.2.37, "description" Gives a text description of the entry. Section 5.2.75, "installationTimeStamp" Contains the time that the server instance was installed. Section 5.2.171, "nsBuildNumber" Contains the build number for the server instance. Section 5.2.172, "nsBuildSecurity" Contains the level of security used to make the build. Section 5.2.186, "nsExpirationDate" Contains the date that the license for the application expires. Section 5.2.192, "nsInstalledLocation" For servers which are version 7.1 or older, shows the installation directory for the server. Section 5.2.194, "nsLdapSchemaVersion" Gives the version of the LDAP schema files used by the Directory Server. Section 5.2.202, "nsNickName" Gives the nickname for the application. Section 5.2.207, "nsProductName" Gives the name of the server product. Section 5.2.208, "nsProductVersion" Shows the version number of the server product. Section 5.2.209, "nsRevisionNumber" Contains the revision number (minor version) for the product. Section 5.2.211, "nsSerialNumber" Gives the serial number assigned to the server product. Section 5.2.215, "nsServerMigrationClassname" Gives the class to use to migrate a server instance. Section 5.2.213, "nsServerCreationClassname" Gives the class to use to create a server instance. Section 5.2.242, "nsVendor" Contains the name of the vendor who designed the server. 5.3.66. nsCertificateServer The nsCertificateServer object class stores information about a Red Hat Certificate System instance. This object is defined in the schema for the Certificate System. Superior Class top OID nsCertificateServer-oid Table 5.115. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.214, "nsServerID" Contains the server's name or ID. Table 5.116. Allowed Attributes Attribute Definition Section 5.2.173, "nsCertConfig" Contains configuration settings for a Red Hat Certificate System instance. Section 5.2.216, "nsServerPort" Contains the server's port number. Section 5.2.317, "serverHostName" Contains the host name of the server on which the Directory Server instance is running. 5.3.67. nsComplexRoleDefinition Any role that is not a simple role is, by definition, a complex role. This object class is defined by Directory Server. Superior Class nsRoleDefinition OID 2.16.840.1.113730.3.2.95 Table 5.117. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Table 5.118. Allowed Attributes Attribute Definition Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Section 5.2.37, "description" Gives a text description of the entry. 5.3.68. 
nsContainer Some entries do not define any specific entity, but they create a defined space within the directory tree as a parent entry for similar or related child entries. These are container entries, and they are identified by the nsContainer object class. Superior Class top OID 2.16.840.1.113730.3.2.104 Table 5.119. Required Attributes Attribute Definition objectClass Defines the object classes for the entry. cn Gives the common name of the entry. 5.3.69. nsCustomView The nsCustomView object class defines information about custom views of the Directory Server data in the Directory Server Console. This is defined for Administration Services. Superior Class nsAdminObject OID nsCustomView-oid Table 5.120. Allowed Attributes Attribute Definition Section 5.2.183, "nsDisplayName" Contains the name of the custom view setting profile. 5.3.70. nsDefaultObjectClasses nsDefaultObjectClasses sets default object classes to use when creating a new object of a certain type within the directory. This is defined for Administration Services. Superior Class top OID nsDefaultObjectClasses-oid Table 5.121. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Defines the object classes for the entry. Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Table 5.122. Allowed Attributes Attribute Definition Section 5.2.178, "nsDefaultObjectClass" Contains an object class to assign by default to an object type. 5.3.71. nsDirectoryInfo nsDirectoryInfo contains information about a directory instance. This is defined for Administration Services. Superior Class top OID nsDirectoryInfo-oid Table 5.123. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Defines the object classes for the entry. Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Table 5.124. Allowed Attributes Attribute Definition Section 5.2.169, "nsBindDN" Contains the bind DN defined for the server in its server instance entry. Section 5.2.170, "nsBindPassword" Contains the password for the bind identity in the SIE. Section 5.2.180, "nsDirectoryFailoverList" Contains a list of URLs of other Directory Server instances to use for failover support if the instance in nsDirectoryURL is unavailable. Section 5.2.181, "nsDirectoryInfoRef" Contains a reference to a distinguished name (DN) in the directory. Section 5.2.182, "nsDirectoryURL" Contains a URL to access the Directory Server instance. 5.3.72. nsDirectoryServer nsDirectoryServer is the defining object class for a Directory Server instance. This is defined for the Directory Server. Superior Class top OID nsDirectoryServer-oid Table 5.125. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Defines the object classes for the entry. Section 5.2.214, "nsServerID" Contains the server's name or ID. Table 5.126. Allowed Attributes Attribute Definition Section 5.2.168, "nsBaseDN" Contains the base DN for the server instance. Section 5.2.169, "nsBindDN" Contains the bind DN defined for the server in its server instance entry. Section 5.2.170, "nsBindPassword" Contains the password for the bind identity in the SIE. Section 5.2.210, "nsSecureServerPort" Contains the server's TLS port number. Section 5.2.216, "nsServerPort" Contains the server's port number. Section 5.2.317, "serverHostName" Contains the host name of the server on which the Directory Server instance is running.
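To show how these attributes fit together, the following is a minimal sketch of a server instance entry (SIE) that uses the nsDirectoryServer object class; the DN layout, host name, and port values are hypothetical illustrations, not values taken from this reference.

# Hypothetical SIE entry for a Directory Server instance named slapd-example
dn: cn=slapd-example, cn=Red Hat Directory Server, cn=Server Group, cn=ldap.example.com, ou=example.com, o=NetscapeRoot
objectClass: top
objectClass: nsDirectoryServer
nsServerID: slapd-example
nsBaseDN: dc=example,dc=com
nsBindDN: cn=Directory Manager
serverHostName: ldap.example.com
nsServerPort: 389
nsSecureServerPort: 636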
5.3.73. nsFilteredRoleDefinition The nsFilteredRoleDefinition object class defines how entries are assigned to the role, depending upon the attributes contained by each entry. This object class is defined in Directory Server. Superior Class nsComplexRoleDefinition OID 2.16.840.1.113730.3.2.97 Table 5.127. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 6.39, "nsRoleFilter" Specifies the filter used to identify entries in the filtered role. Table 5.128. Allowed Attributes Attribute Definition Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Section 5.2.37, "description" Gives a text description of the entry. 5.3.74. nsGlobalParameters The nsGlobalParameters object class contains global preference settings. This object class is defined in Administrative Services. Superior Class top OID nsGlobalParameters-oid Table 5.129. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Table 5.130. Allowed Attributes Attribute Definition Section 5.2.187, "nsGroupRDNComponent" Defines the default attribute type used in the RDN of the group entry. Section 5.2.227, "nsUniqueAttribute" Defines a unique attribute in the preferences. Section 5.2.228, "nsUserIDFormat" Sets the format to generate the user ID from the givenname and sn attributes. Section 5.2.229, "nsUserRDNComponent" Sets the attribute type to use as the naming component in the user DN. nsNYR Not used. nsWellKnownJarfiles Not used. 5.3.75. nsHost The nsHost object class stores information about the server host. This object class is defined in Administrative Services. Superior Class top OID nsHost-oid Table 5.131. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Table 5.132. Allowed Attributes Attribute Definition Section 5.2.37, "description" Gives a text description of the entry. Section 5.2.87, "l (localityName)" Gives the city or geographical location of the entry. Section 5.2.188, "nsHardwarePlatform" Identifies the hardware platform for the host on which the Directory Server instance is running. This is the same information as running uname -m. Section 5.2.190, "nsHostLocation" Gives the location of the server host. Section 5.2.204, "nsOsVersion" Contains the operating system version of the server host. Section 5.2.317, "serverHostName" Contains the host name of the server on which the Directory Server instance is running. 5.3.76. nsICQpresence nsICQpresence is an auxiliary object class which defines the status of an ICQ messaging account. This object is defined for the Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.301 Table 5.133. Allowed Attributes Attribute Definition Section 5.2.191, "nsICQid" Contains the ICQ user ID for the entry. Section 6.28, "nsICQStatusGraphic" Contains a pointer to the graphic image which indicates the ICQ account's status. Section 6.29, "nsICQStatusText" Contains the text to indicate the ICQ account's status. 5.3.77. nsLicenseUser The nsLicenseUser object class tracks licenses for servers that are licensed on a per-client basis. nsLicenseUser is intended to be used with the inetOrgPerson object class. You can manage the contents of this object class through the Users and Groups area of the Administration Server.
This object class is defined in the Administration Server schema. Superior Class top OID 2.16.840.1.113730.3.2.7 Table 5.134. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Table 5.135. Allowed Attributes Attribute Definition Section 5.2.195, "nsLicensedFor" Identifies the server that the user is licensed to use. Section 5.2.196, "nsLicenseEndTime" Reserved for future use. Section 5.2.197, "nsLicenseStartTime" Reserved for future use. 5.3.78. nsManagedRoleDefinition The nsManagedRoleDefinition object class specifies the member assignments of a role to an explicit, enumerated list of members. This object class is defined in Directory Server. Superior Class nsComplexRoleDefinition OID 2.16.840.1.113730.3.2.96 Table 5.136. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Table 5.137. Allowed Attributes Attribute Definition Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Section 5.2.37, "description" Gives a text description of the entry. 5.3.79. nsMessagingServerUser nsMessagingServerUser is an auxiliary object class that describes a messaging server user. This object class is defined for Netscape Messaging Server. Superior Class top OID 2.16.840.1.113730.3.2.37 Table 5.138. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes for the entry. Table 5.139. Allowed Attributes Attribute Definition Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Section 5.2.92, "mailAccessDomain" Contains the domain from which the user can access the messaging server. Section 5.2.93, "mailAlternateAddress" Contains secondary email addresses for the user. Section 5.2.94, "mailAutoReplyMode" Specifies whether autoreply mode for the account is enabled. Section 5.2.95, "mailAutoReplyText" Contains the text used for automatic reply emails. Section 5.2.96, "mailDeliveryOption" Specifies the mail delivery mechanism to be used for the mail user. Section 5.2.98, "mailForwardingAddress" Contains an address to which the user's mail is forwarded. Section 5.2.100, "mailMessageStore" Specifies the location of the user's mail box. Section 5.2.102, "mailProgramDeliveryInfo" Specifies the commands used for programmed mail delivery. Section 5.2.103, "mailQuota" Specifies the disk space allowed for the user's mail box. Section 5.2.199, "nsmsgDisallowAccess" Sets limits on the mail protocols available to the user. Section 5.2.200, "nsmsgNumMsgQuota" Specifies the number of messages allowed for the user's mail box. Section 5.2.246, "nswmExtendedUserPrefs" Stores the extended preferences for the user. Section 5.2.353, "vacationEndDate" Contains the end date for a vacation period. Section 5.2.354, "vacationStartDate" Contains the start date for a vacation period. 5.3.80. nsMSNpresence nsMSNpresence is an auxiliary object class which defines the status of an MSN instant messaging account. This object is defined for the Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.303 Table 5.140. Allowed Attributes Attribute Definition Section 5.2.201, "nsMSNid" Contains the MSN user ID for the entry.
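Because these presence classes are auxiliary, they are added on top of an existing user entry rather than used on their own. The following is a hypothetical sketch of a user entry extended with nsMSNpresence; the DN and all values are invented for illustration.

# Hypothetical user entry extended with the auxiliary nsMSNpresence class
dn: uid=jsmith,ou=People,dc=example,dc=com
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
objectClass: nsMSNpresence
cn: John Smith
sn: Smith
uid: jsmith
nsMSNid: jsmith@example.com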
5.3.81. nsNestedRoleDefinition The nsNestedRoleDefinition object class specifies one or more roles, of any type, that are included as members within the role. This object class is defined in Directory Server. Superior Class nsComplexRoleDefinition OID 2.16.840.1.113730.3.2.98 Table 5.141. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 6.38, "nsRoleDn" Specifies the roles assigned to an entry. Table 5.142. Allowed Attributes Attribute Definition Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Section 5.2.37, "description" Gives a text description of the entry. 5.3.82. nsResourceRef The nsResourceRef object class configures a resource reference. This object class is defined in the Administration Services. Superior Class top OID nsResourceRef-oid Table 5.143. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Table 5.144. Allowed Attributes Attribute Definition Section 5.2.315, "seeAlso" Contains a URL to another entry or site with related information. 5.3.83. nsRoleDefinition All role definition object classes inherit from the nsRoleDefinition object class. This object class is defined by Directory Server. Superior Class LDAPsubentry OID 2.16.840.1.113730.3.2.93 Table 5.145. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Table 5.146. Allowed Attributes Attribute Definition Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Section 5.2.37, "description" Gives a text description of the entry. 5.3.84. nsSimpleRoleDefinition Roles containing this object class are called simple roles because they have a deliberately limited flexibility, which makes it easy to: Enumerate the members of a role. Determine whether a given entry possesses a particular role. Enumerate all the roles possessed by a given entry. Assign a particular role to a given entry. Remove a particular role from a given entry. This object class is defined by Directory Server. Superior Class nsRoleDefinition OID 2.16.840.1.113730.3.2.94 Table 5.147. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Table 5.148. Allowed Attributes Attribute Definition Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Section 5.2.37, "description" Gives a text description of the entry. 5.3.85. nsSNMP This object class defines the configuration for the SNMP plug-in object used by the Directory Server. This object class is defined in Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.41 Table 5.149. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Section 5.2.220, "nsSNMPEnabled" Sets whether SNMP is enabled for the Directory Server instance. Table 5.150. Allowed Attributes Attribute Definition Section 5.2.218, "nsSNMPContact" Contains the contact information provided by the SNMP agent. Section 5.2.219, "nsSNMPDescription" Contains a text description of the SNMP setup. Section 5.2.221, "nsSNMPLocation" Contains the location information or configuration for the SNMP agent. Section 5.2.222, "nsSNMPMasterHost" Contains the host name for the server where the SNMP master agent is located. Section 5.2.223, "nsSNMPMasterPort" Contains the port to access the SNMP subagent. Section 5.2.224, "nsSNMPOrganization" Contains the organization name or information provided by the SNMP service.
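As an illustration, an SNMP configuration entry built from this object class might look like the following minimal sketch; the DN follows the usual cn=SNMP,cn=config layout, and all attribute values shown are hypothetical.

# Hypothetical SNMP plug-in configuration entry
dn: cn=SNMP,cn=config
objectClass: top
objectClass: nsSNMP
cn: SNMP
nsSNMPEnabled: on
nsSNMPOrganization: Example Corporation
nsSNMPLocation: Mountain View, CA
nsSNMPContact: admin@example.com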
5.3.86. nsTask This object class defines the configuration for tasks performed by the Directory Server. This object class is defined for the Administrative Services. Superior Class top OID nsTask-oid Table 5.151. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Table 5.152. Allowed Attributes Attribute Definition Section 5.2.185, "nsExecRef" Contains a reference to the program which will perform the task. Section 5.2.189, "nsHelpRef" Contains a reference to an online (HTML) help file associated with the task window. Section 5.2.198, "nsLogSuppress" Sets whether to suppress logging for the task. Section 5.2.226, "nsTaskLabel" Contains a label associated with the task in the Console. 5.3.87. nsTaskGroup This object class defines the information for a group of tasks in the Console. This object class is defined for the Administrative Services. Superior Class top OID nsTaskGroup-oid Table 5.153. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Table 5.154. Allowed Attributes Attribute Definition Section 5.2.226, "nsTaskLabel" Contains a label associated with the task in the Console. 5.3.88. nsTopologyCustomView This object class configures the topology views used for the profile in the Console. This object class is defined for the Administrative Services. Superior Class nsCustomView OID nsTopologyCustomView-oid Table 5.155. Required Attributes Attribute Definition Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Table 5.156. Allowed Attributes Attribute Definition Section 5.2.243, "nsViewConfiguration" Contains the view configuration to use in the Console. 5.3.89. nsTopologyPlugin This object class configures the topology plug-in used to set views in the Console. This object class is defined for the Administrative Services. Superior Class nsAdminObject OID nsTopologyPlugin-oid 5.3.90. nsValueItem This object class defines a value item object configuration, which is used to specify information that is dependent on the value type of an entry. A value item relates to the allowed attribute value syntax for an entry attribute, such as binary or case-sensitive string. This object class is defined in Netscape Servers - Value Item. Superior Class top OID 2.16.840.1.113730.3.2.45 Table 5.157. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Table 5.158. Allowed Attributes Attribute Definition Section 5.2.230, "nsValueBin" Contains information or operations related to the binary value type. Section 5.2.231, "nsValueCES" Contains information or operations related to the case-exact string (CES) value type. Section 5.2.232, "nsValueCIS" Contains information or operations related to the case-insensitive (CIS) value type. Section 5.2.233, "nsValueDefault" Sets the default value type to use for an attribute or configuration parameter. Section 5.2.234, "nsValueDescription" Gives a text description of the value item setting. Section 5.2.235, "nsValueDN" Contains information or operations related to the DN value type. Section 5.2.236, "nsValueFlags" Sets flags for the value item object.
Section 5.2.237, "nsValueHelpURL" Contains a reference to an online (HTML) help file associated with the value item object. Section 5.2.238, "nsValueInt" Contains information or operations related to the integer value type. Section 5.2.239, "nsValueSyntax" Defines the syntax to use for the value item object. Section 5.2.240, "nsValueTel" Contains information or operations related to the telephone string value type. Section 5.2.241, "nsValueType" Sets which value type to apply. 5.3.91. nsView This object class is used for a view entry in the directory tree. This object class is defined in Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.304 Table 5.159. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Table 5.160. Allowed Attributes Attribute Definition Section 5.2.37, "description" Gives a text description of the entry. Section 5.2.244, "nsViewFilter" Identifies the filter used by the view plug-in. 5.3.92. nsYIMpresence nsYIMpresence is an auxiliary object class which defines the status of a Yahoo instant messaging account. This object is defined for the Directory Server. Superior Class top OID 2.16.840.1.113730.3.2.302 Table 5.161. Allowed Attributes Attribute Definition Section 5.2.247, "nsYIMid" Contains the Yahoo user ID for the entry. Section 6.45, "nsYIMStatusGraphic" Contains a pointer to the graphic image which indicates the Yahoo account's status. Section 6.46, "nsYIMStatusText" Contains the text to indicate the Yahoo account's status. 5.3.93. ntGroup The ntGroup object class holds data for a group entry stored in a Windows Active Directory server. Several Directory Server attributes correspond directly to or are mapped to match Windows group attributes. When you create a new group in the Directory Server that is to be synchronized with a Windows server group, Directory Server attributes are assigned to the Windows entry. These attributes may then be added, modified, or deleted in the entry through either directory service. This object class is defined in Netscape NT Synchronization. Superior Class top OID 2.16.840.1.113730.3.2.9 Table 5.162. Required Object Classes Object Class Definition Section 5.3.39, "mailGroup" Allows the mail attribute to be synchronized between Windows and Directory Server groups. Table 5.163. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.263, "ntUserDomainId" Contains the Windows domain login ID for the group account. Table 5.164. Allowed Attributes Attribute Definition Section 5.2.25, "cn (commonName)" Gives the common name of the entry; this corresponds to the Windows name field. Section 5.2.37, "description" Gives a text description of the entry; corresponds to the Windows comment field. Section 5.2.87, "l (localityName)" Gives the city or geographical location of the entry. Section 5.2.106, "member" Specifies the members of the group. Section 5.2.249, "ntGroupCreateNewGroup" Specifies whether a Windows account should be created when an entry is created in the Directory Server. Section 5.2.250, "ntGroupDeleteGroup" Specifies whether a Windows account should be deleted when an entry is deleted in the Directory Server. Section 5.2.251, "ntGroupDomainId" Gives the domain ID string for the group. Section 5.2.253, "ntGroupType" Defines what kind of Windows domain group the entry is.
Section 5.2.254, "ntUniqueId" Contains a generated ID number used by the server for operations and identification. Section 5.2.291, "ou (organizationalUnitName)" Gives the organizational unit or division to which the entry belongs. Section 5.2.315, "seeAlso" Contains a URL to another entry or site with related information. 5.3.94. ntUser The ntUser entry holds data for a user entry stored in a Windows Active Directory server. Several Directory Server attributes correspond directly to or are mapped to match Windows user account fields. When you create a new person entry in the Directory Server that is to be synchronized with a Windows server, Directory Server attributes are assigned to Windows user account fields. These attributes may then be added, modified, or deleted in the entry through either directory service. This object class is defined in Netscape NT Synchronization. Superior Class top OID 2.16.840.1.113730.3.2.8 Table 5.165. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.25, "cn (commonName)" Gives the common name of the entry; this corresponds to the Windows name field. Section 5.2.263, "ntUserDomainId" Contains the Windows domain login ID for the user account. Table 5.166. Allowed Attributes Attribute Definition Section 5.2.37, "description" Gives a text description of the entry; corresponds to the Windows comment field. Section 5.2.38, "destinationIndicator" Gives the country and city associated with the entry; this was once required to provide public telegram service. Section 5.2.56, "fax (facsimileTelephoneNumber)" Gives the fax number for the user. Section 5.2.60, "givenName" Contains the person's first name. Section 5.2.62, "homePhone" Gives the person's home phone number. Section 5.2.63, "homePostalAddress" Gives the person's home mailing address. Section 5.2.74, "initials" Gives the person's initials. Section 5.2.87, "l (localityName)" Gives the city or geographical location of the entry. Section 5.2.91, "mail" Contains the person's email address. Section 5.2.105, "manager" Contains the DN (distinguished name) of the direct supervisor of the person entry. Section 5.2.131, "mobile" Gives the person's mobile phone number. Section 5.2.255, "ntUserAcctExpires" Identifies when the user's Windows account will expire. Section 5.2.258, "ntUserCodePage" Gives the user's code page. Section 5.2.261, "ntUserCreateNewAccount" Specifies whether a Windows account should be created when this entry is created in the Directory Server. Section 5.2.262, "ntUserDeleteAccount" Specifies whether a Windows account should be deleted when this entry is deleted in the Directory Server. Section 5.2.265, "ntUserHomeDir" Gives the path to the user's home directory. Section 5.2.267, "ntUserLastLogoff" Gives the time of the user's last logoff from the Windows server. Section 5.2.268, "ntUserLastLogon" Gives the time of the user's last logon to the Windows server. Section 5.2.271, "ntUserMaxStorage" Shows the maximum disk space available to the user in the Windows server. Section 5.2.273, "ntUserParms" Contains a Unicode string reserved for use by applications. Section 5.2.277, "ntUserProfile" Contains the path to the user's Windows profile. Section 5.2.278, "ntUserScriptPath" Contains the path to the user's Windows login script. Section 5.2.282, "ntUserWorkstations" Contains a list of Windows workstations from which the user is allowed to log into the Windows domain. 
Section 5.2.283, "o (organizationName)" Gives the organization to which the entry belongs. Section 5.2.291, "ou (organizationalUnitName)" Gives the organizational unit or division to which the entry belongs. Section 5.2.293, "pager" Gives the person's pager number. Section 5.2.299, "postalAddress" Contains the mailing address for the entry. Section 5.2.300, "postalCode" Gives the postal code for the entry, such as the zip code in the United States. Section 5.2.301, "postOfficeBox" Gives the post office box number for the entry. Section 5.2.310, "registeredAddress" Gives a postal address suitable to receive expedited documents when the recipient must verify delivery. Section 5.2.315, "seeAlso" Contains a URL to another entry or site with related information. Section 5.2.329, "sn (surname)" Gives the person's family name or last name. Section 5.2.330, "st (stateOrProvinceName)" Gives the state or province where the person is located. Section 5.2.331, "street" Gives the street name and address number for the person's physical location. Section 5.2.337, "telephoneNumber" Gives the telephone number for the entry. Section 5.2.338, "teletexTerminalIdentifier" Gives the identifier for the person's teletex terminal. Section 5.2.339, "telexNumber" Gives the telex number associated with the entry. Section 5.2.340, "title" Shows the person's job title. Section 5.2.348, "userCertificate" Stores a user's certificate in cleartext (not used). Section 5.2.355, "x121Address" Gives the X.121 address for the entry. 5.3.95. oncRpc The oncRpc object class defines an abstraction of an Open Network Computing Remote Procedure Call (ONC RPC). This object class is defined in RFC 2307. Note This object class is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. Superior Class top OID 1.3.6.1.1.1.2.5 Table 5.167. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Defines the object classes for the entry. Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Section 5.2.288, "oncRpcNumber" Contains part of the RPC map and stores the RPC number for UNIX RPCs. Table 5.168. Allowed Attributes Attribute Definition Section 5.2.37, "description" Gives a text description of the entry. 5.3.96. organization The organization object class defines entries that represent organizations. An organization is generally assumed to be a large, relatively static grouping within a larger corporation or enterprise. This object class is defined in RFC 2256. Superior Class top OID 2.5.6.4 Table 5.169. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.283, "o (organizationName)" Gives the organization to which the entry belongs. Table 5.170. Allowed Attributes Attribute Definition Section 5.2.20, "businessCategory" Gives the type of business in which the entry is engaged. Section 5.2.37, "description" Gives a text description of the entry. Section 5.2.38, "destinationIndicator" Gives the country and city associated with the entry; this was once required to provide public telegram service. Section 5.2.56, "fax (facsimileTelephoneNumber)" Contains the fax number for the entry. Section 5.2.76, "internationalISDNNumber" Contains the ISDN number for the entry.
Section 5.2.87, "l (localityName)" Gives the city or geographical location of the entry. Section 5.2.298, "physicalDeliveryOfficeName" Gives a location where physical deliveries can be made. Section 5.2.299, "postalAddress" Contains the mailing address for the entry. Section 5.2.300, "postalCode" Gives the postal code for the entry, such as the zip code in the United States. Section 5.2.301, "postOfficeBox" Gives the post office box number for the entry. Section 5.2.302, "preferredDeliveryMethod" Shows the preferred method of contact or message delivery for the entry. Section 5.2.310, "registeredAddress" Gives a postal address suitable to receive expedited documents when the recipient must verify delivery. Section 5.2.313, "searchGuide" Specifies information for suggested search criteria when using the entry as the base object in the directory tree for a search. Section 5.2.315, "seeAlso" Contains a URL to another entry or site with related information. Section 5.2.330, "st (stateOrProvinceName)" Gives the state or province where the person is located. Section 5.2.331, "street" Gives the street name and number for the person's physical location. Section 5.2.337, "telephoneNumber" Gives the telephone number of the person responsible for the organization. Section 5.2.338, "teletexTerminalIdentifier" Gives the ID for an entry's teletex terminal. Section 5.2.339, "telexNumber" Gives the telex number associated with the entry. Section 5.2.350, "userPassword" Gives the password with which the entry can bind to the directory. Section 5.2.355, "x121Address" Gives the X.121 address for the entry. 5.3.97. organizationalPerson The organizationalPerson object class defines entries for people employed or affiliated with the organization. This object class inherits the Section 5.2.25, "cn (commonName)" and Section 5.2.329, "sn (surname)" attributes from the person object class. This object class is defined in RFC 2256 . Superior Class person OID 2.5.6.7 Table 5.171. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Section 5.2.329, "sn (surname)" Gives the person's family name or last name. Table 5.172. Allowed Attributes Attribute Definition Section 5.2.37, "description" Gives a text description of the entry. Section 5.2.38, "destinationIndicator" Gives the country and city associated with the entry; this was once required to provide public telegram service. Section 5.2.56, "fax (facsimileTelephoneNumber)" Contains the fax number for the entry. Section 5.2.76, "internationalISDNNumber" Contains the ISDN number for the entry. Section 5.2.87, "l (localityName)" Gives the city or geographical location of the entry. Section 5.2.291, "ou (organizationalUnitName)" Gives the organizational unit or division to which the entry belongs. Section 5.2.298, "physicalDeliveryOfficeName" Gives a location where physical deliveries can be made. Section 5.2.299, "postalAddress" Contains the mailing address for the entry. Section 5.2.300, "postalCode" Gives the postal code for the entry, such as the zip code in the United States. Section 5.2.301, "postOfficeBox" Gives the post office box number for the entry. Section 5.2.302, "preferredDeliveryMethod" Shows the person's preferred method of contact or message delivery. Section 5.2.310, "registeredAddress" Gives a postal address suitable to receive expedited documents when the recipient must verify delivery. 
Section 5.2.315, "seeAlso" Contains a URL to another entry or site with related information. Section 5.2.330, "st (stateOrProvinceName)" Gives the state or province where the person is located. Section 5.2.331, "street" Gives the street name and number for the person's physical location. Section 5.2.337, "telephoneNumber" Gives the telephone number for the entry. Section 5.2.338, "teletexTerminalIdentifier" Gives the ID for an entry's teletex terminal. Section 5.2.339, "telexNumber" Gives the telex number associated with the entry. Section 5.2.340, "title" Shows the person's job title. Section 5.2.350, "userPassword" Stores the password with which the entry can bind to the directory. Section 5.2.355, "x121Address" Gives the X.121 address for the entry. 5.3.98. organizationalRole The organizationalRole object class is used to define entries for roles held by people within an organization. This object class is defined in RFC 2256 . Superior Class top OID 2.5.6.8 Table 5.173. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Table 5.174. Allowed Attributes Attribute Definition Section 5.2.37, "description" Gives a text description of the entry. Section 5.2.38, "destinationIndicator" Gives the country and city associated with the entry; this was once required to provide public telegram service. Section 5.2.56, "fax (facsimileTelephoneNumber)" Contains the fax number for the entry. Section 5.2.76, "internationalISDNNumber" Contains the ISDN number for the entry. Section 5.2.87, "l (localityName)" Gives the city or geographical location of the entry. Section 5.2.291, "ou (organizationalUnitName)" Gives the organizational unit or division to which the entry belongs. Section 5.2.298, "physicalDeliveryOfficeName" Gives a location where physical deliveries can be made. Section 5.2.299, "postalAddress" Contains the mailing address for the entry. Section 5.2.300, "postalCode" Gives the postal code for the entry, such as the zip code in the United States. Section 5.2.301, "postOfficeBox" Gives the post office box number for the entry. Section 5.2.302, "preferredDeliveryMethod" Shows the role's preferred method of contact or message delivery. Section 5.2.310, "registeredAddress" Gives a postal address suitable to receive expedited documents when the recipient must verify delivery. Section 5.2.311, "roleOccupant" Contains the DN (distinguished name) of the person in the role. Section 5.2.315, "seeAlso" Contains a URL to another entry or site with related information. Section 5.2.330, "st (stateOrProvinceName)" Gives the state or province where the entry is located. Section 5.2.331, "street" Gives the street name and number for the role's physical location. Section 5.2.337, "telephoneNumber" Gives the telephone number for the entry. Section 5.2.338, "teletexTerminalIdentifier" Gives the ID for an entry's teletex terminal. Section 5.2.339, "telexNumber" Gives the telex number associated with the entry. Section 5.2.355, "x121Address" Gives the X.121 address for the entry. 5.3.99. organizationalUnit The organizationalUnit object class defines entries that represent organizational units , generally understood to be a relatively static grouping within a larger organization. This object class is defined in RFC 2256 . Superior Class top OID 2.5.6.5 Table 5.175. 
Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.291, "ou (organizationalUnitName)" Gives the organizational unit or division to which the entry belongs. Table 5.176. Allowed Attributes Attribute Definition Section 5.2.20, "businessCategory" Gives the type of business in which the entry is engaged. Section 5.2.37, "description" Gives a text description of the entry. Section 5.2.38, "destinationIndicator" Gives the country and city associated with the entry; this was once required to provide public telegram service. Section 5.2.56, "fax (facsimileTelephoneNumber)" Contains the fax number for the entry. Section 5.2.76, "internationalISDNNumber" Contains the ISDN number for the entry. Section 5.2.87, "l (localityName)" Gives the city or geographical location of the entry. Section 5.2.298, "physicalDeliveryOfficeName" Gives a location where physical deliveries can be made. Section 5.2.299, "postalAddress" Contains the mailing address for the entry. Section 5.2.300, "postalCode" Gives the postal code for the entry, such as the zip code in the United States. Section 5.2.301, "postOfficeBox" Gives the post office box number for the entry. Section 5.2.302, "preferredDeliveryMethod" Gives the preferred method of being contacted. Section 5.2.310, "registeredAddress" Gives a postal address suitable to receive expedited documents when the recipient must verify delivery. Section 5.2.313, "searchGuide" Specifies information for suggested search criteria when using the entry as the base object in the directory tree for a search. Section 5.2.315, "seeAlso" Contains a URL to another entry or site with related information. Section 5.2.330, "st (stateOrProvinceName)" Gives the state or province where the person is located. Section 5.2.331, "street" Gives the street name and number for the entry's physical location. Section 5.2.337, "telephoneNumber" Gives the telephone number for the entry. Section 5.2.338, "teletexTerminalIdentifier" Gives the ID for an entry's teletex terminal. Section 5.2.339, "telexNumber" Gives the telex number associated with the entry. Section 5.2.350, "userPassword" Stores the password with which the entry can bind to the directory. Section 5.2.355, "x121Address" Gives the X.121 address for the entry. 5.3.100. person The person object class represents entries for generic people. This is the base object class for the organizationalPerson object class. This object class is defined in RFC 2256. Superior Class top OID 2.5.6.6 Table 5.177. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Section 5.2.329, "sn (surname)" Gives the person's family name or last name. Table 5.178. Allowed Attributes Attribute Definition Section 5.2.37, "description" Gives a text description of the entry. Section 5.2.315, "seeAlso" Contains a URL to another entry or site with related information. Section 5.2.337, "telephoneNumber" Gives the telephone number for the entry. Section 5.2.350, "userPassword" Stores the password with which the entry can bind to the directory.
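As a simple illustration, a minimal entry built only from the person object class could look like the following hypothetical sketch; the DN and values are invented.

# Hypothetical entry using only the person object class
dn: cn=Barbara Jensen,ou=People,dc=example,dc=com
objectClass: top
objectClass: person
cn: Barbara Jensen
sn: Jensen
telephoneNumber: 415-555-2233
description: Quality control inspector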
5.3.101. pilotObject The pilotObject object class is a subclass that allows additional attributes to be assigned to entries of all other object classes. This object class is defined in RFC 1274. Superior Class top OID 0.9.2342.19200300.100.4.3 Table 5.179. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Table 5.180. Allowed Attributes Attribute Definition Section 5.2.12, "audio" Stores a sound file in a binary format. Section 5.2.40, "dITRedirect" Contains the DN (distinguished name) of the entry to use as a redirect for the entry. Section 5.2.73, "info" Contains information about the entry. Section 5.2.84, "jpegPhoto" Stores a JPG image. Section 6.13, "lastModifiedBy" Gives the DN (distinguished name) of the last user who modified the entry. Section 6.14, "lastModifiedTime" Gives the time the object was most recently modified. Section 5.2.105, "manager" Gives the DN (distinguished name) of the entry's manager. Section 5.2.297, "photo" Stores a photo in binary format. Section 5.2.344, "uniqueIdentifier" Distinguishes between two entries when a distinguished name has been reused. 5.3.102. pilotOrganization The pilotOrganization object class is a subclass used to add attributes to organization and organizationalUnit object class entries. This object class is defined in RFC 1274. Superior Class top OID 0.9.2342.19200300.100.4.20 Table 5.181. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.283, "o (organizationName)" Gives the organization to which the entry belongs. Section 5.2.291, "ou (organizationalUnitName)" Gives the organizational unit or division to which the entry belongs. Table 5.182. Allowed Attributes Attribute Definition Section 5.2.19, "buildingName" Gives the name of the building where the entry is located. Section 5.2.20, "businessCategory" Gives the type of business in which the entry is engaged. Section 5.2.37, "description" Gives a text description of the entry. Section 5.2.38, "destinationIndicator" Gives the country and city associated with the entry; this was once required to provide public telegram service. Section 5.2.56, "fax (facsimileTelephoneNumber)" Contains the fax number for the entry. Section 5.2.76, "internationalISDNNumber" Contains the ISDN number for the entry. Section 5.2.87, "l (localityName)" Gives the city or geographical location of the entry. Section 5.2.298, "physicalDeliveryOfficeName" Gives a location where physical deliveries can be made. Section 5.2.299, "postalAddress" Contains the mailing address for the entry. Section 5.2.300, "postalCode" Gives the postal code for the entry, such as the zip code in the United States. Section 5.2.301, "postOfficeBox" Gives the post office box number for the entry. Section 5.2.302, "preferredDeliveryMethod" Gives the preferred method of being contacted. Section 5.2.310, "registeredAddress" Gives a postal address suitable to receive expedited documents when the recipient must verify delivery. Section 5.2.313, "searchGuide" Specifies information for suggested search criteria when using the entry as the base object in the directory tree for a search. Section 5.2.315, "seeAlso" Contains a URL to another entry or site with related information. Section 5.2.330, "st (stateOrProvinceName)" Gives the state or province where the person is located. Section 5.2.331, "street" Gives the street name and address number for the person's physical location. Section 5.2.337, "telephoneNumber" Gives the telephone number for the entry. Section 5.2.338, "teletexTerminalIdentifier" Gives the ID for an entry's teletex terminal. Section 5.2.339, "telexNumber" Gives the telex number associated with the entry.
Section 5.2.350, "userPassword" Stores the password with which the entry can bind to the directory. Section 5.2.355, "x121Address" Gives the X.121 address for the entry. 5.3.103. pkiCA The pkiCA auxiliary object class contains required or available certificates that are configured for a certificate authority. This object class is defined in RFC 4523, which defines object classes and attributes for LDAP to use to manage X.509 certificates and related certificate services. Superior Class top OID 2.5.6.22 Table 5.183. Allowed Attributes Attribute Definition Section 5.2.14, "authorityRevocationList" Contains a list of revoked CA certificates. Section 5.2.22, "cACertificate" Contains a CA certificate. Section 5.2.24, "certificateRevocationList" Contains a list of certificates that have been revoked. Section 5.2.33, "crossCertificatePair" Contains a pair of certificates that are used to cross-certify a pair of CAs in a FBCA-style bridge CA configuration. 5.3.104. pkiUser The pkiUser auxiliary object class contains required certificates for a user or client that connects to a certificate authority or element in the public key infrastructure. This object class is defined in RFC 4523, which defines object classes and attributes for LDAP to use to manage X.509 certificates and related certificate services. Superior Class top OID 2.5.6.21 Table 5.184. Allowed Attributes Attribute Definition Section 5.2.348, "userCertificate" Stores a user's certificate, usually in binary form. 5.3.105. posixAccount The posixAccount object class defines network accounts which use POSIX attributes. This object class is defined in RFC 2307, which defines object classes and attributes to use LDAP as a network information service. Note This object class is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. Superior Class top OID 1.3.6.1.1.1.2.0 Table 5.185. Required Attributes Attribute Definition Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Section 5.2.59, "gidNumber" Contains a unique numeric identifier for a group entry or to identify the group for a user entry, analogous to the group number in Unix. Section 5.2.61, "homeDirectory" Contains the path to the user's home directory. Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.342, "uid (userID)" Gives the defined account's user ID. Section 5.2.343, "uidNumber" Contains a unique numeric identifier for a user entry, analogous to the user number in Unix. Table 5.186. Allowed Attributes Attribute Definition Section 5.2.37, "description" Gives a text description of the entry. Section 5.2.57, "gecos" Used to determine the GECOS field for the user; this is based on a common name, with additional information embedded. Section 5.2.89, "loginShell" Contains the path to the user's login shell, which is launched automatically when the user logs in. Section 5.2.350, "userPassword" Stores the password with which the entry can bind to the directory.
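To show how the required attributes combine, here is a hypothetical sketch of a POSIX user entry. Under RFC 2307, posixAccount is an auxiliary class, so the sketch pairs it with the account object class as the structural class; all names and numeric IDs are invented.

# Hypothetical POSIX user entry
dn: uid=jsmith,ou=People,dc=example,dc=com
objectClass: top
objectClass: account
objectClass: posixAccount
uid: jsmith
cn: John Smith
uidNumber: 1001
gidNumber: 100
homeDirectory: /home/jsmith
loginShell: /bin/bash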
5.3.106. posixGroup The posixGroup object class defines a group of network accounts which use POSIX attributes. This object class is defined in RFC 2307, which defines object classes and attributes to use LDAP as a network information service. Superior Class top OID 1.3.6.1.1.1.2.2 Table 5.187. Required Attributes Attribute Definition Section 5.2.59, "gidNumber" Contains a unique numeric identifier for the group, analogous to the group number in Unix. Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Table 5.188. Allowed Attributes Attribute Definition Section 5.2.37, "description" Gives a text description of the entry. Section 5.2.110, "memberUid" Gives the login name of the group member; this may not be the same as the member's DN. Section 5.2.350, "userPassword" Stores the password with which the entry can bind to the directory. 5.3.107. referral The referral object class defines an object which supports LDAPv3 smart referrals. This object class is defined in the LDAPv3 referrals Internet Draft. Superior Class top OID 2.16.840.1.113730.3.2.6 Table 5.189. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Table 5.190. Allowed Attributes Attribute Definition Section 5.2.309, "ref" Contains information for an LDAPv3 smart referral. 5.3.108. residentialPerson The residentialPerson object class manages a person's residential information. This object class is defined in RFC 2256. Superior Class top OID 2.5.6.10 Table 5.191. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Section 5.2.87, "l (localityName)" Gives the city or geographical location of the entry. Section 5.2.329, "sn (surname)" Gives the person's family name or last name. Table 5.192. Allowed Attributes Attribute Definition Section 5.2.20, "businessCategory" Gives the type of business in which the entry is engaged. Section 5.2.37, "description" Gives a text description of the entry. Section 5.2.38, "destinationIndicator" Gives the country and city associated with the entry; this was once required to provide public telegram service. Section 5.2.56, "fax (facsimileTelephoneNumber)" Contains the fax number for the entry. Section 5.2.76, "internationalISDNNumber" Contains the ISDN number for the entry. Section 5.2.298, "physicalDeliveryOfficeName" Gives a location where physical deliveries can be made. Section 5.2.299, "postalAddress" Contains the mailing address for the entry. Section 5.2.300, "postalCode" Gives the postal code for the entry, such as the zip code in the United States. Section 5.2.301, "postOfficeBox" Gives the post office box number for the entry. Section 5.2.302, "preferredDeliveryMethod" Shows the person's preferred method of contact or message delivery. Section 5.2.310, "registeredAddress" Gives a postal address suitable to receive expedited documents when the recipient must verify delivery. Section 5.2.315, "seeAlso" Contains a URL to another entry or site with related information. Section 5.2.330, "st (stateOrProvinceName)" Gives the state or province where the person is located. Section 5.2.331, "street" Gives the street name and address number for the person's physical location. Section 5.2.337, "telephoneNumber" Gives the telephone number for the entry. Section 5.2.338, "teletexTerminalIdentifier" Gives the ID for an entry's teletex terminal. Section 5.2.339, "telexNumber" Gives the telex number associated with the entry. Section 5.2.350, "userPassword" Stores the password with which the entry can bind to the directory. Section 5.2.355, "x121Address" Gives the X.121 address for the entry. 5.3.109.
RFC822LocalPart The RFC822LocalPart object class defines entries that represent the local part of RFC 822 mail addresses. The directory treats this part of an RFC822 address as a domain. This object class is defined by the Internet Directory Pilot. Superior Class domain OID 0.9.2342.19200300.100.4.14 Table 5.193. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.34, "dc (domainComponent)" Contains one component of a domain name. Table 5.194. Allowed Attributes Attribute Definition Section 5.2.10, "associatedName" Gives the name of an entry within the organizational directory tree which is associated with a DNS domain. Section 5.2.20, "businessCategory" Gives the type of business in which the entry is engaged. Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Section 5.2.37, "description" Gives a text description of the entry. Section 5.2.38, "destinationIndicator" Gives the country and city associated with the entry; this was once required to provide public telegram service. Section 5.2.56, "fax (facsimileTelephoneNumber)" Contains the fax number for the entry. Section 5.2.76, "internationalISDNNumber" Contains the ISDN number for the entry. Section 5.2.87, "l (localityName)" Gives the city or geographical location of the entry. Section 5.2.283, "o (organizationName)" Gives the organization to which the account belongs. Section 5.2.298, "physicalDeliveryOfficeName" Gives a location where physical deliveries can be made. Section 5.2.299, "postalAddress" Contains the mailing address for the entry. Section 5.2.300, "postalCode" Gives the postal code for the entry, such as the zip code in the United States. Section 5.2.301, "postOfficeBox" Gives the post office box number for the entry. Section 5.2.302, "preferredDeliveryMethod" Shows the person's preferred method of contact or message delivery. Section 5.2.310, "registeredAddress" Gives a postal address suitable to receive expedited documents when the recipient must verify delivery. Section 5.2.313, "searchGuide" Specifies information for suggested search criteria when using the entry as the base object in the directory tree for a search. Section 5.2.315, "seeAlso" Contains a URL to another entry or site with related information. Section 5.2.329, "sn (surname)" Gives the person's family name or last name. Section 5.2.330, "st (stateOrProvinceName)" Gives the state or province where the person is located. Section 5.2.331, "street" Gives the street name and address number for the person's physical location. Section 5.2.337, "telephoneNumber" Gives the telephone number for the entry. Section 5.2.338, "teletexTerminalIdentifier" Gives the identifier for the person's teletex terminal. Section 5.2.339, "telexNumber" Gives the telex number associated with the entry. Section 5.2.350, "userPassword" Stores the password with which the entry can bind to the directory. Section 5.2.355, "x121Address" Gives the X.121 address for the entry. 5.3.110. room The room object class stores information in the directory about rooms. Superior Class top OID 0.9.2342.19200300.100.4.7 Table 5.195. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.25, "cn (commonName)" Gives the common name of the entry. Table 5.196. Allowed Attributes Attribute Definition Section 5.2.37, "description" Gives a text description of the room. Section 5.2.312, "roomNumber" Contains the room's number. 
Section 5.2.315, "seeAlso" Contains a URL to another entry or site with related information. Section 5.2.337, "telephoneNumber" Gives the telephone number for the entry. 5.3.111. shadowAccount The shadowAccount object class allows the LDAP directory to be used as a shadow password service. Shadow password services relocate the password files on a host to a shadow file with tightly restricted access. This object class is defined in RFC 2307, which defines object classes and attributes to use LDAP as a network information service. Note This object class is defined in 10rfc2307.ldif in the Directory Server. To use the updated RFC 2307 schema, remove the 10rfc2307.ldif file and copy the 10rfc2307bis.ldif file from the /usr/share/dirsrv/data directory to the /etc/dirsrv/slapd- instance /schema directory. Superior Class top OID 1.3.6.1.1.1.2.1 Table 5.197. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.342, "uid (userID)" Gives the defined account's user ID. Table 5.198. Allowed Attributes Attribute Definition Section 5.2.37, "description" Gives a text description of the entry. Section 5.2.321, "shadowExpire" Contains the date that the shadow account expires. Section 5.2.322, "shadowFlag" Identifies what area in the shadow map stores the flag values. Section 5.2.323, "shadowInactive" Sets how long the shadow account can be inactive. Section 5.2.324, "shadowLastChange" Contains the time and date of the last modification to the shadow account. Section 5.2.325, "shadowMax" Sets the maximum number of days that a shadow password is valid. Section 5.2.326, "shadowMin" Sets the minimum number of days that must pass between changing the shadow password. Section 5.2.327, "shadowWarning" Sets how many days in advance of password expiration to send a warning to the user. Section 5.2.350, "userPassword" Stores the password with which the entry can bind to the directory. 5.3.112. simpleSecurityObject The simpleSecurityObject object class allows an entry to contain the userPassword attribute when an entry's principal object classes do not allow a password attribute. Reserved for future use. This object class is defined in RFC 1274. Superior Class top OID 0.9.2342.19200300.100.4.19 Table 5.199. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.350, "userPassword" Stores the password with which the entry can bind to the directory. 5.3.113. strongAuthenticationUser The strongAuthenticationUser object class stores a user's certificate in the directory. This object class is defined in RFC 2256. Superior Class top OID 2.5.6.15 Table 5.200. Required Attributes Attribute Definition Section 5.2.284, "objectClass" Gives the object classes assigned to the entry. Section 5.2.348, "userCertificate" Stores a user's certificate, usually in binary form.
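As a final illustration, the auxiliary strongAuthenticationUser class can be combined with a structural class such as person to store a certificate on an entry; the sketch below is hypothetical, and the base64 value is only a placeholder for a real DER-encoded certificate.

# Hypothetical person entry that stores a user certificate
dn: cn=Babs Jensen,ou=People,dc=example,dc=com
objectClass: top
objectClass: person
objectClass: strongAuthenticationUser
cn: Babs Jensen
sn: Jensen
userCertificate;binary:: AAAAAA==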
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/configuration_command_and_file_reference/directory-schema
Appendix B. Installing a Websocket Proxy on a Separate Machine
Appendix B. Installing a Websocket Proxy on a Separate Machine Important The websocket proxy and noVNC are Technology Preview features only. Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Red Hat Technology Preview Features Support Scope . The websocket proxy allows users to connect to virtual machines through a noVNC console. The noVNC client uses websockets to pass VNC data. However, the VNC server in QEMU does not provide websocket support, so a websocket proxy must be placed between the client and the VNC server. The proxy can run on any machine that has access to the network, including the Manager machine. For security and performance reasons, users may want to configure the websocket proxy on a separate machine. Procedure Install the websocket proxy: Run the engine-setup command to configure the websocket proxy. Note If the rhvm package has also been installed, choose No when asked to configure the Manager ( Engine ) on this host. Press Enter to allow engine-setup to configure a websocket proxy server on the machine. Press Enter to accept the automatically detected host name, or enter an alternative host name and press Enter . Note that the automatically detected host name may be incorrect if you are using virtual hosts: Press Enter to allow engine-setup to configure the firewall and open the ports required for external communication. If you do not allow engine-setup to modify your firewall configuration, then you must manually open the required ports. Enter the FQDN of the Manager machine and press Enter . Press Enter to allow engine-setup to perform actions on the Manager machine, or press 2 to manually perform the actions. Press Enter to accept the default SSH port number, or enter the port number of the Manager machine. Enter the root password to log in to the Manager machine and press Enter . Select whether to review iptables rules if they differ from the current settings. Press Enter to confirm the configuration settings. Instructions are provided to configure the Manager machine to use the configured websocket proxy. Log in to the Manager machine and execute the provided instructions.
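Before configuring the Manager to use the proxy, you may want to confirm that the proxy itself came up cleanly. A minimal sketch of such a check, assuming the default service name and the default port (6100) used in this procedure:

```sh
# Confirm the websocket proxy service is active and listening on its port.
systemctl status ovirt-websocket-proxy.service
ss -tlnp | grep 6100
```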
[ "yum install ovirt-engine-websocket-proxy", "engine-setup", "Configure WebSocket Proxy on this machine? (Yes, No) [Yes]:", "Host fully qualified DNS name of this server [ host.example.com ]:", "Setup can automatically configure the firewall on this system. Note: automatic configuration of the firewall may overwrite current settings. Do you want Setup to configure the firewall? (Yes, No) [Yes]:", "Host fully qualified DNS name of the engine server []: manager.example.com", "Setup will need to do some actions on the remote engine server. Either automatically, using ssh as root to access it, or you will be prompted to manually perform each such action. Please choose one of the following: 1 - Access remote engine server using ssh as root 2 - Perform each action manually, use files to copy content around (1, 2) [1]:", "ssh port on remote engine server [22]:", "root password on remote engine server engine_host.example.com :", "Generated iptables rules are different from current ones. Do you want to review them? (Yes, No) [No]:", "--== CONFIGURATION PREVIEW ==-- Firewall manager : iptables Update Firewall : True Host FQDN : host.example.com Configure WebSocket Proxy : True Engine Host FQDN : engine_host.example.com Please confirm installation settings (OK, Cancel) [OK]:", "Manual actions are required on the engine host in order to enroll certs for this host and configure the engine about it. Please execute this command on the engine host: engine-config -s WebSocketProxy=host.example.com:6100 and than restart the engine service to make it effective", "engine-config -s WebSocketProxy=host.example.com:6100 systemctl restart ovirt-engine.service" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/installing_red_hat_virtualization_as_a_standalone_manager_with_remote_databases/installing_the_websocket_proxy_on_a_different_host_sm_remotedb_deploy
6.6. Updating Virtual Machine Guest Agents and Drivers
6.6. Updating Virtual Machine Guest Agents and Drivers The Red Hat Virtualization guest agents, tools, and drivers provide additional functionality for virtual machines, such as gracefully shutting down or rebooting virtual machines from the VM Portal and Administration Portal. The tools and agents also provide information for virtual machines, including: Resource usage IP addresses Installed applications The guest tools are distributed as an ISO file that you can attach to virtual machines. This ISO file is packaged as an RPM file that you can install and update from the Manager machine. 6.6.1. Updating the Guest Agents and Drivers on Red Hat Enterprise Linux Update the guest agents and drivers on your Red Hat Enterprise Linux virtual machines to use the latest version. Updating the Guest Agents and Drivers on Red Hat Enterprise Linux Log in to the Red Hat Enterprise Linux virtual machine. Update the ovirt-guest-agent-common package: Restart the service: For Red Hat Enterprise Linux 6 For Red Hat Enterprise Linux 7 6.6.2. Updating the Guest Agents and Drivers on Windows Updating the Guest Agents, Tools, and Drivers on Windows On the Red Hat Virtualization Manager machine, update the Red Hat Virtualization Guest Tools package to the latest version: The ISO file is located in /usr/share/rhv-guest-tools-iso/RHV-toolsSetup _version .iso on the Manager machine. If the APT service is enabled on virtual machines, the updated ISO files are automatically attached. Otherwise, upload RHV-toolsSetup _version .iso to a data domain. See Uploading Images to a Data Storage Domain in the Administration Guide for details. In the Administration or VM Portal, if the virtual machine is running, use the Change CD drop-down list to attach the RHV-toolsSetup _version .iso file to each of your virtual machines. If the virtual machine is powered off, click the Run Once button and attach the ISO as a CD. Log in to the virtual machine. Select the CD Drive containing the RHV-toolsSetup _version .iso file. Double-click RHEV-toolsSetup.exe . Click Next at the welcome screen. Follow the prompts on the RHEV-Tools InstallShield Wizard window. Ensure all check boxes in the list of components are selected. Once installation is complete, select Yes, I want to restart my computer now and click Finish to apply the changes.
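After updating a Red Hat Enterprise Linux guest, a quick sanity check can confirm the installed package version and the agent service state. This is a sketch using the package and service names from the procedure above:

```sh
# Verify the updated guest agent package and its service on a RHEL 7 guest.
rpm -q ovirt-guest-agent-common
systemctl status ovirt-guest-agent.service
# On a RHEL 6 guest, query the SysV service instead:
# service ovirt-guest-agent status
```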
[ "yum update ovirt-guest-agent-common", "service ovirt-guest-agent restart", "systemctl restart ovirt-guest-agent.service", "yum update -y rhv-guest-tools-iso*" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/virtual_machine_management_guide/sect-Updating_Virtual_Machine_Guest_Agents_and_Drivers
Chapter 56. Granting sudo access to an IdM user on an IdM client
Chapter 56. Granting sudo access to an IdM user on an IdM client Learn more about granting sudo access to users in Identity Management. 56.1. Sudo access on an IdM client System administrators can grant sudo access to allow non-root users to execute administrative commands that are normally reserved for the root user. Consequently, when users need to perform an administrative command normally reserved for the root user, they precede that command with sudo . After entering their password, the command is executed as if they were the root user. To execute a sudo command as another user or group, such as a database service account, you can configure a RunAs alias for a sudo rule. If a Red Hat Enterprise Linux (RHEL) 8 host is enrolled as an Identity Management (IdM) client, you can specify sudo rules defining which IdM users can perform which commands on the host in the following ways: Locally in the /etc/sudoers file Centrally in IdM You can create a central sudo rule for an IdM client using the command-line interface (CLI) and the IdM Web UI. In RHEL 8.4 and later, you can also configure password-less authentication for sudo using the Generic Security Service Application Programming Interface (GSSAPI), the native way for UNIX-based operating systems to access and authenticate Kerberos services. You can use the pam_sss_gss.so Pluggable Authentication Module (PAM) to invoke GSSAPI authentication via the SSSD service, allowing users to authenticate to the sudo command with a valid Kerberos ticket. Additional resources Managing sudo access 56.2. Granting sudo access to an IdM user on an IdM client using the CLI In Identity Management (IdM), you can grant sudo access for a specific command to an IdM user account on a specific IdM host. First, add a sudo command and then create a sudo rule for one or more commands. For example, complete this procedure to create the idm_user_reboot sudo rule to grant the idm_user account the permission to run the /usr/sbin/reboot command on the idmclient machine. Prerequisites You are logged in as IdM administrator. You have created a user account for idm_user in IdM and unlocked the account by creating a password for the user. For details on adding a new IdM user using the CLI, see Adding users using the command line . No local idm_user account is present on the idmclient host. The idm_user user is not listed in the local /etc/passwd file. Procedure Retrieve a Kerberos ticket as the IdM admin . Add the /usr/sbin/reboot command to the IdM database of sudo commands: Create a sudo rule named idm_user_reboot : Add the /usr/sbin/reboot command to the idm_user_reboot rule: Apply the idm_user_reboot rule to the IdM idmclient host: Add the idm_user account to the idm_user_reboot rule: Optional: Define the validity of the idm_user_reboot rule: To define the time at which a sudo rule starts to be valid, use the ipa sudorule-mod sudo_rule_name command with the --setattr sudonotbefore= DATE option. The DATE value must follow the yyyymmddHHMMSSZ format, with seconds specified explicitly. For example, to set the start of the validity of the idm_user_reboot rule to 31 December 2025 12:34:00, enter: To define the time at which a sudo rule stops being valid, use the --setattr sudonotafter=DATE option. For example, to set the end of the idm_user_reboot rule validity to 31 December 2026 12:34:00, enter: Note Propagating the changes from the server to the client can take a few minutes. Verification Log in to the idmclient host as the idm_user account. 
Display which sudo rules the idm_user account is allowed to perform. Reboot the machine using sudo . Enter the password for idm_user when prompted: 56.3. Granting sudo access to an AD user on an IdM client using the CLI Identity Management (IdM) system administrators can use IdM user groups to set access permissions, host-based access control, sudo rules, and other controls on IdM users. IdM user groups grant and restrict access to IdM domain resources. You can add both Active Directory (AD) users and AD groups to IdM user groups. To do that: Add the AD users or groups to a non-POSIX external IdM group. Add the non-POSIX external IdM group to an IdM POSIX group. You can then manage the privileges of the AD users by managing the privileges of the POSIX group. For example, you can grant sudo access for a specific command to an IdM POSIX user group on a specific IdM host. Note It is also possible to add AD user groups as members to IdM external groups. This might make it easier to define policies for Windows users, by keeping the user and group management within the single AD realm. Important Do not use ID overrides of AD users for SUDO rules in IdM. ID overrides of AD users represent only POSIX attributes of AD users, not AD users themselves. You can add ID overrides as group members. However, you can only use this functionality to manage IdM resources in the IdM API. The possibility to add ID overrides as group members is not extended to POSIX environments and you therefore cannot use it for membership in sudo or host-based access control (HBAC) rules. Follow this procedure to create the ad_users_reboot sudo rule to grant the [email protected] AD user the permission to run the /usr/sbin/reboot command on the idmclient IdM host, which is normally reserved for the root user. [email protected] is a member of the ad_users_external non-POSIX group, which is, in turn, a member of the ad_users POSIX group. Prerequisites You have obtained the IdM admin Kerberos ticket-granting ticket (TGT). A cross-forest trust exists between the IdM domain and the ad-domain.com AD domain. No local administrator account is present on the idmclient host: the administrator user is not listed in the local /etc/passwd file. Procedure Create the ad_users group that contains the ad_users_external group with the administrator@ad-domain member: Optional: Create or select a corresponding group in the AD domain to use to manage AD users in the IdM realm. You can use multiple AD groups and add them to different groups on the IdM side. Create the ad_users_external group and indicate that it contains members from outside the IdM domain by adding the --external option: Note Ensure that the external group that you specify here is an AD security group with a global or universal group scope as defined in the Active Directory security groups document. For example, the Domain users or Domain admins AD security groups cannot be used because their group scope is domain local . Create the ad_users group: Add the [email protected] AD user to ad_users_external as an external member: The AD user must be identified by a fully-qualified name, such as DOMAIN\user_name or user_name@DOMAIN . The AD identity is then mapped to the AD SID for the user. The same applies to adding AD groups. 
Add ad_users_external to ad_users as a member: Grant the members of ad_users the permission to run /usr/sbin/reboot on the idmclient host: Add the /usr/sbin/reboot command to the IdM database of sudo commands: Create a sudo rule named ad_users_reboot : Add the /usr/sbin/reboot command to the ad_users_reboot rule: Apply the ad_users_reboot rule to the IdM idmclient host: Add the ad_users group to the ad_users_reboot rule: Note Propagating the changes from the server to the client can take a few minutes. Verification Log in to the idmclient host as [email protected] , an indirect member of the ad_users group: Optional: Display the sudo commands that [email protected] is allowed to execute: Reboot the machine using sudo . Enter the password for [email protected] when prompted: Additional resources Active Directory users and Identity Management groups Include users and groups from a trusted Active Directory domain into SUDO rules 56.4. Granting sudo access to an IdM user on an IdM client using the IdM Web UI In Identity Management (IdM), you can grant sudo access for a specific command to an IdM user account on a specific IdM host. First, add a sudo command and then create a sudo rule for one or more commands. Complete this procedure to create the idm_user_reboot sudo rule to grant the idm_user account the permission to run the /usr/sbin/reboot command on the idmclient machine. Prerequisites You are logged in as IdM administrator. You have created a user account for idm_user in IdM and unlocked the account by creating a password for the user. For details on adding a new IdM user using the command-line interface, see Adding users using the command line . No local idm_user account is present on the idmclient host. The idm_user user is not listed in the local /etc/passwd file. Procedure Add the /usr/sbin/reboot command to the IdM database of sudo commands: Navigate to Policy Sudo Sudo Commands . Click Add in the upper right corner to open the Add sudo command dialog box. Enter the command you want the user to be able to perform using sudo : /usr/sbin/reboot . Figure 56.1. Adding IdM sudo command Click Add . Use the new sudo command entry to create a sudo rule to allow idm_user to reboot the idmclient machine: Navigate to Policy Sudo Sudo rules . Click Add in the upper right corner to open the Add sudo rule dialog box. Enter the name of the sudo rule: idm_user_reboot . Click Add and Edit . Specify the user: In the Who section, check the Specified Users and Groups radio button. In the User category the rule applies to subsection, click Add to open the Add users into sudo rule "idm_user_reboot" dialog box. In the Add users into sudo rule "idm_user_reboot" dialog box in the Available column, check the idm_user checkbox, and move it to the Prospective column. Click Add . Specify the host: In the Access this host section, check the Specified Hosts and Groups radio button. In the Host category this rule applies to subsection, click Add to open the Add hosts into sudo rule "idm_user_reboot" dialog box. In the Add hosts into sudo rule "idm_user_reboot" dialog box in the Available column, check the idmclient.idm.example.com checkbox, and move it to the Prospective column. Click Add . Specify the commands: In the Command category the rule applies to subsection of the Run Commands section, check the Specified Commands and Groups radio button. In the Sudo Allow Commands subsection, click Add to open the Add allow sudo commands into sudo rule "idm_user_reboot" dialog box. 
In the Add allow sudo commands into sudo rule "idm_user_reboot" dialog box in the Available column, check the /usr/sbin/reboot checkbox, and move it to the Prospective column. Click Add to return to the idm_sudo_reboot page. Figure 56.2. Adding IdM sudo rule Click Save in the top left corner. The new rule is enabled by default. Note Propagating the changes from the server to the client can take a few minutes. Verification Log in to idmclient as idm_user . Reboot the machine using sudo . Enter the password for idm_user when prompted: If the sudo rule is configured correctly, the machine reboots. 56.5. Creating a sudo rule on the CLI that runs a command as a service account on an IdM client In IdM, you can configure a sudo rule with a RunAs alias to run a sudo command as another user or group. For example, you might have an IdM client that hosts a database application, and you need to run commands as the local service account that corresponds to that application. Use this example to create a sudo rule on the command line called run_third-party-app_report to allow the idm_user account to run the /opt/third-party-app/bin/report command as the thirdpartyapp service account on the idmclient host. Prerequisites You are logged in as IdM administrator. You have created a user account for idm_user in IdM and unlocked the account by creating a password for the user. For details on adding a new IdM user using the CLI, see Adding users using the command line . No local idm_user account is present on the idmclient host. The idm_user user is not listed in the local /etc/passwd file. You have a custom application named third-party-app installed on the idmclient host. The report command for the third-party-app application is installed in the /opt/third-party-app/bin/report directory. You have created a local service account named thirdpartyapp to execute commands for the third-party-app application. Procedure Retrieve a Kerberos ticket as the IdM admin . Add the /opt/third-party-app/bin/report command to the IdM database of sudo commands: Create a sudo rule named run_third-party-app_report : Use the --users= <user> option to specify the RunAs user for the sudorule-add-runasuser command: The user (or group specified with the --groups=* option) can be external to IdM, such as a local service account or an Active Directory user. Do not add a % prefix for group names. Add the /opt/third-party-app/bin/report command to the run_third-party-app_report rule: Apply the run_third-party-app_report rule to the IdM idmclient host: Add the idm_user account to the run_third-party-app_report rule: Note Propagating the changes from the server to the client can take a few minutes. Verification Log in to the idmclient host as the idm_user account. Test the new sudo rule: Display which sudo rules the idm_user account is allowed to perform. Run the report command as the thirdpartyapp service account. 56.6. Creating a sudo rule in the IdM WebUI that runs a command as a service account on an IdM client In IdM, you can configure a sudo rule with a RunAs alias to run a sudo command as another user or group. For example, you might have an IdM client that hosts a database application, and you need to run commands as the local service account that corresponds to that application. Use this example to create a sudo rule in the IdM WebUI called run_third-party-app_report to allow the idm_user account to run the /opt/third-party-app/bin/report command as the thirdpartyapp service account on the idmclient host. 
Prerequisites You are logged in as IdM administrator. You have created a user account for idm_user in IdM and unlocked the account by creating a password for the user. For details on adding a new IdM user using the CLI, see Adding users using the command line . No local idm_user account is present on the idmclient host. The idm_user user is not listed in the local /etc/passwd file. You have a custom application named third-party-app installed on the idmclient host. The report command for the third-party-app application is installed in the /opt/third-party-app/bin/report directory. You have created a local service account named thirdpartyapp to execute commands for the third-party-app application. Procedure Add the /opt/third-party-app/bin/report command to the IdM database of sudo commands: Navigate to Policy Sudo Sudo Commands . Click Add in the upper right corner to open the Add sudo command dialog box. Enter the command: /opt/third-party-app/bin/report . Click Add . Use the new sudo command entry to create the new sudo rule: Navigate to Policy Sudo Sudo rules . Click Add in the upper right corner to open the Add sudo rule dialog box. Enter the name of the sudo rule: run_third-party-app_report . Click Add and Edit . Specify the user: In the Who section, check the Specified Users and Groups radio button. In the User category the rule applies to subsection, click Add to open the Add users into sudo rule "run_third-party-app_report" dialog box. In the Add users into sudo rule "run_third-party-app_report" dialog box in the Available column, check the idm_user checkbox, and move it to the Prospective column. Click Add . Specify the host: In the Access this host section, check the Specified Hosts and Groups radio button. In the Host category this rule applies to subsection, click Add to open the Add hosts into sudo rule "run_third-party-app_report" dialog box. In the Add hosts into sudo rule "run_third-party-app_report" dialog box in the Available column, check the idmclient.idm.example.com checkbox, and move it to the Prospective column. Click Add . Specify the commands: In the Command category the rule applies to subsection of the Run Commands section, check the Specified Commands and Groups radio button. In the Sudo Allow Commands subsection, click Add to open the Add allow sudo commands into sudo rule "run_third-party-app_report" dialog box. In the Add allow sudo commands into sudo rule "run_third-party-app_report" dialog box in the Available column, check the /opt/third-party-app/bin/report checkbox, and move it to the Prospective column. Click Add to return to the run_third-party-app_report page. Specify the RunAs user: In the As Whom section, check the Specified Users and Groups radio button. In the RunAs Users subsection, click Add to open the Add RunAs users into sudo rule "run_third-party-app_report" dialog box. In the Add RunAs users into sudo rule "run_third-party-app_report" dialog box, enter the thirdpartyapp service account in the External box and move it to the Prospective column. Click Add to return to the run_third-party-app_report page. Click Save in the top left corner. The new rule is enabled by default. Figure 56.3. Details of the sudo rule Note Propagating the changes from the server to the client can take a few minutes. Verification Log in to the idmclient host as the idm_user account. Test the new sudo rule: Display which sudo rules the idm_user account is allowed to perform. Run the report command as the thirdpartyapp service account. 56.7. 
Enabling GSSAPI authentication for sudo on an IdM client Enable Generic Security Service Application Program Interface (GSSAPI) authentication on an IdM client for the sudo and sudo -i commands via the pam_sss_gss.so PAM module. With this configuration, IdM users can authenticate to the sudo command with their Kerberos ticket. Prerequisites You have created a sudo rule for an IdM user that applies to an IdM host. For this example, you have created the idm_user_reboot sudo rule to grant the idm_user account the permission to run the /usr/sbin/reboot command on the idmclient host. The idmclient host is running RHEL 8.4 or later. You need root privileges to modify the /etc/sssd/sssd.conf file and PAM files in the /etc/pam.d/ directory. Procedure Open the /etc/sssd/sssd.conf configuration file. Add the following entry to the [domain/ <domain_name> ] section. Save and close the /etc/sssd/sssd.conf file. Restart the SSSD service to load the configuration changes. On RHEL 8.8 or later: Optional: Determine if you have selected the sssd authselect profile: If the sssd authselect profile is selected, enable GSSAPI authentication: If the sssd authselect profile is not selected, select it and enable GSSAPI authentication: On RHEL 8.7 or earlier: Open the /etc/pam.d/sudo PAM configuration file. Add the following entry as the first line of the auth section in the /etc/pam.d/sudo file. Save and close the /etc/pam.d/sudo file. Verification Log into the host as the idm_user account. Verify that you have a ticket-granting ticket as the idm_user account. Optional: If you do not have Kerberos credentials for the idm_user account, delete your current Kerberos credentials and request the correct ones. Reboot the machine using sudo , without specifying a password. Additional resources The GSSAPI entry in the IdM terminology listing Granting sudo access to an IdM user on an IdM client using IdM Web UI Granting sudo access to an IdM user on an IdM client using the CLI pam_sss_gss (8) and sssd.conf (5) man pages on your system 56.8. Enabling GSSAPI authentication and enforcing Kerberos authentication indicators for sudo on an IdM client Enable Generic Security Service Application Program Interface (GSSAPI) authentication on an IdM client for the sudo and sudo -i commands via the pam_sss_gss.so PAM module. Additionally, only users who have logged in with a smart card will authenticate to those commands with their Kerberos ticket. Note You can use this procedure as a template to configure GSSAPI authentication with SSSD for other PAM-aware services, and further restrict access to only those users that have a specific authentication indicator attached to their Kerberos ticket. Prerequisites You have created a sudo rule for an IdM user that applies to an IdM host. For this example, you have created the idm_user_reboot sudo rule to grant the idm_user account the permission to run the /usr/sbin/reboot command on the idmclient host. You have configured smart card authentication for the idmclient host. The idmclient host is running RHEL 8.4 or later. You need root privileges to modify the /etc/sssd/sssd.conf file and PAM files in the /etc/pam.d/ directory. Procedure Open the /etc/sssd/sssd.conf configuration file. Add the following entries to the [domain/ <domain_name> ] section. Save and close the /etc/sssd/sssd.conf file. Restart the SSSD service to load the configuration changes. 
On RHEL 8.8 or later: Determine if you have selected the sssd authselect profile: Optional: Select the sssd authselect profile: Enable GSSAPI authentication: Configure the system to authenticate only users with smart cards: On RHEL 8.7 or earlier: Open the /etc/pam.d/sudo PAM configuration file. Add the following entry as the first line of the auth section in the /etc/pam.d/sudo file. Save and close the /etc/pam.d/sudo file. Open the /etc/pam.d/sudo-i PAM configuration file. Add the following entry as the first line of the auth section in the /etc/pam.d/sudo-i file. Save and close the /etc/pam.d/sudo-i file. Verification Log in to the host as the idm_user account and authenticate with a smart card. Verify that you have a ticket-granting ticket as the smart card user. Display which sudo rules the idm_user account is allowed to perform. Reboot the machine using sudo , without specifying a password. Additional resources SSSD options controlling GSSAPI authentication for PAM services The GSSAPI entry in the IdM terminology listing Configuring Identity Management for smart card authentication Kerberos authentication indicators Granting sudo access to an IdM user on an IdM client using IdM Web UI Granting sudo access to an IdM user on an IdM client using the CLI . pam_sss_gss (8) and sssd.conf (5) man pages on your system 56.9. SSSD options controlling GSSAPI authentication for PAM services You can use the following options for the /etc/sssd/sssd.conf configuration file to adjust the GSSAPI configuration within the SSSD service. pam_gssapi_services GSSAPI authentication with SSSD is disabled by default. You can use this option to specify a comma-separated list of PAM services that are allowed to try GSSAPI authentication using the pam_sss_gss.so PAM module. To explicitly disable GSSAPI authentication, set this option to - . pam_gssapi_indicators_map This option only applies to Identity Management (IdM) domains. Use this option to list Kerberos authentication indicators that are required to grant PAM access to a service. Pairs must be in the format <PAM_service>:<required_authentication_indicator> . Valid authentication indicators are: otp for two-factor authentication radius for RADIUS authentication pkinit for PKINIT, smart card, or certificate authentication hardened for hardened passwords pam_gssapi_check_upn This option is enabled and set to true by default. If this option is enabled, the SSSD service requires that the user name matches the Kerberos credentials. If false , the pam_sss_gss.so PAM module authenticates every user that is able to obtain the required service ticket. Examples The following options enable Kerberos authentication for the sudo and sudo-i services, require sudo users to authenticate with a one-time password, and require user names to match the Kerberos principal. Because these settings are in the [pam] section, they apply to all domains: You can also set these options in individual [domain] sections to overwrite any global values in the [pam] section. The following options apply different GSSAPI settings to each domain: For the idm.example.com domain Enable GSSAPI authentication for the sudo and sudo -i services. Require certificate or smart card authenticators for the sudo command. Require one-time password authenticators for the sudo -i command. Enforce matching user names and Kerberos principals. For the ad.example.com domain Enable GSSAPI authentication only for the sudo service. Do not enforce matching user names and principals.
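Whenever you change these options in /etc/sssd/sssd.conf, it is worth validating the file before restarting the service. A minimal sketch, assuming the sssctl utility from the sssd-tools package is installed:

```sh
# Check sssd.conf for syntax, permission, and option errors, then
# restart SSSD so the new GSSAPI settings take effect.
sssctl config-check
systemctl restart sssd
```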
Additional resources Kerberos authentication indicators 56.10. Troubleshooting GSSAPI authentication for sudo If you are unable to authenticate to the sudo service with a Kerberos ticket from IdM, use the following scenarios to troubleshoot your configuration. Prerequisites You have enabled GSSAPI authentication for the sudo service. See Enabling GSSAPI authentication for sudo on an IdM client . You need root privileges to modify the /etc/sssd/sssd.conf file and PAM files in the /etc/pam.d/ directory. Procedure If you see the following error, the Kerberos service might not be able to resolve the correct realm for the service ticket based on the host name: In this situation, add the hostname directly to the [domain_realm] section in the /etc/krb5.conf Kerberos configuration file: If you see the following error, you do not have any Kerberos credentials: In this situation, retrieve Kerberos credentials with the kinit utility or authenticate with SSSD: If you see either of the following errors in the /var/log/sssd/sssd_pam.log log file, the Kerberos credentials do not match the username of the user currently logged in: In this situation, verify that you authenticated with SSSD, or consider disabling the pam_gssapi_check_upn option in the /etc/sssd/sssd.conf file: For additional troubleshooting, you can enable debugging output for the pam_sss_gss.so PAM module. Add the debug option at the end of all pam_sss_gss.so entries in PAM files, such as /etc/pam.d/sudo and /etc/pam.d/sudo-i : Try to authenticate with the pam_sss_gss.so module and review the console output. In this example, the user did not have any Kerberos credentials. 56.11. Using an Ansible playbook to ensure sudo access for an IdM user on an IdM client In Identity Management (IdM), you can ensure sudo access to a specific command is granted to an IdM user account on a specific IdM host. Complete this procedure to ensure a sudo rule named idm_user_reboot exists. The rule grants idm_user the permission to run the /usr/sbin/reboot command on the idmclient machine. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server, or replica. You have ensured the presence of a user account for idm_user in IdM and unlocked the account by creating a password for the user . For details on adding a new IdM user using the command-line interface, see Adding users using the command line . No local idm_user account exists on idmclient . The idm_user user is not listed in the /etc/passwd file on idmclient . Procedure Create an inventory file, for example inventory.file , and define ipaservers in it: Add one or more sudo commands: Create an ensure-reboot-sudocmd-is-present.yml Ansible playbook that ensures the presence of the /usr/sbin/reboot command in the IdM database of sudo commands.
To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/sudocmd/ensure-sudocmd-is-present.yml file: Run the playbook: Create a sudo rule that references the commands: Create an ensure-sudorule-for-idmuser-on-idmclient-is-present.yml Ansible playbook that uses the sudo command entry to ensure the presence of a sudo rule. The sudo rule allows idm_user to reboot the idmclient machine. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/sudorule/ensure-sudorule-is-present.yml file: Run the playbook: Verification Test that the sudo rule whose presence you have ensured on the IdM server works on idmclient by verifying that idm_user can reboot idmclient using sudo . Note that it can take a few minutes for the changes made on the server to take effect on the client. Log in to idmclient as idm_user . Reboot the machine using sudo . Enter the password for idm_user when prompted: If sudo is configured correctly, the machine reboots. Additional resources See the README-sudocmd.md , README-sudocmdgroup.md , and README-sudorule.md files in the /usr/share/doc/ansible-freeipa/ directory.
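Because changes made on the IdM server can take a few minutes to propagate, verification on the client can appear to fail at first. A hedged shortcut is to expire the SSSD caches on idmclient before re-checking; sss_cache -E invalidates all cached entries:

```sh
# Force the client to re-fetch sudo rules from the server, then list
# the rules the logged-in user is allowed to run.
sudo sss_cache -E
sudo -l
```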
[ "kinit admin", "ipa sudocmd-add /usr/sbin/reboot ------------------------------------- Added Sudo Command \"/usr/sbin/reboot\" ------------------------------------- Sudo Command: /usr/sbin/reboot", "ipa sudorule-add idm_user_reboot --------------------------------- Added Sudo Rule \"idm_user_reboot\" --------------------------------- Rule name: idm_user_reboot Enabled: TRUE", "ipa sudorule-add-allow-command idm_user_reboot --sudocmds '/usr/sbin/reboot' Rule name: idm_user_reboot Enabled: TRUE Sudo Allow Commands: /usr/sbin/reboot ------------------------- Number of members added 1 -------------------------", "ipa sudorule-add-host idm_user_reboot --hosts idmclient.idm.example.com Rule name: idm_user_reboot Enabled: TRUE Hosts: idmclient.idm.example.com Sudo Allow Commands: /usr/sbin/reboot ------------------------- Number of members added 1 -------------------------", "ipa sudorule-add-user idm_user_reboot --users idm_user Rule name: idm_user_reboot Enabled: TRUE Users: idm_user Hosts: idmclient.idm.example.com Sudo Allow Commands: /usr/sbin/reboot ------------------------- Number of members added 1 -------------------------", "ipa sudorule-mod idm_user_reboot --setattr sudonotbefore=20251231123400Z", "ipa sudorule-mod idm_user_reboot --setattr sudonotafter=20261231123400Z", "[idm_user@idmclient ~]USD sudo -l Matching Defaults entries for idm_user on idmclient : !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep=\"COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS\", env_keep+=\"MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE\", env_keep+=\"LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES\", env_keep+=\"LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE\", env_keep+=\"LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY KRB5CCNAME\", secure_path=/sbin\\:/bin\\:/usr/sbin\\:/usr/bin User idm_user may run the following commands on idmclient : (root) /usr/sbin/reboot", "[idm_user@idmclient ~]USD sudo /usr/sbin/reboot [sudo] password for idm_user:", "ipa group-add --desc='AD users external map' ad_users_external --external ------------------------------- Added group \"ad_users_external\" ------------------------------- Group name: ad_users_external Description: AD users external map", "ipa group-add --desc='AD users' ad_users ---------------------- Added group \"ad_users\" ---------------------- Group name: ad_users Description: AD users GID: 129600004", "ipa group-add-member ad_users_external --external \"[email protected]\" [member user]: [member group]: Group name: ad_users_external Description: AD users external map External member: S-1-5-21-3655990580-1375374850-1633065477-513 ------------------------- Number of members added 1 -------------------------", "ipa group-add-member ad_users --groups ad_users_external Group name: ad_users Description: AD users GID: 129600004 Member groups: ad_users_external ------------------------- Number of members added 1 -------------------------", "ipa sudocmd-add /usr/sbin/reboot ------------------------------------- Added Sudo Command \"/usr/sbin/reboot\" ------------------------------------- Sudo Command: /usr/sbin/reboot", "ipa sudorule-add ad_users_reboot --------------------------------- Added Sudo Rule \"ad_users_reboot\" --------------------------------- Rule name: ad_users_reboot Enabled: True", "ipa sudorule-add-allow-command ad_users_reboot --sudocmds '/usr/sbin/reboot' Rule name: ad_users_reboot Enabled: True Sudo Allow Commands: /usr/sbin/reboot ------------------------- Number of 
members added 1 -------------------------", "ipa sudorule-add-host ad_users_reboot --hosts idmclient.idm.example.com Rule name: ad_users_reboot Enabled: True Hosts: idmclient.idm.example.com Sudo Allow Commands: /usr/sbin/reboot ------------------------- Number of members added 1 -------------------------", "ipa sudorule-add-user ad_users_reboot --groups ad_users Rule name: ad_users_reboot Enabled: TRUE User Groups: ad_users Hosts: idmclient.idm.example.com Sudo Allow Commands: /usr/sbin/reboot ------------------------- Number of members added 1 -------------------------", "ssh [email protected]@ipaclient Password:", "[[email protected]@idmclient ~]USD sudo -l Matching Defaults entries for [email protected] on idmclient : !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep=\"COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS\", env_keep+=\"MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE\", env_keep+=\"LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES\", env_keep+=\"LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE\", env_keep+=\"LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY KRB5CCNAME\", secure_path=/sbin\\:/bin\\:/usr/sbin\\:/usr/bin User [email protected] may run the following commands on idmclient : (root) /usr/sbin/reboot", "[[email protected]@idmclient ~]USD sudo /usr/sbin/reboot [sudo] password for [email protected]:", "sudo /usr/sbin/reboot [sudo] password for idm_user:", "kinit admin", "ipa sudocmd-add /opt/third-party-app/bin/report ---------------------------------------------------- Added Sudo Command \"/opt/third-party-app/bin/report\" ---------------------------------------------------- Sudo Command: /opt/third-party-app/bin/report", "ipa sudorule-add run_third-party-app_report -------------------------------------------- Added Sudo Rule \"run_third-party-app_report\" -------------------------------------------- Rule name: run_third-party-app_report Enabled: TRUE", "ipa sudorule-add-runasuser run_third-party-app_report --users= thirdpartyapp Rule name: run_third-party-app_report Enabled: TRUE RunAs External User: thirdpartyapp ------------------------- Number of members added 1 -------------------------", "ipa sudorule-add-allow-command run_third-party-app_report --sudocmds '/opt/third-party-app/bin/report' Rule name: run_third-party-app_report Enabled: TRUE Sudo Allow Commands: /opt/third-party-app/bin/report RunAs External User: thirdpartyapp ------------------------- Number of members added 1 -------------------------", "ipa sudorule-add-host run_third-party-app_report --hosts idmclient.idm.example.com Rule name: run_third-party-app_report Enabled: TRUE Hosts: idmclient.idm.example.com Sudo Allow Commands: /opt/third-party-app/bin/report RunAs External User: thirdpartyapp ------------------------- Number of members added 1 -------------------------", "ipa sudorule-add-user run_third-party-app_report --users idm_user Rule name: run_third-party-app_report Enabled: TRUE Users: idm_user Hosts: idmclient.idm.example.com Sudo Allow Commands: /opt/third-party-app/bin/report RunAs External User: thirdpartyapp ------------------------- Number of members added 1", "[idm_user@idmclient ~]USD sudo -l Matching Defaults entries for [email protected] on idmclient: !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep=\"COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS\", env_keep+=\"MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE\", env_keep+=\"LC_COLLATE LC_IDENTIFICATION 
LC_MEASUREMENT LC_MESSAGES\", env_keep+=\"LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE\", env_keep+=\"LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY KRB5CCNAME\", secure_path=/sbin\\:/bin\\:/usr/sbin\\:/usr/bin User [email protected] may run the following commands on idmclient: (thirdpartyapp) /opt/third-party-app/bin/report", "[idm_user@idmclient ~]USD sudo -u thirdpartyapp /opt/third-party-app/bin/report [sudo] password for [email protected]: Executing report Report successful.", "[idm_user@idmclient ~]USD sudo -l Matching Defaults entries for [email protected] on idmclient: !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep=\"COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS\", env_keep+=\"MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE\", env_keep+=\"LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES\", env_keep+=\"LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE\", env_keep+=\"LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY KRB5CCNAME\", secure_path=/sbin\\:/bin\\:/usr/sbin\\:/usr/bin User [email protected] may run the following commands on idmclient: (thirdpartyapp) /opt/third-party-app/bin/report", "[idm_user@idmclient ~]USD sudo -u thirdpartyapp /opt/third-party-app/bin/report [sudo] password for [email protected]: Executing report Report successful.", "[domain/ <domain_name> ] pam_gssapi_services = sudo, sudo-i", "systemctl restart sssd", "authselect current Profile ID: sssd", "authselect enable-feature with-gssapi", "authselect select sssd with-gssapi", "#%PAM-1.0 auth sufficient pam_sss_gss.so auth include system-auth account include system-auth password include system-auth session include system-auth", "ssh -l [email protected] localhost [email protected]'s password:", "[idmuser@idmclient ~]USD klist Ticket cache: KCM:1366201107 Default principal: [email protected] Valid starting Expires Service principal 01/08/2021 09:11:48 01/08/2021 19:11:48 krbtgt/[email protected] renew until 01/15/2021 09:11:44", "[idm_user@idmclient ~]USD kdestroy -A [idm_user@idmclient ~]USD kinit [email protected] Password for [email protected] :", "[idm_user@idmclient ~]USD sudo /usr/sbin/reboot", "[domain/ <domain_name> ] pam_gssapi_services = sudo, sudo-i pam_gssapi_indicators_map = sudo:pkinit, sudo-i:pkinit", "systemctl restart sssd", "authselect current Profile ID: sssd", "authselect select sssd", "authselect enable-feature with-gssapi", "authselect with-smartcard-required", "#%PAM-1.0 auth sufficient pam_sss_gss.so auth include system-auth account include system-auth password include system-auth session include system-auth", "#%PAM-1.0 auth sufficient pam_sss_gss.so auth include sudo account include sudo password include sudo session optional pam_keyinit.so force revoke session include sudo", "ssh -l [email protected] localhost PIN for smart_card", "[idm_user@idmclient ~]USD klist Ticket cache: KEYRING:persistent:1358900015:krb_cache_TObtNMd Default principal: [email protected] Valid starting Expires Service principal 02/15/2021 16:29:48 02/16/2021 02:29:48 krbtgt/[email protected] renew until 02/22/2021 16:29:44", "[idm_user@idmclient ~]USD sudo -l Matching Defaults entries for idmuser on idmclient : !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep=\"COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS\", env_keep+=\"MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE\", env_keep+=\"LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES\", env_keep+=\"LC_MONETARY 
LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE\", env_keep+=\"LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY KRB5CCNAME\", secure_path=/sbin\\:/bin\\:/usr/sbin\\:/usr/bin User idm_user may run the following commands on idmclient : (root) /usr/sbin/reboot", "[idm_user@idmclient ~]USD sudo /usr/sbin/reboot", "[pam] pam_gssapi_services = sudo , sudo-i pam_gssapi_indicators_map = sudo:otp pam_gssapi_check_upn = true", "[domain/ idm.example.com ] pam_gssapi_services = sudo, sudo-i pam_gssapi_indicators_map = sudo:pkinit , sudo-i:otp pam_gssapi_check_upn = true [domain/ ad.example.com ] pam_gssapi_services = sudo pam_gssapi_check_upn = false", "Server not found in Kerberos database", "[idm-user@idm-client ~]USD cat /etc/krb5.conf [domain_realm] .example.com = EXAMPLE.COM example.com = EXAMPLE.COM server.example.com = EXAMPLE.COM", "No Kerberos credentials available", "[idm-user@idm-client ~]USD kinit [email protected] Password for [email protected] :", "User with UPN [ <UPN> ] was not found. UPN [ <UPN> ] does not match target user [ <username> ].", "[idm-user@idm-client ~]USD cat /etc/sssd/sssd.conf pam_gssapi_check_upn = false", "cat /etc/pam.d/sudo #%PAM-1.0 auth sufficient pam_sss_gss.so debug auth include system-auth account include system-auth password include system-auth session include system-auth", "cat /etc/pam.d/sudo-i #%PAM-1.0 auth sufficient pam_sss_gss.so debug auth include sudo account include sudo password include sudo session optional pam_keyinit.so force revoke session include sudo", "[idm-user@idm-client ~]USD sudo ls -l /etc/sssd/sssd.conf pam_sss_gss: Initializing GSSAPI authentication with SSSD pam_sss_gss: Switching euid from 0 to 1366201107 pam_sss_gss: Trying to establish security context pam_sss_gss: SSSD User name: [email protected] pam_sss_gss: User domain: idm.example.com pam_sss_gss: User principal: pam_sss_gss: Target name: [email protected] pam_sss_gss: Using ccache: KCM: pam_sss_gss: Acquiring credentials, principal name will be derived pam_sss_gss: Unable to read credentials from [KCM:] [maj:0xd0000, min:0x96c73ac3] pam_sss_gss: GSSAPI: Unspecified GSS failure. Minor code may provide more information pam_sss_gss: GSSAPI: No credentials cache found pam_sss_gss: Switching euid from 1366200907 to 0 pam_sss_gss: System error [5]: Input/output error", "[ipaservers] server.idm.example.com", "--- - name: Playbook to manage sudo command hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure sudo command is present - ipasudocmd: ipaadmin_password: \"{{ ipaadmin_password }}\" name: /usr/sbin/reboot state: present", "ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory /ensure-reboot-sudocmd-is-present.yml", "--- - name: Tests hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure a sudorule is present granting idm_user the permission to run /usr/sbin/reboot on idmclient - ipasudorule: ipaadmin_password: \"{{ ipaadmin_password }}\" name: idm_user_reboot description: A test sudo rule. allow_sudocmd: /usr/sbin/reboot host: idmclient.idm.example.com user: idm_user state: present", "ansible-playbook -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory /ensure-sudorule-for-idmuser-on-idmclient-is-present.yml", "sudo /usr/sbin/reboot [sudo] password for idm_user:" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/granting-sudo-access-to-an-idm-user-on-an-idm-client_configuring-and-managing-idm
Chapter 108. PasswordSource schema reference
Chapter 108. PasswordSource schema reference Used in: Password Property Property type Description secretKeyRef SecretKeySelector Selects a key of a Secret in the resource's namespace.
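To illustrate where PasswordSource appears in practice, the following is a hedged sketch of a KafkaUser that reads its SCRAM-SHA-512 password from a key in an existing Secret; the resource and Secret names are hypothetical:

```sh
# Apply a hypothetical KafkaUser whose password comes from the
# 'password' key of the Secret 'my-user-secret' via secretKeyRef.
oc apply -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: scram-sha-512
    password:
      valueFrom:
        secretKeyRef:
          name: my-user-secret
          key: password
EOF
```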
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-passwordsource-reference
Chapter 10. Creating quick start tutorials in the web console
Chapter 10. Creating quick start tutorials in the web console If you are creating quick start tutorials for the OpenShift Container Platform web console, follow these guidelines to maintain a consistent user experience across all quick starts. 10.1. Understanding quick starts A quick start is a guided tutorial with user tasks. In the web console, you can access quick starts under the Help menu. They are especially useful for getting oriented with an application, Operator, or other product offering. A quick start primarily consists of tasks and steps. Each task has multiple steps, and each quick start has multiple tasks. For example: Task 1 Step 1 Step 2 Step 3 Task 2 Step 1 Step 2 Step 3 Task 3 Step 1 Step 2 Step 3 10.2. Quick start user workflow When you interact with an existing quick start tutorial, this is the expected workflow experience: In the Administrator or Developer perspective, click the Help icon and select Quick Starts . Click a quick start card. In the panel that appears, click Start . Complete the on-screen instructions, then click Next . In the Check your work module that appears, answer the question to confirm that you successfully completed the task. If you select Yes , click Next to continue to the next task. If you select No , repeat the task instructions and check your work again. Repeat steps 1 through 6 above to complete the remaining tasks in the quick start. After completing the final task, click Close to close the quick start. 10.3. Quick start components A quick start consists of the following sections: Card : The catalog tile that provides the basic information of the quick start, including title, description, time commitment, and completion status Introduction : A brief overview of the goal and tasks of the quick start Task headings : Hyper-linked titles for each task in the quick start Check your work module : A module for a user to confirm that they completed a task successfully before advancing to the next task in the quick start Hints : An animation to help users identify specific areas of the product Next buttons and back buttons : Buttons for navigating the steps and modules within each task of a quick start Final screen buttons : Buttons for closing the quick start, going back to tasks within the quick start, and viewing all quick starts The main content area of a quick start includes the following sections: Card copy Introduction Task steps Modals and in-app messaging Check your work module 10.4. Contributing quick starts OpenShift Container Platform introduces the quick start custom resource, which is defined by a ConsoleQuickStart object. Operators and administrators can use this resource to contribute quick starts to the cluster. Prerequisites You must have cluster administrator privileges. Procedure To create a new quick start, run: $ oc get -o yaml consolequickstart spring-with-s2i > my-quick-start.yaml Run: $ oc create -f my-quick-start.yaml Update the YAML file using the guidance outlined in this documentation. Save your edits. 10.4.1. Viewing the quick start API documentation Procedure To see the quick start API documentation, run: $ oc explain consolequickstarts Run oc explain -h for more information about oc explain usage. 10.4.2. Mapping the elements in the quick start to the quick start CR This section helps you visually map parts of the quick start custom resource (CR) with where they appear in the quick start within the web console. 10.4.2.1. conclusion element Viewing the conclusion element in the YAML file ... summary: failed: Try the steps again.
success: Your Spring application is running. title: Run the Spring application conclusion: >- Your Spring application is deployed and ready. 1 1 conclusion text Viewing the conclusion element in the web console The conclusion appears in the last section of the quick start. 10.4.2.2. description element Viewing the description element in the YAML file apiVersion: console.openshift.io/v1 kind: ConsoleQuickStart metadata: name: spring-with-s2i spec: description: 'Import a Spring Application from git, build, and deploy it onto OpenShift.' 1 ... 1 description text Viewing the description element in the web console The description appears on the introductory tile of the quick start on the Quick Starts page. 10.4.2.3. displayName element Viewing the displayName element in the YAML file apiVersion: console.openshift.io/v1 kind: ConsoleQuickStart metadata: name: spring-with-s2i spec: description: 'Import a Spring Application from git, build, and deploy it onto OpenShift.' displayName: Get started with Spring 1 durationMinutes: 10 1 displayName text. Viewing the displayName element in the web console The display name appears on the introductory tile of the quick start on the Quick Starts page. 10.4.2.4. durationMinutes element Viewing the durationMinutes element in the YAML file apiVersion: console.openshift.io/v1 kind: ConsoleQuickStart metadata: name: spring-with-s2i spec: description: 'Import a Spring Application from git, build, and deploy it onto OpenShift.' displayName: Get started with Spring durationMinutes: 10 1 1 durationMinutes value, in minutes. This value defines how long the quick start should take to complete. Viewing the durationMinutes element in the web console The duration minutes element appears on the introductory tile of the quick start on the Quick Starts page. 10.4.2.5. icon element Viewing the icon element in the YAML file ... spec: description: 'Import a Spring Application from git, build, and deploy it onto OpenShift.' 
displayName: Get started with Spring durationMinutes: 10 icon: >- 1 data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIGlkPSJMYXllcl8xIiBkYXRhLW5hbWU9IkxheWVyIDEiIHZpZXdCb3g9IjAgMCAxMDI0IDEwMjQiPjxkZWZzPjxzdHlsZT4uY2xzLTF7ZmlsbDojMTUzZDNjO30uY2xzLTJ7ZmlsbDojZDhkYTlkO30uY2xzLTN7ZmlsbDojNThjMGE4O30uY2xzLTR7ZmlsbDojZmZmO30uY2xzLTV7ZmlsbDojM2Q5MTkxO308L3N0eWxlPjwvZGVmcz48dGl0bGU+c25vd2Ryb3BfaWNvbl9yZ2JfZGVmYXVsdDwvdGl0bGU+PHBhdGggY2xhc3M9ImNscy0xIiBkPSJNMTAxMi42OSw1OTNjLTExLjEyLTM4LjA3LTMxLTczLTU5LjIxLTEwMy44LTkuNS0xMS4zLTIzLjIxLTI4LjI5LTM5LjA2LTQ3Ljk0QzgzMy41MywzNDEsNzQ1LjM3LDIzNC4xOCw2NzQsMTY4Ljk0Yy01LTUuMjYtMTAuMjYtMTAuMzEtMTUuNjUtMTUuMDdhMjQ2LjQ5LDI0Ni40OSwwLDAsMC0zNi41NS0yNi44LDE4Mi41LDE4Mi41LDAsMCwwLTIwLjMtMTEuNzcsMjAxLjUzLDIwMS41MywwLDAsMC00My4xOS0xNUExNTUuMjQsMTU1LjI0LDAsMCwwLDUyOCw5NS4yYy02Ljc2LS42OC0xMS43NC0uODEtMTQuMzktLjgxaDBsLTEuNjIsMC0xLjYyLDBhMTc3LjMsMTc3LjMsMCwwLDAtMzEuNzcsMy4zNSwyMDguMjMsMjA4LjIzLDAsMCwwLTU2LjEyLDE3LjU2LDE4MSwxODEsMCwwLDAtMjAuMjcsMTEuNzUsMjQ3LjQzLDI0Ny40MywwLDAsMC0zNi41NywyNi44MUMzNjAuMjUsMTU4LjYyLDM1NSwxNjMuNjgsMzUwLDE2OWMtNzEuMzUsNjUuMjUtMTU5LjUsMTcyLTI0MC4zOSwyNzIuMjhDOTMuNzMsNDYwLjg4LDgwLDQ3Ny44Nyw3MC41Miw0ODkuMTcsNDIuMzUsNTIwLDIyLjQzLDU1NC45LDExLjMxLDU5MywuNzIsNjI5LjIyLTEuNzMsNjY3LjY5LDQsNzA3LjMxLDE1LDc4Mi40OSw1NS43OCw4NTkuMTIsMTE4LjkzLDkyMy4wOWEyMiwyMiwwLDAsMCwxNS41OSw2LjUyaDEuODNsMS44Ny0uMzJjODEuMDYtMTMuOTEsMTEwLTc5LjU3LDE0My40OC0xNTUuNiwzLjkxLTguODgsNy45NS0xOC4wNSwxMi4yLTI3LjQzcTUuNDIsOC41NCwxMS4zOSwxNi4yM2MzMS44NSw0MC45MSw3NS4xMiw2NC42NywxMzIuMzIsNzIuNjNsMTguOCwyLjYyLDQuOTUtMTguMzNjMTMuMjYtNDkuMDcsMzUuMy05MC44NSw1MC42NC0xMTYuMTksMTUuMzQsMjUuMzQsMzcuMzgsNjcuMTIsNTAuNjQsMTE2LjE5bDUsMTguMzMsMTguOC0yLjYyYzU3LjItOCwxMDAuNDctMzEuNzIsMTMyLjMyLTcyLjYzcTYtNy42OCwxMS4zOS0xNi4yM2M0LjI1LDkuMzgsOC4yOSwxOC41NSwxMi4yLDI3LjQzLDMzLjQ5LDc2LDYyLjQyLDE0MS42OSwxNDMuNDgsMTU1LjZsMS44MS4zMWgxLjg5YTIyLDIyLDAsMCwwLDE1LjU5LTYuNTJjNjMuMTUtNjQsMTAzLjk1LTE0MC42LDExNC44OS0yMTUuNzhDMTAyNS43Myw2NjcuNjksMTAyMy4yOCw2MjkuMjIsMTAxMi42OSw1OTNaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNMzY0LjE1LDE4NS4yM2MxNy44OS0xNi40LDM0LjctMzAuMTUsNDkuNzctNDAuMTFhMjEyLDIxMiwwLDAsMSw2NS45My0yNS43M0ExOTgsMTk4LDAsMCwxLDUxMiwxMTYuMjdhMTk2LjExLDE5Ni4xMSwwLDAsMSwzMiwzLjFjNC41LjkxLDkuMzYsMi4wNiwxNC41MywzLjUyLDYwLjQxLDIwLjQ4LDg0LjkyLDkxLjA1LTQ3LjQ0LDI0OC4wNi0yOC43NSwzNC4xMi0xNDAuNywxOTQuODQtMTg0LjY2LDI2OC40MmE2MzAuODYsNjMwLjg2LDAsMCwwLTMzLjIyLDU4LjMyQzI3Niw2NTUuMzQsMjY1LjQsNTk4LDI2NS40LDUyMC4yOSwyNjUuNCwzNDAuNjEsMzExLjY5LDI0MC43NCwzNjQuMTUsMTg1LjIzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTUyNy41NCwzODQuODNjODQuMDYtOTkuNywxMTYuMDYtMTc3LjI4LDk1LjIyLTIzMC43NCwxMS42Miw4LjY5LDI0LDE5LjIsMzcuMDYsMzEuMTMsNTIuNDgsNTUuNSw5OC43OCwxNTUuMzgsOTguNzgsMzM1LjA3LDAsNzcuNzEtMTAuNiwxMzUuMDUtMjcuNzcsMTc3LjRhNjI4LjczLDYyOC43MywwLDAsMC0zMy4yMy01OC4zMmMtMzktNjUuMjYtMTMxLjQ1LTE5OS0xNzEuOTMtMjUyLjI3QzUyNi4zMywzODYuMjksNTI3LDM4NS41Miw1MjcuNTQsMzg0LjgzWiIvPjxwYXRoIGNsYXNzPSJjbHMtNCIgZD0iTTEzNC41OCw5MDguMDdoLS4wNmEuMzkuMzksMCwwLDEtLjI3LS4xMWMtMTE5LjUyLTEyMS4wNy0xNTUtMjg3LjQtNDcuNTQtNDA0LjU4LDM0LjYzLTQxLjE0LDEyMC0xNTEuNiwyMDIuNzUtMjQyLjE5LTMuMTMsNy02LjEyLDE0LjI1LTguOTIsMjEuNjktMjQuMzQsNjQuNDUtMzYuNjcsMTQ0LjMyLTM2LjY3LDIzNy40MSwwLDU2LjUzLDUuNTgsMTA2LDE2LjU5LDE0Ny4xNEEzMDcuNDksMzA3LjQ5LDAsMCwwLDI4MC45MSw3MjNDMjM3LDgxNi44OCwyMTYuOTMsODkzLjkzLDEzNC41OCw5MDguMDdaIi8+PHBhdGggY2xhc3M9ImNscy01IiBkPSJNNTgzLjQzLDgxMy43OUM1NjAuMTgsNzI3LjcyLDUxMiw2NjQuMTUsNTEyLDY2NC4xNXMtNDguMTcsNjMuNTctNzEuNDMsMTQ5LjY0Yy00OC40NS02Ljc0LTEwMC45MS0yNy41Mi0xMzUuNjYtOTEuMThhNjQ1LjY4LDY0NS42OCwwLDAsMSwzOS41Ny03MS41NGwuMjEtLjMyLjE5LS4zM2MzOC02My42MywxMjYuNC0xO
TEuMzcsMTY3LjEyLTI0NS42Niw0MC43MSw1NC4yOCwxMjkuMSwxODIsMTY3LjEyLDI0NS42NmwuMTkuMzMuMjEuMzJhNjQ1LjY4LDY0NS42OCwwLDAsMSwzOS41Nyw3MS41NEM2ODQuMzQsNzg2LjI3LDYzMS44OCw4MDcuMDUsNTgzLjQzLDgxMy43OVoiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik04ODkuNzUsOTA4YS4zOS4zOSwwLDAsMS0uMjcuMTFoLS4wNkM4MDcuMDcsODkzLjkzLDc4Nyw4MTYuODgsNzQzLjA5LDcyM2EzMDcuNDksMzA3LjQ5LDAsMCwwLDIwLjQ1LTU1LjU0YzExLTQxLjExLDE2LjU5LTkwLjYxLDE2LjU5LTE0Ny4xNCwwLTkzLjA4LTEyLjMzLTE3My0zNi42Ni0yMzcuNHEtNC4yMi0xMS4xNi04LjkzLTIxLjdjODIuNzUsOTAuNTksMTY4LjEyLDIwMS4wNSwyMDIuNzUsMjQyLjE5QzEwNDQuNzksNjIwLjU2LDEwMDkuMjcsNzg2Ljg5LDg4OS43NSw5MDhaIi8+PC9zdmc+Cg== ... 1 The icon defined as a base64 value. Viewing the icon element in the web console The icon appears on the introductory tile of the quick start on the Quick Starts page. 10.4.2.6. introduction element Viewing the introduction element in the YAML file ... introduction: >- 1 **Spring** is a Java framework for building applications based on a distributed microservices architecture. - Spring enables easy packaging and configuration of Spring applications into a self-contained executable application which can be easily deployed as a container to OpenShift. - Spring applications can integrate OpenShift capabilities to provide a natural "Spring on OpenShift" developer experience for both existing and net-new Spring applications. For example: - Externalized configuration using Kubernetes ConfigMaps and integration with Spring Cloud Kubernetes - Service discovery using Kubernetes Services - Load balancing with Replication Controllers - Kubernetes health probes and integration with Spring Actuator - Metrics: Prometheus, Grafana, and integration with Spring Cloud Sleuth - Distributed tracing with Istio & Jaeger tracing - Developer tooling through Red Hat OpenShift and Red Hat CodeReady developer tooling to quickly scaffold new Spring projects, gain access to familiar Spring APIs in your favorite IDE, and deploy to Red Hat OpenShift ... 1 The introduction introduces the quick start and lists the tasks within it. Viewing the introduction element in the web console After clicking a quick start card, a side panel slides in that introduces the quick start and lists the tasks within it. 10.4.3. Adding a custom icon to a quick start A default icon is provided for all quick starts. You can provide your own custom icon. Procedure Find the .svg file that you want to use as your custom icon. Use an online tool to convert the file to base64 . In the YAML file, add icon: >- , then, on the following line, include data:image/svg+xml;base64 followed by the output from the base64 conversion. For example: icon: >- data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHJvbGU9ImltZyIgdmlld. 10.4.4. Limiting access to a quick start Not all quick starts should be available for everyone. The accessReviewResources section of the YAML file provides the ability to limit access to the quick start. To only allow the user to access the quick start if they have the ability to create HelmChartRepository resources, use the following configuration: accessReviewResources: - group: helm.openshift.io resource: helmchartrepositories verb: create To only allow the user to access the quick start if they have the ability to list Operator groups and package manifests, and thus the ability to install Operators, use the following configuration: accessReviewResources: - group: operators.coreos.com resource: operatorgroups verb: list - group: packages.operators.coreos.com resource: packagemanifests verb: list 10.4.5.
Linking to other quick starts Procedure In the nextQuickStart section of the YAML file, provide the name , not the displayName , of the quick start to which you want to link. For example: nextQuickStart: - add-healthchecks 10.4.6. Supported tags for quick starts Write your quick start content in markdown using these tags. The markdown is converted to HTML. Tag Description b Defines bold text. img Embeds an image. i Defines italic text. strike Defines strike-through text. s Defines smaller text. del Defines deleted text. em Defines emphasized text. strong Defines important text. a Defines an anchor tag. p Defines paragraph text. h1 Defines a level 1 heading. h2 Defines a level 2 heading. h3 Defines a level 3 heading. h4 Defines a level 4 heading. ul Defines an unordered list. ol Defines an ordered list. li Defines a list item. code Defines text as code. pre Defines a block of preformatted text. button Defines a button in text. 10.4.7. Quick start highlighting markdown reference The highlighting, or hint, feature enables quick starts to contain a link that can highlight and animate a component of the web console. The markdown syntax contains: Bracketed link text The highlight keyword, followed by the ID of the element that you want to animate 10.4.7.1. Perspective switcher [Perspective switcher]{{highlight qs-perspective-switcher}} 10.4.7.2. Administrator perspective navigation links [Home]{{highlight qs-nav-home}} [Operators]{{highlight qs-nav-operators}} [Workloads]{{highlight qs-nav-workloads}} [Serverless]{{highlight qs-nav-serverless}} [Networking]{{highlight qs-nav-networking}} [Storage]{{highlight qs-nav-storage}} [Service catalog]{{highlight qs-nav-servicecatalog}} [Compute]{{highlight qs-nav-compute}} [User management]{{highlight qs-nav-usermanagement}} [Administration]{{highlight qs-nav-administration}} 10.4.7.3. Developer perspective navigation links [Add]{{highlight qs-nav-add}} [Topology]{{highlight qs-nav-topology}} [Search]{{highlight qs-nav-search}} [Project]{{highlight qs-nav-project}} [Helm]{{highlight qs-nav-helm}} 10.4.7.4. Common navigation links [Builds]{{highlight qs-nav-builds}} [Pipelines]{{highlight qs-nav-pipelines}} [Monitoring]{{highlight qs-nav-monitoring}} 10.4.7.5. Masthead links [CloudShell]{{highlight qs-masthead-cloudshell}} [Utility Menu]{{highlight qs-masthead-utilitymenu}} [User Menu]{{highlight qs-masthead-usermenu}} [Applications]{{highlight qs-masthead-applications}} [Import]{{highlight qs-masthead-import}} [Help]{{highlight qs-masthead-help}} [Notifications]{{highlight qs-masthead-notifications}} 10.4.8. Code snippet markdown reference You can execute a CLI code snippet when it is included in a quick start from the web console. To use this feature, you must first install the Web Terminal Operator. The web terminal and code snippet actions that execute in the web terminal are not present if you do not install the Web Terminal Operator. Alternatively, you can copy a code snippet to the clipboard regardless of whether you have the Web Terminal Operator installed. 10.4.8.1. Syntax for inline code snippets Note If the execute syntax is used, the Copy to clipboard action is present whether you have the Web Terminal Operator installed or not. 10.4.8.2. Syntax for multi-line code snippets 10.5. Quick start content guidelines 10.5.1. Card copy You can customize the title and description on a quick start card, but you cannot customize the status. Keep your description to one to two sentences.
Start with a verb and communicate the goal of the user. Correct example: 10.5.2. Introduction After clicking a quick start card, a side panel slides in that introduces the quick start and lists the tasks within it. Make your introduction content clear, concise, informative, and friendly. State the outcome of the quick start. A user should understand the purpose of the quick start before they begin. Give action to the user, not the quick start. Correct example : Incorrect example : The introduction should be a maximum of four to five sentences, depending on the complexity of the feature. A long introduction can overwhelm the user. List the quick start tasks after the introduction content, and start each task with a verb. Do not specify the number of tasks because the copy would need to be updated every time a task is added or removed. Correct example : Incorrect example : 10.5.3. Task steps After the user clicks Start , a series of steps appears that they must perform to complete the quick start. Follow these general guidelines when writing task steps: Use "Click" for buttons and labels. Use "Select" for checkboxes, radio buttons, and drop-down menus. Use "Click" instead of "Click on". Correct example : Incorrect example : Tell users how to navigate between Administrator and Developer perspectives. Even if you think a user might already be in the appropriate perspective, give them instructions on how to get there so that they are definitely where they need to be. Examples: Use the "Location, action" structure. Tell a user where to go before telling them what to do. Correct example : Incorrect example : Keep your product terminology capitalization consistent. If you must specify a menu type or list as a dropdown, write "dropdown" as one word without a hyphen. Clearly distinguish between a user action and additional information on product functionality. User action : Additional information : Avoid directional language, like "In the top-right corner, click the icon". Directional language becomes outdated every time UI layouts change. Also, a direction for desktop users might not be accurate for users with a different screen size. Instead, identify something using its name. Correct example : Incorrect example : Do not identify items by color alone, like "Click the gray circle". Color identifiers are not useful for sight-limited users, especially colorblind users. Instead, identify an item using its name or copy, like button copy. Correct example : Incorrect example : Use the second-person point of view, you, consistently: Correct example : Incorrect example : 10.5.4. Check your work module After a user completes a task, a Check your work module appears. This module prompts the user to answer a yes or no question about the task results, which gives them the opportunity to review their work. For this module, you only need to write a single yes or no question. If the user answers Yes , a check mark appears. If the user answers No , an error message appears with a link to relevant documentation, if necessary. The user then has the opportunity to go back and try again. 10.5.5. Formatting UI elements Format UI elements using these guidelines: Copy for buttons, dropdowns, tabs, fields, and other UI controls: Write the copy as it appears in the UI and bold it. All other UI elements, including page, window, and panel names: Write the copy as it appears in the UI and bold it. Code or user-entered text: Use monospaced font.
Hints: If a hint to a navigation or masthead element is included, style the text as you would a link. CLI commands: Use monospaced font. In running text, use a bold, monospaced font for a command. If a parameter or option is a variable value, use an italic monospaced font. Use a bold, monospaced font for the parameter and a monospaced font for the option. 10.6. Additional resources For voice and tone requirements, refer to PatternFly's brand voice and tone guidelines . For other UX content guidance, refer to all areas of PatternFly's UX writing style guide .
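Putting these elements together, the following is a minimal sketch of a complete ConsoleQuickStart object. The metadata name, all copy strings, and the single example task are hypothetical, and the exact layout of the tasks fields should be verified against the output of oc explain consolequickstarts.spec.tasks on your cluster:

apiVersion: console.openshift.io/v1
kind: ConsoleQuickStart
metadata:
  name: my-quick-start
spec:
  displayName: Get started with my application
  durationMinutes: 10
  description: 'Deploy a sample application and verify that it is running.'
  introduction: >-
    This quick start shows you how to deploy a sample application to your cluster.
  tasks:
    - title: Create the sample application
      description: Follow the on-screen steps to create the application.
      review:
        instructions: Is the application listed on the Topology page?
        failedTaskHelp: This task is not verified yet. Try the task again.
      summary:
        success: You created the sample application.
        failed: Try the steps again.
  conclusion: Your sample application is deployed and ready.
  nextQuickStart:
    - add-healthchecks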
[ "oc get -o yaml consolequickstart spring-with-s2i > my-quick-start.yaml", "oc create -f my-quick-start.yaml", "oc explain consolequickstarts", "summary: failed: Try the steps again. success: Your Spring application is running. title: Run the Spring application conclusion: >- Your Spring application is deployed and ready. 1", "apiVersion: console.openshift.io/v1 kind: ConsoleQuickStart metadata: name: spring-with-s2i spec: description: 'Import a Spring Application from git, build, and deploy it onto OpenShift.' 1", "apiVersion: console.openshift.io/v1 kind: ConsoleQuickStart metadata: name: spring-with-s2i spec: description: 'Import a Spring Application from git, build, and deploy it onto OpenShift.' displayName: Get started with Spring 1 durationMinutes: 10", "apiVersion: console.openshift.io/v1 kind: ConsoleQuickStart metadata: name: spring-with-s2i spec: description: 'Import a Spring Application from git, build, and deploy it onto OpenShift.' displayName: Get started with Spring durationMinutes: 10 1", "spec: description: 'Import a Spring Application from git, build, and deploy it onto OpenShift.' displayName: Get started with Spring durationMinutes: 10 icon: >- 1 data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIGlkPSJMYXllcl8xIiBkYXRhLW5hbWU9IkxheWVyIDEiIHZpZXdCb3g9IjAgMCAxMDI0IDEwMjQiPjxkZWZzPjxzdHlsZT4uY2xzLTF7ZmlsbDojMTUzZDNjO30uY2xzLTJ7ZmlsbDojZDhkYTlkO30uY2xzLTN7ZmlsbDojNThjMGE4O30uY2xzLTR7ZmlsbDojZmZmO30uY2xzLTV7ZmlsbDojM2Q5MTkxO308L3N0eWxlPjwvZGVmcz48dGl0bGU+c25vd2Ryb3BfaWNvbl9yZ2JfZGVmYXVsdDwvdGl0bGU+PHBhdGggY2xhc3M9ImNscy0xIiBkPSJNMTAxMi42OSw1OTNjLTExLjEyLTM4LjA3LTMxLTczLTU5LjIxLTEwMy44LTkuNS0xMS4zLTIzLjIxLTI4LjI5LTM5LjA2LTQ3Ljk0QzgzMy41MywzNDEsNzQ1LjM3LDIzNC4xOCw2NzQsMTY4Ljk0Yy01LTUuMjYtMTAuMjYtMTAuMzEtMTUuNjUtMTUuMDdhMjQ2LjQ5LDI0Ni40OSwwLDAsMC0zNi41NS0yNi44LDE4Mi41LDE4Mi41LDAsMCwwLTIwLjMtMTEuNzcsMjAxLjUzLDIwMS41MywwLDAsMC00My4xOS0xNUExNTUuMjQsMTU1LjI0LDAsMCwwLDUyOCw5NS4yYy02Ljc2LS42OC0xMS43NC0uODEtMTQuMzktLjgxaDBsLTEuNjIsMC0xLjYyLDBhMTc3LjMsMTc3LjMsMCwwLDAtMzEuNzcsMy4zNSwyMDguMjMsMjA4LjIzLDAsMCwwLTU2LjEyLDE3LjU2LDE4MSwxODEsMCwwLDAtMjAuMjcsMTEuNzUsMjQ3LjQzLDI0Ny40MywwLDAsMC0zNi41NywyNi44MUMzNjAuMjUsMTU4LjYyLDM1NSwxNjMuNjgsMzUwLDE2OWMtNzEuMzUsNjUuMjUtMTU5LjUsMTcyLTI0MC4zOSwyNzIuMjhDOTMuNzMsNDYwLjg4LDgwLDQ3Ny44Nyw3MC41Miw0ODkuMTcsNDIuMzUsNTIwLDIyLjQzLDU1NC45LDExLjMxLDU5MywuNzIsNjI5LjIyLTEuNzMsNjY3LjY5LDQsNzA3LjMxLDE1LDc4Mi40OSw1NS43OCw4NTkuMTIsMTE4LjkzLDkyMy4wOWEyMiwyMiwwLDAsMCwxNS41OSw2LjUyaDEuODNsMS44Ny0uMzJjODEuMDYtMTMuOTEsMTEwLTc5LjU3LDE0My40OC0xNTUuNiwzLjkxLTguODgsNy45NS0xOC4wNSwxMi4yLTI3LjQzcTUuNDIsOC41NCwxMS4zOSwxNi4yM2MzMS44NSw0MC45MSw3NS4xMiw2NC42NywxMzIuMzIsNzIuNjNsMTguOCwyLjYyLDQuOTUtMTguMzNjMTMuMjYtNDkuMDcsMzUuMy05MC44NSw1MC42NC0xMTYuMTksMTUuMzQsMjUuMzQsMzcuMzgsNjcuMTIsNTAuNjQsMTE2LjE5bDUsMTguMzMsMTguOC0yLjYyYzU3LjItOCwxMDAuNDctMzEuNzIsMTMyLjMyLTcyLjYzcTYtNy42OCwxMS4zOS0xNi4yM2M0LjI1LDkuMzgsOC4yOSwxOC41NSwxMi4yLDI3LjQzLDMzLjQ5LDc2LDYyLjQyLDE0MS42OSwxNDMuNDgsMTU1LjZsMS44MS4zMWgxLjg5YTIyLDIyLDAsMCwwLDE1LjU5LTYuNTJjNjMuMTUtNjQsMTAzLjk1LTE0MC42LDExNC44OS0yMTUuNzhDMTAyNS43Myw2NjcuNjksMTAyMy4yOCw2MjkuMjIsMTAxMi42OSw1OTNaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNMzY0LjE1LDE4NS4yM2MxNy44OS0xNi40LDM0LjctMzAuMTUsNDkuNzctNDAuMTFhMjEyLDIxMiwwLDAsMSw2NS45My0yNS43M0ExOTgsMTk4LDAsMCwxLDUxMiwxMTYuMjdhMTk2LjExLDE5Ni4xMSwwLDAsMSwzMiwzLjFjNC41LjkxLDkuMzYsMi4wNiwxNC41MywzLjUyLDYwLjQxLDIwLjQ4LDg0LjkyLDkxLjA1LTQ3LjQ0LDI0OC4wNi0yOC43NSwzNC4xMi0xNDAuNywxOTQuODQtMTg0LjY2LDI2OC40MmE2MzAuODYsNjMwLjg2LDAsMCwwLTMzLjIyLDU4LjMyQzI3Niw2NTUuMzQsMjY1LjQsNTk4LDI2NS40LDUyMC4yOS
wyNjUuNCwzNDAuNjEsMzExLjY5LDI0MC43NCwzNjQuMTUsMTg1LjIzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTUyNy41NCwzODQuODNjODQuMDYtOTkuNywxMTYuMDYtMTc3LjI4LDk1LjIyLTIzMC43NCwxMS42Miw4LjY5LDI0LDE5LjIsMzcuMDYsMzEuMTMsNTIuNDgsNTUuNSw5OC43OCwxNTUuMzgsOTguNzgsMzM1LjA3LDAsNzcuNzEtMTAuNiwxMzUuMDUtMjcuNzcsMTc3LjRhNjI4LjczLDYyOC43MywwLDAsMC0zMy4yMy01OC4zMmMtMzktNjUuMjYtMTMxLjQ1LTE5OS0xNzEuOTMtMjUyLjI3QzUyNi4zMywzODYuMjksNTI3LDM4NS41Miw1MjcuNTQsMzg0LjgzWiIvPjxwYXRoIGNsYXNzPSJjbHMtNCIgZD0iTTEzNC41OCw5MDguMDdoLS4wNmEuMzkuMzksMCwwLDEtLjI3LS4xMWMtMTE5LjUyLTEyMS4wNy0xNTUtMjg3LjQtNDcuNTQtNDA0LjU4LDM0LjYzLTQxLjE0LDEyMC0xNTEuNiwyMDIuNzUtMjQyLjE5LTMuMTMsNy02LjEyLDE0LjI1LTguOTIsMjEuNjktMjQuMzQsNjQuNDUtMzYuNjcsMTQ0LjMyLTM2LjY3LDIzNy40MSwwLDU2LjUzLDUuNTgsMTA2LDE2LjU5LDE0Ny4xNEEzMDcuNDksMzA3LjQ5LDAsMCwwLDI4MC45MSw3MjNDMjM3LDgxNi44OCwyMTYuOTMsODkzLjkzLDEzNC41OCw5MDguMDdaIi8+PHBhdGggY2xhc3M9ImNscy01IiBkPSJNNTgzLjQzLDgxMy43OUM1NjAuMTgsNzI3LjcyLDUxMiw2NjQuMTUsNTEyLDY2NC4xNXMtNDguMTcsNjMuNTctNzEuNDMsMTQ5LjY0Yy00OC40NS02Ljc0LTEwMC45MS0yNy41Mi0xMzUuNjYtOTEuMThhNjQ1LjY4LDY0NS42OCwwLDAsMSwzOS41Ny03MS41NGwuMjEtLjMyLjE5LS4zM2MzOC02My42MywxMjYuNC0xOTEuMzcsMTY3LjEyLTI0NS42Niw0MC43MSw1NC4yOCwxMjkuMSwxODIsMTY3LjEyLDI0NS42NmwuMTkuMzMuMjEuMzJhNjQ1LjY4LDY0NS42OCwwLDAsMSwzOS41Nyw3MS41NEM2ODQuMzQsNzg2LjI3LDYzMS44OCw4MDcuMDUsNTgzLjQzLDgxMy43OVoiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik04ODkuNzUsOTA4YS4zOS4zOSwwLDAsMS0uMjcuMTFoLS4wNkM4MDcuMDcsODkzLjkzLDc4Nyw4MTYuODgsNzQzLjA5LDcyM2EzMDcuNDksMzA3LjQ5LDAsMCwwLDIwLjQ1LTU1LjU0YzExLTQxLjExLDE2LjU5LTkwLjYxLDE2LjU5LTE0Ny4xNCwwLTkzLjA4LTEyLjMzLTE3My0zNi42Ni0yMzcuNHEtNC4yMi0xMS4xNi04LjkzLTIxLjdjODIuNzUsOTAuNTksMTY4LjEyLDIwMS4wNSwyMDIuNzUsMjQyLjE5QzEwNDQuNzksNjIwLjU2LDEwMDkuMjcsNzg2Ljg5LDg4OS43NSw5MDhaIi8+PC9zdmc+Cg==", "introduction: >- 1 **Spring** is a Java framework for building applications based on a distributed microservices architecture. - Spring enables easy packaging and configuration of Spring applications into a self-contained executable application which can be easily deployed as a container to OpenShift. - Spring applications can integrate OpenShift capabilities to provide a natural \"Spring on OpenShift\" developer experience for both existing and net-new Spring applications. 
For example: - Externalized configuration using Kubernetes ConfigMaps and integration with Spring Cloud Kubernetes - Service discovery using Kubernetes Services - Load balancing with Replication Controllers - Kubernetes health probes and integration with Spring Actuator - Metrics: Prometheus, Grafana, and integration with Spring Cloud Sleuth - Distributed tracing with Istio & Jaeger tracing - Developer tooling through Red Hat OpenShift and Red Hat CodeReady developer tooling to quickly scaffold new Spring projects, gain access to familiar Spring APIs in your favorite IDE, and deploy to Red Hat OpenShift", "icon: >- data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHJvbGU9ImltZyIgdmlld.", "accessReviewResources: - group: helm.openshift.io resource: helmchartrepositories verb: create", "accessReviewResources: - group: operators.coreos.com resource: operatorgroups verb: list - group: packages.operators.coreos.com resource: packagemanifests verb: list", "nextQuickStart: - add-healthchecks", "[Perspective switcher]{{highlight qs-perspective-switcher}}", "[Home]{{highlight qs-nav-home}} [Operators]{{highlight qs-nav-operators}} [Workloads]{{highlight qs-nav-workloads}} [Serverless]{{highlight qs-nav-serverless}} [Networking]{{highlight qs-nav-networking}} [Storage]{{highlight qs-nav-storage}} [Service catalog]{{highlight qs-nav-servicecatalog}} [Compute]{{highlight qs-nav-compute}} [User management]{{highlight qs-nav-usermanagement}} [Administration]{{highlight qs-nav-administration}}", "[Add]{{highlight qs-nav-add}} [Topology]{{highlight qs-nav-topology}} [Search]{{highlight qs-nav-search}} [Project]{{highlight qs-nav-project}} [Helm]{{highlight qs-nav-helm}}", "[Builds]{{highlight qs-nav-builds}} [Pipelines]{{highlight qs-nav-pipelines}} [Monitoring]{{highlight qs-nav-monitoring}}", "[CloudShell]{{highlight qs-masthead-cloudshell}} [Utility Menu]{{highlight qs-masthead-utilitymenu}} [User Menu]{{highlight qs-masthead-usermenu}} [Applications]{{highlight qs-masthead-applications}} [Import]{{highlight qs-masthead-import}} [Help]{{highlight qs-masthead-help}} [Notifications]{{highlight qs-masthead-notifications}}", "`code block`{{copy}} `code block`{{execute}}", "``` multi line code block ```{{copy}} ``` multi line code block ```{{execute}}", "Create a serverless application.", "In this quick start, you will deploy a sample application to {product-title}.", "This quick start shows you how to deploy a sample application to {product-title}.", "Tasks to complete: Create a serverless application; Connect an event source; Force a new revision", "You will complete these 3 tasks: Creating a serverless application; Connecting an event source; Forcing a new revision", "Click OK.", "Click on the OK button.", "Enter the Developer perspective: In the main navigation, click the dropdown menu and select Developer. Enter the Administrator perspective: In the main navigation, click the dropdown menu and select Admin.", "In the node.js deployment, hover over the icon.", "Hover over the icon in the node.js deployment.", "Change the time range of the dashboard by clicking the dropdown menu and selecting time range.", "To look at data in a specific time frame, you can change the time range of the dashboard.", "In the navigation menu, click Settings.", "In the left-hand menu, click Settings.", "The success message indicates a connection.", "The message with a green icon indicates a connection.", "Set up your environment.", "Let's set up our environment." ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/web_console/creating-quick-start-tutorials
Chapter 34. Using Automount
Chapter 34. Using Automount Automount is a way to manage, organize, and access directories across multiple systems. Automount automatically mounts a directory whenever access to it is requested. This works exceptionally well within an IdM domain since it allows directories on clients within the domain to be shared easily. This is especially important with user home directories; see Section 11.1, "Setting up User Home Directories" . In IdM, automount works with the internal LDAP directory and, if configured, with DNS services. 34.1. About Automount and IdM Automount provides a coherent structure to the way that directories are organized. Every directory is called a mount point or a key . Multiple keys that are grouped together create a map , and maps are associated according to their physical or conceptual location . The base configuration file for automount is the auto.master file in the /etc directory. If necessary, there can be multiple auto.master configuration files in separate server locations. When the autofs utility is configured on a server and the server is a client in an IdM domain, all configuration information for automount is stored in the IdM directory. Rather than being kept in separate text files, the autofs configuration of maps, locations, and keys is stored as LDAP entries. For example, the default map file, auto.master , is stored as: Important Identity Management works with an existing autofs deployment but does not set up or configure autofs itself. Each new location is added as a container entry under cn=automount,dc=example,dc=com , and each map and each key are stored beneath that location. As with other IdM domain services, automount works with IdM natively. The automount configuration can be managed by IdM tools: The ipa automountlocation* commands for Locations , The ipa automountmap* commands for direct and indirect maps , The ipa automountkey* commands for keys . For automount to work within the IdM domain, the NFS server must be configured as an IdM client. Configuring NFS itself is covered in the Red Hat Enterprise Linux Storage Administration Guide .
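Because automount objects are managed with the ipa automountlocation*, automountmap*, and automountkey* command families, a typical setup can be sketched as follows. The location name, map name, mount point, and NFS export below are hypothetical examples, not values from this guide:

$ ipa automountlocation-add raleigh
$ ipa automountmap-add-indirect raleigh auto.share --mount=/share
$ ipa automountkey-add raleigh auto.share --key=docs --info="nfsserver.example.com:/exports/docs"

This creates a location, an indirect map mounted under /share, and a key, so that accessing /share/docs on a client assigned to that location mounts the NFS export.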
[ "dn: automountmapname=auto.master,cn=default,cn=automount,dc=example,dc=com objectClass: automountMap objectClass: top automountMapName: auto.master" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/automount
9.2. Virtual Network Interface Cards
9.2. Virtual Network Interface Cards 9.2.1. vNIC Profile Overview A Virtual Network Interface Card (vNIC) profile is a collection of settings that can be applied to individual virtual network interface cards in the Manager. A vNIC profile allows you to apply Network QoS profiles to a vNIC, enable or disable port mirroring, and add or remove custom properties. A vNIC profile also offers an added layer of administrative flexibility in that permission to use (consume) these profiles can be granted to specific users. In this way, you can control the quality of service that different users receive from a given network. 9.2.2. Creating or Editing a vNIC Profile Create or edit a Virtual Network Interface Card (vNIC) profile to regulate network bandwidth for users and groups. Note If you are enabling or disabling port mirroring, all virtual machines using the associated profile must be in a down state before editing. Creating or Editing a vNIC Profile Click Network Networks . Click the logical network's name to open the details view. Click the vNIC Profiles tab. Click New or Edit . Enter the Name and Description of the profile. Select the relevant Quality of Service policy from the QoS list. Select a Network Filter from the drop-down list to manage the traffic of network packets to and from virtual machines. For more information on network filters, see Applying network filtering in the Red Hat Enterprise Linux Virtualization Deployment and Administration Guide . Select the Passthrough check box to enable passthrough of the vNIC and allow direct device assignment of a virtual function. Enabling the passthrough property will disable QoS, network filtering, and port mirroring as these are not compatible. For more information on passthrough, see Section 9.2.4, "Enabling Passthrough on a vNIC Profile" . If Passthrough is selected, optionally deselect the Migratable check box to disable migration for vNICs using this profile. If you keep this check box selected, see Additional Prerequisites for Virtual Machines with SR-IOV-Enabled vNICs in the Virtual Machine Management Guide . Use the Port Mirroring and Allow all users to use this Profile check boxes to toggle these options. Select a custom property from the custom properties list, which displays Please select a key... by default. Use the + and - buttons to add or remove custom properties. Click OK . Apply this profile to users and groups to regulate their network bandwidth. If you edited a vNIC profile, you must either restart the virtual machine, or hot unplug and then hot plug the vNIC if the guest operating system supports vNIC hot plug and hot unplug. 9.2.3. Explanation of Settings in the VM Interface Profile Window Table 9.5. VM Interface Profile Window Field Name Description Network A drop-down list of the available networks to apply the vNIC profile to. Name The name of the vNIC profile. This must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores between 1 and 50 characters. Description The description of the vNIC profile. This field is recommended but not mandatory. QoS A drop-down list of the available Network Quality of Service policies to apply to the vNIC profile. QoS policies regulate inbound and outbound network traffic of the vNIC. Network Filter A drop-down list of the available network filters to apply to the vNIC profile. Network filters improve network security by filtering the type of packets that can be sent to and from virtual machines.
The default filter is vdsm-no-mac-spoofing , which is a combination of no-mac-spoofing and no-arp-mac-spoofing . For more information on the network filters provided by libvirt, see the Pre-existing network filters section of the Red Hat Enterprise Linux Virtualization Deployment and Administration Guide . Use <No Network Filter> for virtual machine VLANs and bonds. On trusted virtual machines, choosing not to use a network filter can improve performance. Note Red Hat no longer supports disabling filters by setting the EnableMACAntiSpoofingFilterRules parameter to false using the engine-config tool. Use the <No Network Filter> option instead. Passthrough A check box to toggle the passthrough property. Passthrough allows a vNIC to connect directly to a virtual function of a host NIC. The passthrough property cannot be edited if the vNIC profile is attached to a virtual machine. QoS, network filters, and port mirroring are disabled in the vNIC profile if passthrough is enabled. Migratable A check box to toggle whether or not vNICs using this profile can be migrated. Migration is enabled by default on regular vNIC profiles; the check box is selected and cannot be changed. When the Passthrough check box is selected, Migratable becomes available and can be deselected, if required, to disable migration of passthrough vNICs. Port Mirroring A check box to toggle port mirroring. Port mirroring copies layer 3 network traffic on the logical network to a virtual interface on a virtual machine. It is not selected by default. For further details, see Port Mirroring in the Technical Reference . Device Custom Properties A drop-down menu to select available custom properties to apply to the vNIC profile. Use the + and - buttons to add and remove properties respectively. Allow all users to use this Profile A check box to toggle the availability of the profile to all users in the environment. It is selected by default. 9.2.4. Enabling Passthrough on a vNIC Profile Note This is one in a series of topics that show how to set up and configure SR-IOV on Red Hat Virtualization. For more information, see Setting Up and Configuring SR-IOV . The passthrough property of a vNIC profile enables a vNIC to be directly connected to a virtual function (VF) of an SR-IOV-enabled NIC. The vNIC will then bypass the software network virtualization and connect directly to the VF for direct device assignment. The passthrough property cannot be enabled if the vNIC profile is already attached to a vNIC; this procedure creates a new profile to avoid this. If a vNIC profile has passthrough enabled, QoS, network filters, and port mirroring cannot be enabled on the same profile. For more information on SR-IOV, direct device assignment, and the hardware considerations for implementing these in Red Hat Virtualization, see Hardware Considerations for Implementing SR-IOV . Enabling Passthrough Click Network Networks . Click the logical network's name to open the details view. Click the vNIC Profiles tab to list all vNIC profiles for that logical network. Click New . Enter the Name and Description of the profile. Select the Passthrough check box. Optionally deselect the Migratable check box to disable migration for vNICs using this profile. If you keep this check box selected, see Additional Prerequisites for Virtual Machines with SR-IOV-Enabled vNICs in the Virtual Machine Management Guide . If necessary, select a custom property from the custom properties list, which displays Please select a key... by default.
Use the + and - buttons to add or remove custom properties. Click OK . The vNIC profile is now passthrough-capable. To use this profile to directly attach a virtual machine to a NIC or PCI VF, attach the logical network to the NIC and create a new PCI Passthrough vNIC on the desired virtual machine that uses the passthrough vNIC profile. For more information on these procedures respectively, see Section 9.4.2, "Editing Host Network Interfaces and Assigning Logical Networks to Hosts" , and Adding a New Network Interface in the Virtual Machine Management Guide . 9.2.5. Removing a vNIC Profile Remove a vNIC profile to delete it from your virtualized environment. Removing a vNIC Profile Click Network Networks . Click the logical network's name to open the details view. Click the vNIC Profiles tab to display available vNIC profiles. Select one or more profiles and click Remove . Click OK . 9.2.6. Assigning Security Groups to vNIC Profiles Note This feature is only available when OpenStack Networking (neutron) is added as an external network provider. Security groups cannot be created through the Red Hat Virtualization Manager. You must create security groups through OpenStack. For more information, see Project Security Management in the Red Hat OpenStack Platform Users and Identity Management Guide . You can assign security groups to the vNIC profile of networks that have been imported from an OpenStack Networking instance and that use the Open vSwitch plug-in. A security group is a collection of strictly enforced rules that allow you to filter inbound and outbound traffic over a network interface. The following procedure outlines how to attach a security group to a vNIC profile. Note A security group is identified using the ID of that security group as registered in the OpenStack Networking instance. You can find the IDs of security groups for a given tenant by running the following command on the system on which OpenStack Networking is installed: Assigning Security Groups to vNIC Profiles Click Network Networks . Click the logical network's name to open the details view. Click the vNIC Profiles tab. Click New , or select an existing vNIC profile and click Edit . From the custom properties drop-down list, select SecurityGroups . Leaving the custom property drop-down blank applies the default security settings, which permit all outbound traffic and intercommunication but deny all inbound traffic from outside of the default security group. Note that removing the SecurityGroups property later will not affect the applied security group. In the text field, enter the ID of the security group to attach to the vNIC profile. Click OK . You have attached a security group to the vNIC profile. All traffic through the logical network to which that profile is attached will be filtered in accordance with the rules defined for that security group. 9.2.7. User Permissions for vNIC Profiles Configure user permissions to assign users to certain vNIC profiles. Assign the VnicProfileUser role to a user to enable them to use the profile. Restrict users from certain profiles by removing their permission for that profile. User Permissions for vNIC Profiles Click Network vNIC Profile . Click the vNIC profile's name to open the details view. Click the Permissions tab to show the current user permissions for the profile. Click Add or Remove to change user permissions for the vNIC profile. In the Add Permissions to User window, click My Groups to display your user groups. 
You can use this option to grant permissions to other users in your groups. You have configured user permissions for a vNIC profile. 9.2.8. Configuring vNIC Profiles for UCS Integration Cisco's Unified Computing System (UCS) is used to manage data center aspects such as computing, networking, and storage resources. The vdsm-hook-vmfex-dev hook allows virtual machines to connect to Cisco's UCS-defined port profiles by configuring the vNIC profile. The UCS-defined port profiles contain the properties and settings used to configure virtual interfaces in UCS. The vdsm-hook-vmfex-dev hook is installed by default with VDSM. See Appendix A, VDSM and Hooks for more information. When a virtual machine that uses the vNIC profile is created, it will use the Cisco vNIC. The procedure to configure the vNIC profile for UCS integration involves first configuring a custom device property. When configuring the custom device property, any existing value it contained is overwritten. When combining new and existing custom properties, include all of the custom properties in the command used to set the key's value. Multiple custom properties are separated by a semicolon. Note A UCS port profile must be configured in Cisco UCS before configuring the vNIC profile. Configuring the Custom Device Property On the Red Hat Virtualization Manager, configure the vmfex custom property and set the cluster compatibility level using --cver . Verify that the vmfex custom device property was added. Restart the ovirt-engine service. The vNIC profile to configure can belong to a new or existing logical network. See Section 9.1.2, "Creating a New Logical Network in a Data Center or Cluster" for instructions to configure a new logical network. Configuring a vNIC Profile for UCS Integration Click Network Networks . Click the logical network's name to open the details view. Click the vNIC Profiles tab. Click New , or select a vNIC profile and click Edit . Enter the Name and Description of the profile. Select the vmfex custom property from the custom properties list and enter the UCS port profile name. Click OK .
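The note about semicolon separation matters when vmfex is not the only custom device property on the system. As a sketch, preserving a hypothetical existing property named speed alongside vmfex would look like this; the speed name and its regular expression are illustrative only, while the vmfex pattern is the one shown in the procedure above:

# engine-config overwrites CustomDeviceProperties, so list every property in one call
engine-config -s CustomDeviceProperties='{type=interface;prop={vmfex=^[a-zA-Z0-9_.-]{2,32}$;speed=^([0-9]{1,5})$}}' --cver=3.6
systemctl restart ovirt-engine.service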
[ "neutron security-group-list", "engine-config -s CustomDeviceProperties='{type=interface;prop={vmfex=^[a-zA-Z0-9_.-]{2,32}USD}}' --cver=3.6", "engine-config -g CustomDeviceProperties", "systemctl restart ovirt-engine.service" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/sect-virtual_network_interface_cards
Appendix A. collectd plug-ins
Appendix A. collectd plug-ins This section contains a complete list of collectd plug-ins and configurations for Red Hat OpenStack Platform 16.0. collectd-aggregation collectd::plugin::aggregation::aggregators collectd::plugin::aggregation::interval collectd-amqp1 collectd-apache collectd::plugin::apache::instances (ex.: { localhost ⇒ { url ⇒ http://localhost/mod_status?auto }}) collectd::plugin::apache::interval collectd-apcups collectd-battery collectd::plugin::battery::values_percentage collectd::plugin::battery::report_degraded collectd::plugin::battery::query_state_fs collectd::plugin::battery::interval collectd-ceph collectd::plugin::ceph::daemons collectd::plugin::ceph::longrunavglatency collectd::plugin::ceph::convertspecialmetrictypes collectd-cgroups collectd::plugin::cgroups::ignore_selected collectd::plugin::cgroups::interval collectd-conntrack None collectd-contextswitch collectd::plugin::contextswitch::interval collectd-cpu collectd::plugin::cpu::reportbystate collectd::plugin::cpu::reportbycpu collectd::plugin::cpu::valuespercentage collectd::plugin::cpu::reportnumcpu collectd::plugin::cpu::reportgueststate collectd::plugin::cpu::subtractgueststate collectd::plugin::cpu::interval collectd-cpufreq None collectd-cpusleep collectd-csv collectd::plugin::csv::datadir collectd::plugin::csv::storerates collectd::plugin::csv::interval collectd-df collectd::plugin::df::devices collectd::plugin::df::fstypes collectd::plugin::df::ignoreselected collectd::plugin::df::mountpoints collectd::plugin::df::reportbydevice collectd::plugin::df::reportinodes collectd::plugin::df::reportreserved collectd::plugin::df::valuesabsolute collectd::plugin::df::valuespercentage collectd::plugin::df::interval collectd-disk collectd::plugin::disk::disks collectd::plugin::disk::ignoreselected collectd::plugin::disk::udevnameattr collectd::plugin::disk::interval collectd-entropy collectd::plugin::entropy::interval collectd-ethstat collectd::plugin::ethstat::interfaces collectd::plugin::ethstat::maps collectd::plugin::ethstat::mappedonly collectd::plugin::ethstat::interval collectd-exec collectd::plugin::exec::commands collectd::plugin::exec::commands_defaults collectd::plugin::exec::globals collectd::plugin::exec::interval collectd-fhcount collectd::plugin::fhcount::valuesabsolute collectd::plugin::fhcount::valuespercentage collectd::plugin::fhcount::interval collectd-filecount collectd::plugin::filecount::directories collectd::plugin::filecount::interval collectd-fscache None collectd-hddtemp collectd::plugin::hddtemp::host collectd::plugin::hddtemp::port collectd::plugin::hddtemp::interval collectd-hugepages collectd::plugin::hugepages::report_per_node_hp collectd::plugin::hugepages::report_root_hp collectd::plugin::hugepages::values_pages collectd::plugin::hugepages::values_bytes collectd::plugin::hugepages::values_percentage collectd::plugin::hugepages::interval collectd-intel_rdt collectd-interface collectd::plugin::interface::interfaces collectd::plugin::interface::ignoreselected collectd::plugin::interface::reportinactive Collectd::plugin::interface::interval collectd-ipc None collectd-ipmi collectd::plugin::ipmi::ignore_selected collectd::plugin::ipmi::notify_sensor_add collectd::plugin::ipmi::notify_sensor_remove collectd::plugin::ipmi::notify_sensor_not_present collectd::plugin::ipmi::sensors collectd::plugin::ipmi::interval collectd-irq collectd::plugin::irq::irqs collectd::plugin::irq::ignoreselected collectd::plugin::irq::interval collectd-load collectd::plugin::load::report_relative 
collectd::plugin::load::interval collectd-logfile collectd::plugin::logfile::log_level collectd::plugin::logfile::log_file collectd::plugin::logfile::log_timestamp collectd::plugin::logfile::print_severity collectd::plugin::logfile::interval collectd-madwifi collectd-mbmon collectd-md collectd-memcached collectd::plugin::memcached::instances collectd::plugin::memcached::interval collectd-memory collectd::plugin::memory::valuesabsolute collectd::plugin::memory::valuespercentage collectd::plugin::memory::interval collectd-multimeter collectd-mysql collectd::plugin::mysql::interval collectd-netlink collectd::plugin::netlink::interfaces collectd::plugin::netlink::verboseinterfaces collectd::plugin::netlink::qdiscs collectd::plugin::netlink::classes collectd::plugin::netlink::filters collectd::plugin::netlink::ignoreselected collectd::plugin::netlink::interval collectd-network collectd::plugin::network::timetolive collectd::plugin::network::maxpacketsize collectd::plugin::network::forward collectd::plugin::network::reportstats collectd::plugin::network::listeners collectd::plugin::network::servers collectd::plugin::network::interval collectd-nfs collectd::plugin::nfs::interval collectd-ntpd collectd::plugin::ntpd::host collectd::plugin::ntpd::port collectd::plugin::ntpd::reverselookups collectd::plugin::ntpd::includeunitid collectd::plugin::ntpd::interval collectd-numa None collectd-olsrd collectd-openvpn collectd::plugin::openvpn::statusfile collectd::plugin::openvpn::improvednamingschema collectd::plugin::openvpn::collectcompression collectd::plugin::openvpn::collectindividualusers collectd::plugin::openvpn::collectusercount collectd::plugin::openvpn::interval collectd-ovs_events collectd::plugin::ovs_events::address collectd::plugin::ovs_events::dispatch collectd::plugin::ovs_events::interfaces collectd::plugin::ovs_events::send_notification collectd::plugin::ovs_events::port collectd::plugin::ovs_events::socket collectd-ovs_stats collectd::plugin::ovs_stats::address collectd::plugin::ovs_stats::bridges collectd::plugin::ovs_stats::port collectd::plugin::ovs_stats::socket collectd-ping collectd::plugin::ping::hosts collectd::plugin::ping::timeout collectd::plugin::ping::ttl collectd::plugin::ping::source_address collectd::plugin::ping::device collectd::plugin::ping::max_missed collectd::plugin::ping::size collectd::plugin::ping::interval collectd-powerdns collectd::plugin::powerdns::servers collectd::plugin::powerdns::recursors collectd::plugin::powerdns::local_socket collectd::plugin::powerdns::interval collectd-processes collectd::plugin::processes::processes collectd::plugin::processes::process_matches collectd::plugin::processes::collect_context_switch collectd::plugin::processes::collect_file_descriptor collectd::plugin::processes::collect_memory_maps collectd::plugin::processes::interval collectd-protocols collectd::plugin::protocols::ignoreselected collectd::plugin::protocols::values collectd-python collectd-serial collectd-smart collectd::plugin::smart::disks collectd::plugin::smart::ignoreselected collectd::plugin::smart::interval collectd-snmp_agent collectd-statsd collectd::plugin::statsd::host collectd::plugin::statsd::port collectd::plugin::statsd::deletecounters collectd::plugin::statsd::deletetimers collectd::plugin::statsd::deletegauges collectd::plugin::statsd::deletesets collectd::plugin::statsd::countersum collectd::plugin::statsd::timerpercentile collectd::plugin::statsd::timerlower
collectd::plugin::statsd::timerupper collectd::plugin::statsd::timersum collectd::plugin::statsd::timercount collectd::plugin::statsd::interval collectd-swap collectd::plugin::swap::reportbydevice collectd::plugin::swap::reportbytes collectd::plugin::swap::valuesabsolute collectd::plugin::swap::valuespercentage collectd::plugin::swap::reportio collectd::plugin::swap::interval collectd-syslog collectd::plugin::syslog::log_level collectd::plugin::syslog::notify_level collectd::plugin::syslog::interval collectd-table collectd::plugin::table::tables collectd::plugin::table::interval collectd-tail collectd::plugin::tail::files collectd::plugin::tail::interval collectd-tail_csv collectd::plugin::tail_csv::metrics collectd::plugin::tail_csv::files collectd-tcpconns collectd::plugin::tcpconns::localports collectd::plugin::tcpconns::remoteports collectd::plugin::tcpconns::listening collectd::plugin::tcpconns::allportssummary collectd::plugin::tcpconns::interval collectd-ted collectd-thermal collectd::plugin::thermal::devices collectd::plugin::thermal::ignoreselected collectd::plugin::thermal::interval collectd-threshold collectd::plugin::threshold::types collectd::plugin::threshold::plugins collectd::plugin::threshold::hosts collectd::plugin::threshold::interval collectd-turbostat collectd::plugin::turbostat::core_c_states collectd::plugin::turbostat::package_c_states collectd::plugin::turbostat::system_management_interrupt collectd::plugin::turbostat::digital_temperature_sensor collectd::plugin::turbostat::tcc_activation_temp collectd::plugin::turbostat::running_average_power_limit collectd::plugin::turbostat::logical_core_names collectd-unixsock collectd-uptime collectd::plugin::uptime::interval collectd-users collectd::plugin::users::interval collectd-uuid collectd::plugin::uuid::uuid_file collectd::plugin::uuid::interval collectd-virt collectd::plugin::virt::connection collectd::plugin::virt::refresh_interval collectd::plugin::virt::domain collectd::plugin::virt::block_device collectd::plugin::virt::interface_device collectd::plugin::virt::ignore_selected collectd::plugin::virt::hostname_format collectd::plugin::virt::interface_format collectd::plugin::virt::extra_stats collectd::plugin::virt::interval collectd-vmem collectd::plugin::vmem::verbose collectd::plugin::vmem::interval collectd-vserver collectd-wireless collectd-write_graphite collectd::plugin::write_graphite::carbons collectd::plugin::write_graphite::carbon_defaults collectd::plugin::write_graphite::globals collectd-write_kafka collectd::plugin::write_kafka::kafka_host collectd::plugin::write_kafka::kafka_port collectd::plugin::write_kafka::kafka_hosts collectd::plugin::write_kafka::topics collectd-write_log collectd::plugin::write_log::format collectd-zfs_arc None
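These parameters are Puppet hieradata keys. In a director-based deployment they are typically set through an environment file; the following is a minimal sketch using the ExtraConfig mechanism, where the chosen plug-ins and values are illustrative only and the CollectdExtraPlugins parameter assumes the collectd composable service is enabled in your deployment:

parameter_defaults:
  CollectdExtraPlugins:
    - disk
    - df
  ExtraConfig:
    collectd::plugin::disk::interval: 30
    collectd::plugin::df::fstypes:
      - xfs
    collectd::plugin::df::ignoreselected: false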
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/service_telemetry_framework_1.0/appe-stf-collectd-plugins
14.3. Attaching Interface Devices
14.3. Attaching Interface Devices The virsh attach-interface domain type source command can take the following options: --live - apply the change to the running domain --config - apply the change to the persistent configuration, to take effect on the next boot --current - apply the change according to the current domain state --persistent - behaves like --config for an offline domain, and like --live --config for a running domain. --target - indicates the target device in the guest virtual machine. --mac - use this to specify the MAC address of the network interface. --script - use this to specify a path to a script file handling a bridge instead of the default one. --model - use this to specify the model type. --inbound - controls the inbound bandwidth of the interface. Acceptable values are average , peak , and burst . --outbound - controls the outbound bandwidth of the interface. Acceptable values are average , peak , and burst . The type can be either network to indicate a libvirt virtual network, or bridge to indicate a host bridge device. source is the source of the device. To remove an attached interface, use the virsh detach-interface command.
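For example, attaching and then detaching a bridged virtio interface on a running guest might look like this; the domain name, bridge name, and MAC address are placeholders:

# Attach a virtio NIC on bridge br0 to the running guest and to its persistent configuration
virsh attach-interface guest1 bridge br0 --mac 52:54:00:4b:73:5f --model virtio --live --config

# Detach the same interface, identified by its MAC address
virsh detach-interface guest1 bridge --mac 52:54:00:4b:73:5f --live --config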
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-Managing_guest_virtual_machines_with_virsh-Attaching_interface_devices
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/external_load_balancing_for_the_overcloud/making-open-source-more-inclusive
Integrating the Red Hat Hybrid Cloud Console with third-party applications
Integrating the Red Hat Hybrid Cloud Console with third-party applications Red Hat Hybrid Cloud Console 1-latest Configuring integrations between third-party tools and the Red Hat Hybrid Cloud Console Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_hybrid_cloud_console/1-latest/html/integrating_the_red_hat_hybrid_cloud_console_with_third-party_applications/index
11.16. Recommended Configurations - Dispersed Volume
11.16. Recommended Configurations - Dispersed Volume This chapter describes the recommended configurations, examples, and illustrations for Dispersed and Distributed Dispersed volumes. For a Distributed Dispersed volume, there will be multiple sets of bricks (subvolumes) that store data with erasure coding. All the files are distributed over these sets of erasure coded subvolumes. In this scenario, even if a redundant number of bricks is lost from every dispersed subvolume, there is no data loss. For example, assume you have a Distributed Dispersed volume of configuration 2 x (4 + 2). Here, you have two sets of dispersed subvolumes where the data is erasure coded between 6 bricks with 2 bricks for redundancy. The files will be stored in one of these dispersed subvolumes. Therefore, even if we lose two bricks from each set, there is no data loss. Brick Configurations The following table lists the brick layout details of multiple server/disk configurations for dispersed and distributed dispersed volumes. Table 11.3. Brick Configurations for Dispersed and Distributed Dispersed Volumes

12 HDD Chassis
| Redundancy Level | Supported Configurations | Bricks per Server per Subvolume | Node Loss | Max brick failure count within a subvolume | Compatible Server Node count | Increment Size (no. of nodes) | Min number of sub-volumes | Total Spindles | Tolerated HDD Failure Percentage |
| 2 | 4 + 2 | 2 | 1 | 2 | 3 | 3 | 6 | 36 | 33.33% |
| 2 | 4 + 2 | 1 | 2 | 2 | 6 | 6 | 12 | 72 | 33.33% |
| 2 | 8 + 2 | 2 | 1 | 2 | 5 | 5 | 6 | 60 | 20.00% |
| 2 | 8 + 2 | 1 | 2 | 2 | 10 | 10 | 12 | 120 | 20.00% |
| 3 | 8 + 3 | 1-2 | 1 | 3 | 6 | 6 | 6 | 72 | 25.00% |
| 4 | 8 + 4 | 4 | 1 | 4 | 3 | 3 | 3 | 36 | 33.33% |
| 4 | 8 + 4 | 2 | 2 | 4 | 6 | 6 | 6 | 72 | 33.33% |
| 4 | 8 + 4 | 1 | 4 | 4 | 12 | 12 | 12 | 144 | 33.33% |
| 4 | 16 + 4 | 4 | 1 | 4 | 5 | 5 | 3 | 60 | 20.00% |
| 4 | 16 + 4 | 2 | 2 | 4 | 10 | 10 | 6 | 120 | 20.00% |
| 4 | 16 + 4 | 1 | 4 | 4 | 20 | 20 | 12 | 240 | 20.00% |

24 HDD Chassis
| Redundancy Level | Supported Configurations | Bricks per Server per Subvolume | Node Loss | Max brick failure count within a subvolume | Compatible Server Node count | Increment Size (no. of nodes) | Min number of sub-volumes | Total Spindles | Tolerated HDD Failure Percentage |
| 2 | 4 + 2 | 2 | 1 | 2 | 3 | 3 | 12 | 72 | 33.33% |
| 2 | 4 + 2 | 1 | 2 | 2 | 6 | 6 | 24 | 144 | 33.33% |
| 2 | 8 + 2 | 2 | 1 | 2 | 5 | 5 | 12 | 120 | 20.00% |
| 2 | 8 + 2 | 1 | 2 | 2 | 10 | 10 | 24 | 240 | 20.00% |
| 4 | 8 + 4 | 4 | 1 | 4 | 3 | 3 | 6 | 72 | 33.33% |
| 4 | 8 + 4 | 2 | 2 | 4 | 6 | 6 | 12 | 144 | 33.33% |
| 4 | 8 + 4 | 1 | 4 | 4 | 12 | 12 | 24 | 288 | 33.33% |
| 4 | 16 + 4 | 4 | 1 | 4 | 5 | 5 | 6 | 120 | 20.00% |
| 4 | 16 + 4 | 2 | 2 | 4 | 10 | 10 | 12 | 240 | 20.00% |
| 4 | 16 + 4 | 1 | 4 | 4 | 20 | 20 | 24 | 480 | 20.00% |

36 HDD Chassis
| Redundancy Level | Supported Configurations | Bricks per Server per Subvolume | Node Loss | Max brick failure count within a subvolume | Compatible Server Node count | Increment Size (no. of nodes) | Min number of sub-volumes | Total Spindles | Tolerated HDD Failure Percentage |
| 2 | 4 + 2 | 2 | 1 | 2 | 3 | 3 | 18 | 108 | 33.33% |
| 2 | 4 + 2 | 1 | 2 | 2 | 6 | 6 | 36 | 216 | 33.33% |
| 2 | 8 + 2 | 2 | 1 | 2 | 5 | 5 | 18 | 180 | 20.00% |
| 2 | 8 + 2 | 1 | 2 | 2 | 10 | 10 | 36 | 360 | 20.00% |
| 3 | 8 + 3 | 1-2 | 1 | 3 | 6 | 6 | 19 | 216 | 26.39% |
| 4 | 8 + 4 | 4 | 1 | 4 | 3 | 3 | 9 | 108 | 33.33% |
| 4 | 8 + 4 | 2 | 2 | 4 | 6 | 6 | 18 | 216 | 33.33% |
| 4 | 8 + 4 | 1 | 4 | 4 | 12 | 12 | 36 | 432 | 33.33% |
| 4 | 16 + 4 | 4 | 1 | 4 | 5 | 5 | 9 | 180 | 20.00% |
| 4 | 16 + 4 | 2 | 2 | 4 | 10 | 10 | 18 | 360 | 20.00% |
| 4 | 16 + 4 | 1 | 4 | 4 | 20 | 20 | 36 | 720 | 20.00% |

60 HDD Chassis
| Redundancy Level | Supported Configurations | Bricks per Server per Subvolume | Node Loss | Max brick failure count within a subvolume | Compatible Server Node count | Increment Size (no. of nodes) | Min number of sub-volumes | Total Spindles | Tolerated HDD Failure Percentage |
| 2 | 4 + 2 | 2 | 1 | 2 | 3 | 3 | 30 | 180 | 33.33% |
| 2 | 4 + 2 | 1 | 2 | 2 | 6 | 6 | 60 | 360 | 33.33% |
| 2 | 8 + 2 | 2 | 1 | 2 | 5 | 5 | 30 | 300 | 20.00% |
| 2 | 8 + 2 | 1 | 2 | 2 | 10 | 10 | 60 | 600 | 20.00% |
| 3 | 8 + 3 | 1-2 | 1 | 3 | 6 | 6 | 32 | 360 | 26.67% |
| 4 | 8 + 4 | 4 | 1 | 4 | 3 | 3 | 15 | 180 | 33.33% |
| 4 | 8 + 4 | 2 | 2 | 4 | 6 | 6 | 30 | 360 | 33.33% |
| 4 | 8 + 4 | 1 | 4 | 4 | 12 | 12 | 60 | 720 | 33.33% |
| 4 | 16 + 4 | 4 | 1 | 4 | 5 | 5 | 15 | 300 | 20.00% |
| 4 | 16 + 4 | 2 | 2 | 4 | 10 | 10 | 30 | 600 | 20.00% |
| 4 | 16 + 4 | 1 | 4 | 4 | 20 | 20 | 60 | 1200 | 20.00% |

Example 1 - Dispersed 4+2 configuration on three servers This example describes a compact configuration of three servers, with each server attached to a 12 HDD chassis to create a dispersed volume. In this example, each HDD is assumed to contain a single brick. This example's brick configuration is explained in row 1 of Table 11.3, "Brick Configurations for Dispersed and Distributed Dispersed Volumes" . With this server-to-spindle ratio, 36 disks/spindles are allocated for the dispersed volume configuration. For example, to create a compact 4+2 dispersed volume using 6 spindles from the total disk pool over three servers, run the following command: Note that the --force parameter is required because this configuration is not optimal in terms of fault tolerance.
Since each server provides two bricks, this configuration has a greater risk to data availability if a server goes offline than it would if each brick were provided by a separate server. Run the gluster volume info command to view the volume information. Additionally, you can convert the dispersed volume to a distributed dispersed volume in increments of 4+2. Add six bricks from the disk pool using the following command: Run the gluster volume info command to view the distributed dispersed volume information. Extending this example, you can grow the volume up to a 6 x (4 + 2) distributed dispersed configuration, which tolerates up to 12 brick failures provided that no more than two bricks fail within any single subvolume. For details about creating an optimal configuration, see Section 5.8, "Creating Dispersed Volumes" . Example 2 - Dispersed 8+4 configuration on three servers The following diagram illustrates a dispersed 8+4 configuration on three servers, as explained in row 3 of Table 11.3, "Brick Configurations for Dispersed and Distributed Dispersed Volumes" . Run the following command to create the dispersed volume for this configuration: Note that the --force parameter is required because this configuration is not optimal in terms of fault tolerance. Since each server provides more than one brick, this configuration has a greater risk to data availability if a server goes offline than it would if each brick were provided by a separate server. For details about creating an optimal configuration, see Section 5.8, "Creating Dispersed Volumes" . Figure 11.1. Example Configuration of 8+4 Dispersed Volume Configuration In this example, there are m bricks on each server from every dispersed subvolume (refer to Section 5.8, "Creating Dispersed Volumes" for information on the n = k+m equation). If more than m bricks of a dispersed subvolume are placed on a single server S, and server S goes down, data becomes unavailable. If S (a single column in the above diagram) goes down, there is no data loss, but if there is any additional hardware failure, either another node going down or a storage device failure, there would be immediate data loss. Example 3 - Dispersed 4+2 configuration on six servers The following diagram illustrates a dispersed 4+2 configuration on six servers, each with a 12-disk chassis, as explained in row 2 of Table 11.3, "Brick Configurations for Dispersed and Distributed Dispersed Volumes" . Run the following command to create the dispersed volume for this configuration: Figure 11.2. Example Configuration of 4+2 Dispersed Volume Configuration Redundancy Comparison The following chart illustrates the redundancy comparison of all supported dispersed volume configurations. Figure 11.3. Illustration of the redundancy comparison
[ "gluster volume create test_vol disperse-data 4 redundancy 2 transport tcp server1:/rhgs/brick1 server1:/rhgs/brick2 server2:/rhgs/brick3 server2:/rhgs/brick4 server3:/rhgs/brick5 server3:/rhgs/brick6 --force", "gluster volume info test-volume Volume Name: test-volume Type: Disperse Status: Started Number of Bricks: 1 x (4 + 2) = 6 Transport-type: tcp Bricks: Brick1: server1:/rhgs/brick1 Brick2: server1:/rhgs/brick2 Brick3: server2:/rhgs/brick3 Brick4: server2:/rhgs/brick4 Brick5: server3:/rhgs/brick5 Brick6: server3:/rhgs/brick6", "gluster volume add-brick test_vol server1:/rhgs/brick7 server1:/rhgs/brick8 server2:/rhgs/brick9 server2:/rhgs/brick10 server3:/rhgs/brick11 server3:/rhgs/brick12", "gluster volume info test-volume Volume Name: test-volume Type: Distributed-Disperse Status: Started Number of Bricks: 2 x (4 + 2) = 12 Transport-type: tcp Bricks: Brick1: server1:/rhgs/brick1 Brick2: server1:/rhgs/brick2 Brick3: server2:/rhgs/brick3 Brick4: server2:/rhgs/brick4 Brick5: server3:/rhgs/brick5 Brick6: server3:/rhgs/brick6 Brick7: server1:/rhgs/brick7 Brick8: server1:/rhgs/brick8 Brick9: server2:/rhgs/brick9 Brick10: server2:/rhgs/brick10 Brick11: server3:/rhgs/brick11 Brick12: server3:/rhgs/brick12", "gluster volume create test_vol disperse-data 8 redundancy 4 transport tcp server1:/rhgs/brick1 server1:/rhgs/brick2 server1:/rhgs/brick3 server1:/rhgs/brick4 server2:/rhgs/brick1 server2:/rhgs/brick2 server2:/rhgs/brick3 server2:/rhgs/brick4 server3:/rhgs/brick1 server3:/rhgs/brick2 server3:/rhgs/brick3 server3:/rhgs/brick4 server1:/rhgs/brick5 server1:/rhgs/brick6 server1:/rhgs/brick7 server1:/rhgs/brick8 server2:/rhgs/brick5 server2:/rhgs/brick6 server2:/rhgs/brick7 server2:/rhgs/brick8 server3:/rhgs/brick5 server3:/rhgs/brick6 server3:/rhgs/brick7 server3:/rhgs/brick8 server1:/rhgs/brick9 server1:/rhgs/brick10 server1:/rhgs/brick11 server1:/rhgs/brick12 server2:/rhgs/brick9 server2:/rhgs/brick10 server2:/rhgs/brick11 server2:/rhgs/brick12 server3:/rhgs/brick9 server3:/rhgs/brick10 server3:/rhgs/brick11 server3:/rhgs/brick12 --force", "gluster volume create test_vol disperse-data 4 redundancy 2 transport tcp server1:/rhgs/brick1 server2:/rhgs/brick1 server3:/rhgs/brick1 server4:/rhgs/brick1 server5:/rhgs/brick1 server6:/rhgs/brick1server1:/rhgs/brick2 server2:/rhgs/brick2 server3:/rhgs/brick2 server4:/rhgs/brick2 server5:/rhgs/brick2 server6:/rhgs/brick2 server1:/rhgs/brick3 server2:/rhgs/brick3 server3:/rhgs/brick3 server4:/rhgs/brick3 server5:/rhgs/brick3 server6:/rhgs/brick3 server1:/rhgs/brick4 server2:/rhgs/brick4 server3:/rhgs/brick4 server4:/rhgs/brick4 server5:/rhgs/brick4 server6:/rhgs/brick4 server1:/rhgs/brick5 server2:/rhgs/brick5 server3:/rhgs/brick5 server4:/rhgs/brick5 server5:/rhgs/brick5 server6:/rhgs/brick5 server1:/rhgs/brick6 server2:/rhgs/brick6 server3:/rhgs/brick6 server4:/rhgs/brick6 server5:/rhgs/brick6 server6:/rhgs/brick6 server1:/rhgs/brick7 server2:/rhgs/brick7 server3:/rhgs/brick7 server4:/rhgs/brick7 server5:/rhgs/brick7 server6:/rhgs/brick7 server1:/rhgs/brick8 server2:/rhgs/brick8 server3:/rhgs/brick8 server4:/rhgs/brick8 server5:/rhgs/brick8 server6:/rhgs/brick8 server1:/rhgs/brick9 server2:/rhgs/brick9 server3:/rhgs/brick9 server4:/rhgs/brick9 server5:/rhgs/brick9 server6:/rhgs/brick9 server1:/rhgs/brick10 server2:/rhgs/brick10 server3:/rhgs/brick10 server4:/rhgs/brick10 server5:/rhgs/brick10 server6:/rhgs/brick10 server1:/rhgs/brick11 server2:/rhgs/brick11 server3:/rhgs/brick11 server4:/rhgs/brick11 server5:/rhgs/brick11 server6:/rhgs/brick11 
server1:/rhgs/brick12 server2:/rhgs/brick12 server3:/rhgs/brick12 server4:/rhgs/brick12 server5:/rhgs/brick12 server6:/rhgs/brick12" ]
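The usable capacity of any configuration in Table 11.3 follows from the data/redundancy split described above: only the data bricks contribute capacity. The following sketch is not part of the original guide; it assumes hypothetical, uniform 10 GiB bricks and a 2 x (4 + 2) layout purely to illustrate the arithmetic:

# Hypothetical sizes for a 2 x (4 + 2) distributed dispersed volume
BRICK_GIB=10; DATA=4; REDUNDANCY=2; SUBVOLS=2
RAW=$(( BRICK_GIB * (DATA + REDUNDANCY) * SUBVOLS ))   # 120 GiB of raw disk
USABLE=$(( BRICK_GIB * DATA * SUBVOLS ))               # 80 GiB usable
echo "raw=${RAW}GiB usable=${USABLE}GiB"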
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/sect-Recommended-Configuration_Dispersed
Chapter 62. router
Chapter 62. router This chapter describes the commands under the router command. 62.1. router add port Add a port to a router Usage: Table 62.1. Positional arguments Value Summary <router> Router to which port will be added (name or id) <port> Port to be added (name or id) Table 62.2. Command arguments Value Summary -h, --help Show this help message and exit 62.2. router add route Add extra static routes to a router's routing table. Usage: Table 62.3. Positional arguments Value Summary <router> Router to which extra static routes will be added (name or ID). Table 62.4. Command arguments Value Summary -h, --help Show this help message and exit --route destination=<subnet>,gateway=<ip-address> Add extra static route to the router. destination: destination subnet (in CIDR notation), gateway: nexthop IP address. Repeat option to add multiple routes. Trying to add a route that's already present (exactly, including destination and nexthop) in the routing table is allowed and is considered a successful operation. Table 62.5. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 62.6. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 62.7. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 62.8. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 62.3. router add subnet Add a subnet to a router Usage: Table 62.9. Positional arguments Value Summary <router> Router to which subnet will be added (name or id) <subnet> Subnet to be added (name or id) Table 62.10. Command arguments Value Summary -h, --help Show this help message and exit 62.4. router create Create a new router Usage: Table 62.11. Positional arguments Value Summary <name> New router name Table 62.12. Command arguments Value Summary -h, --help Show this help message and exit --extra-property type=<property_type>,name=<property_name>,value=<property_value> Additional parameters can be passed using this property. Default type of the extra property is string ( str ), but other types can be used as well. Available types are: dict , list , str , bool , int . In case of list type, value can be semicolon-separated list of values. For dict value is semicolon-separated list of the key:value pairs. --enable Enable router (default) --disable Disable router --distributed Create a distributed router --centralized Create a centralized router --ha Create a highly available router --no-ha Create a legacy router --description <description> Set router description --project <project> Owner's project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. 
--availability-zone-hint <availability-zone> Availability zone in which to create this router (Router Availability Zone extension required, repeat option to set multiple availability zones) --tag <tag> Tag to be added to the router (repeat option to set multiple tags) --no-tag No tags associated with the router --external-gateway <network> External network used as router's gateway (name or id) --fixed-ip subnet=<subnet>,ip-address=<ip-address> Desired ip and/or subnet (name or id) on external gateway: subnet=<subnet>,ip-address=<ip-address> (repeat option to set multiple fixed IP addresses) --enable-snat Enable source nat on external gateway --disable-snat Disable source nat on external gateway --enable-ndp-proxy Enable ipv6 ndp proxy on external gateway --disable-ndp-proxy Disable ipv6 ndp proxy on external gateway --flavor <flavor-id> Associate the router to a flavor (by name or id) Table 62.13. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 62.14. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 62.15. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 62.16. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 62.5. router delete Delete router(s) Usage: Table 62.17. Positional arguments Value Summary <router> Router(s) to delete (name or id) Table 62.18. Command arguments Value Summary -h, --help Show this help message and exit 62.6. router list List routers Usage: Table 62.19. Command arguments Value Summary -h, --help Show this help message and exit --name <name> List routers according to their name --enable List enabled routers --disable List disabled routers --long List additional fields in output --project <project> List routers according to their project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --agent <agent-id> List routers hosted by an agent (id only) --tags <tag>[,<tag>,... ] List routers which have all given tag(s) (comma-separated list of tags) --any-tags <tag>[,<tag>,... ] List routers which have any given tag(s) (comma-separated list of tags) --not-tags <tag>[,<tag>,... ] Exclude routers which have all given tag(s) (comma-separated list of tags) --not-any-tags <tag>[,<tag>,... ] Exclude routers which have any given tag(s) (comma-separated list of tags) Table 62.20. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 62.21. 
CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 62.22. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 62.23. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 62.7. router ndp proxy create Create NDP proxy Usage: Table 62.24. Positional arguments Value Summary <router> The name or id of a router Table 62.25. Command arguments Value Summary -h, --help Show this help message and exit --name <name> New ndp proxy name --port <port> The name or id of the network port associated to the NDP proxy --ip-address <ip-address> The ipv6 address that is to be proxied. in case the port has multiple addresses assigned, use this option to select which address is to be used. --description <description> A text to describe/contextualize the use of the ndp proxy configuration Table 62.26. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 62.27. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 62.28. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 62.29. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 62.8. router ndp proxy delete Delete NDP proxy Usage: Table 62.30. Positional arguments Value Summary <ndp-proxy> Ndp proxy(s) to delete (name or id) Table 62.31. Command arguments Value Summary -h, --help Show this help message and exit 62.9. router ndp proxy list List NDP proxies Usage: Table 62.32. Command arguments Value Summary -h, --help Show this help message and exit --router <router> List only ndp proxies belonging to this router (name or ID) --port <port> List only ndp proxies associated with this port (name or ID) --ip-address ip-address List only ndp proxies according to their ipv6 address --project <project> List ndp proxies according to their project (name or ID) --name <name> List ndp proxies according to their name --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. Table 62.33. 
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 62.34. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 62.35. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 62.36. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 62.10. router ndp proxy set Set NDP proxy properties Usage: Table 62.37. Positional arguments Value Summary <ndp-proxy> The id or name of the ndp proxy to update Table 62.38. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set ndp proxy name --description <description> A text to describe/contextualize the use of the ndp proxy configuration 62.11. router ndp proxy show Display NDP proxy details Usage: Table 62.39. Positional arguments Value Summary <ndp-proxy> The id or name of the ndp proxy Table 62.40. Command arguments Value Summary -h, --help Show this help message and exit Table 62.41. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 62.42. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 62.43. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 62.44. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 62.12. router remove port Remove a port from a router Usage: Table 62.45. Positional arguments Value Summary <router> Router from which port will be removed (name or id) <port> Port to be removed and deleted (name or id) Table 62.46. Command arguments Value Summary -h, --help Show this help message and exit 62.13. router remove route Remove extra static routes from a router's routing table. Usage: Table 62.47. Positional arguments Value Summary <router> Router from which extra static routes will be removed (name or ID). Table 62.48. Command arguments Value Summary -h, --help Show this help message and exit --route destination=<subnet>,gateway=<ip-address> Remove extra static route from the router. destination: destination subnet (in CIDR notation), gateway: nexthop IP address. Repeat option to remove multiple routes. 
Trying to remove a route that's already missing (fully, including destination and nexthop) from the routing table is allowed and is considered a successful operation. Table 62.49. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 62.50. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 62.51. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 62.52. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 62.14. router remove subnet Remove a subnet from a router Usage: Table 62.53. Positional arguments Value Summary <router> Router from which the subnet will be removed (name or id) <subnet> Subnet to be removed (name or id) Table 62.54. Command arguments Value Summary -h, --help Show this help message and exit 62.15. router set Set router properties Usage: Table 62.55. Positional arguments Value Summary <router> Router to modify (name or id) Table 62.56. Command arguments Value Summary -h, --help Show this help message and exit --extra-property type=<property_type>,name=<property_name>,value=<property_value> Additional parameters can be passed using this property. Default type of the extra property is string ( str ), but other types can be used as well. Available types are: dict , list , str , bool , int . In case of list type, value can be semicolon-separated list of values. For dict value is semicolon-separated list of the key:value pairs. --name <name> Set router name --description <description> Set router description --enable Enable router --disable Disable router --distributed Set router to distributed mode (disabled router only) --centralized Set router to centralized mode (disabled router only) --route destination=<subnet>,gateway=<ip-address> Add routes to the router destination: destination subnet (in CIDR notation) gateway: nexthop IP address (repeat option to add multiple routes). This is deprecated in favor of router add/remove route since it is prone to race conditions between concurrent clients when not used together with --no-route to overwrite the current value of routes . --no-route Clear routes associated with the router. specify both --route and --no-route to overwrite current value of routes. 
--ha Set the router as highly available (disabled router only) --no-ha Clear high availability attribute of the router (disabled router only) --external-gateway <network> External network used as router's gateway (name or id) --fixed-ip subnet=<subnet>,ip-address=<ip-address> Desired ip and/or subnet (name or id) on external gateway: subnet=<subnet>,ip-address=<ip-address> (repeat option to set multiple fixed IP addresses) --enable-snat Enable source nat on external gateway --disable-snat Disable source nat on external gateway --enable-ndp-proxy Enable ipv6 ndp proxy on external gateway --disable-ndp-proxy Disable ipv6 ndp proxy on external gateway --qos-policy <qos-policy> Attach qos policy to router gateway ips --no-qos-policy Remove qos policy from router gateway ips --tag <tag> Tag to be added to the router (repeat option to set multiple tags) --no-tag Clear tags associated with the router. specify both --tag and --no-tag to overwrite current tags 62.16. router show Display router details Usage: Table 62.57. Positional arguments Value Summary <router> Router to display (name or id) Table 62.58. Command arguments Value Summary -h, --help Show this help message and exit Table 62.59. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 62.60. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 62.61. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 62.62. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 62.17. router unset Unset router properties Usage: Table 62.63. Positional arguments Value Summary <router> Router to modify (name or id) Table 62.64. Command arguments Value Summary -h, --help Show this help message and exit --extra-property type=<property_type>,name=<property_name>,value=<property_value> Additional parameters can be passed using this property. Default type of the extra property is string ( str ), but other types can be used as well. Available types are: dict , list , str , bool , int . In case of list type, value can be semicolon-separated list of values. For dict value is semicolon-separated list of the key:value pairs. --route destination=<subnet>,gateway=<ip-address> Routes to be removed from the router destination: destination subnet (in CIDR notation) gateway: nexthop IP address (repeat option to unset multiple routes) --external-gateway Remove external gateway information from the router --qos-policy Remove qos policy from router gateway ips --tag <tag> Tag to be removed from the router (repeat option to remove multiple tags) --all-tag Clear all tags associated with the router
[ "openstack router add port [-h] <router> <port>", "openstack router add route [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--route destination=<subnet>,gateway=<ip-address>] <router>", "openstack router add subnet [-h] <router> <subnet>", "openstack router create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--extra-property type=<property_type>,name=<property_name>,value=<property_value>] [--enable | --disable] [--distributed | --centralized] [--ha | --no-ha] [--description <description>] [--project <project>] [--project-domain <project-domain>] [--availability-zone-hint <availability-zone>] [--tag <tag> | --no-tag] [--external-gateway <network>] [--fixed-ip subnet=<subnet>,ip-address=<ip-address>] [--enable-snat | --disable-snat] [--enable-ndp-proxy | --disable-ndp-proxy] [--flavor <flavor-id>] <name>", "openstack router delete [-h] <router> [<router> ...]", "openstack router list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--name <name>] [--enable | --disable] [--long] [--project <project>] [--project-domain <project-domain>] [--agent <agent-id>] [--tags <tag>[,<tag>,...]] [--any-tags <tag>[,<tag>,...]] [--not-tags <tag>[,<tag>,...]] [--not-any-tags <tag>[,<tag>,...]]", "openstack router ndp proxy create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name <name>] --port <port> [--ip-address <ip-address>] [--description <description>] <router>", "openstack router ndp proxy delete [-h] <ndp-proxy> [<ndp-proxy> ...]", "openstack router ndp proxy list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--router <router>] [--port <port>] [--ip-address ip-address] [--project <project>] [--name <name>] [--project-domain <project-domain>]", "openstack router ndp proxy set [-h] [--name <name>] [--description <description>] <ndp-proxy>", "openstack router ndp proxy show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <ndp-proxy>", "openstack router remove port [-h] <router> <port>", "openstack router remove route [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--route destination=<subnet>,gateway=<ip-address>] <router>", "openstack router remove subnet [-h] <router> <subnet>", "openstack router set [-h] [--extra-property type=<property_type>,name=<property_name>,value=<property_value>] [--name <name>] [--description <description>] [--enable | --disable] [--distributed | --centralized] [--route destination=<subnet>,gateway=<ip-address>] [--no-route] [--ha | --no-ha] [--external-gateway <network>] [--fixed-ip subnet=<subnet>,ip-address=<ip-address>] [--enable-snat | --disable-snat] [--enable-ndp-proxy | --disable-ndp-proxy] [--qos-policy <qos-policy> | --no-qos-policy] [--tag <tag>] [--no-tag] <router>", "openstack router show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix 
PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <router>", "openstack router unset [-h] [--extra-property type=<property_type>,name=<property_name>,value=<property_value>] [--route destination=<subnet>,gateway=<ip-address>] [--external-gateway] [--qos-policy] [--tag <tag> | --all-tag] <router>" ]
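The subcommands in this chapter compose into a typical workflow. The following sequence is not part of the original reference; router1 , private-subnet , and public are placeholder names, and the route values are illustrative. It simply chains the documented commands to create a router, attach a subnet, set an external gateway, and add a static route:

openstack router create router1
openstack router add subnet router1 private-subnet
openstack router set router1 --external-gateway public
openstack router add route router1 \
    --route destination=10.10.0.0/24,gateway=192.168.0.10
openstack router show router1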
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/command_line_interface_reference/router
Chapter 4. Advisories related to this release
Chapter 4. Advisories related to this release The following advisories are issued to document bug fixes and CVE fixes included in this release: RHSA-2024:1826 RHSA-2024:1827 RHSA-2024:1828 Revised on 2024-05-09 14:51:27 UTC
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/release_notes_for_red_hat_build_of_openjdk_21.0.3/openjdk-2103-advisory_openjdk
Authorization APIs
Authorization APIs OpenShift Container Platform 4.13 Reference guide for authorization APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html-single/authorization_apis/index
Chapter 2. Understanding disconnected installation mirroring
Chapter 2. Understanding disconnected installation mirroring You can use a mirror registry for disconnected installations and to ensure that your clusters only use container images that satisfy your organization's controls on external content. Before you install a cluster on infrastructure that you provision in a disconnected environment, you must mirror the required container images into that environment. To mirror container images, you must have a registry for mirroring. 2.1. Mirroring images for a disconnected installation through the Agent-based Installer You can use one of the following procedures to mirror your OpenShift Container Platform image repository to your mirror registry: Mirroring images for a disconnected installation Mirroring images for a disconnected installation using the oc-mirror plugin 2.2. About mirroring the OpenShift Container Platform image repository for a disconnected registry To use mirror images for a disconnected installation with the Agent-based Installer, you must modify the install-config.yaml file. You can mirror the release image by using the output of either the oc adm release mirror or oc mirror command. This is dependent on which command you used to set up the mirror registry. The following example shows the output of the oc adm release mirror command. $ oc adm release mirror Example output To use the new mirrored repository to install, add the following section to the install-config.yaml: imageContentSources: mirrors: virthost.ostest.test.metalkube.org:5000/localimages/local-release-image source: quay.io/openshift-release-dev/ocp-v4.0-art-dev mirrors: virthost.ostest.test.metalkube.org:5000/localimages/local-release-image source: registry.ci.openshift.org/ocp/release The following example shows part of the imageContentSourcePolicy.yaml file generated by the oc-mirror plugin. The file can be found in the results directory, for example oc-mirror-workspace/results-1682697932/ . Example imageContentSourcePolicy.yaml file spec: repositoryDigestMirrors: - mirrors: - virthost.ostest.test.metalkube.org:5000/openshift/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev - mirrors: - virthost.ostest.test.metalkube.org:5000/openshift/release-images source: quay.io/openshift-release-dev/ocp-release 2.2.1. Configuring the Agent-based Installer to use mirrored images You must use the output of either the oc adm release mirror command or the oc-mirror plugin to configure the Agent-based Installer to use mirrored images. Procedure If you used the oc-mirror plugin to mirror your release images: Open the imageContentSourcePolicy.yaml located in the results directory, for example oc-mirror-workspace/results-1682697932/ . Copy the text in the repositoryDigestMirrors section of the yaml file. If you used the oc adm release mirror command to mirror your release images: Copy the text in the imageContentSources section of the command output. Paste the copied text into the imageContentSources field of the install-config.yaml file. Add the certificate file used for the mirror registry to the additionalTrustBundle field of the yaml file. Important The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. 
Example install-config.yaml file additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- If you are using GitOps ZTP manifests: add the registries.conf and ca-bundle.crt files to the mirror path to add the mirror configuration in the agent ISO image. Note You can create the registries.conf file from the output of either the oc adm release mirror command or the oc-mirror plugin. The format of the /etc/containers/registries.conf file has changed. It is now version 2 and in TOML format. Example registries.conf file [[registry]] location = "registry.ci.openshift.org/ocp/release" mirror-by-digest-only = true [[registry.mirror]] location = "virthost.ostest.test.metalkube.org:5000/localimages/local-release-image" [[registry]] location = "quay.io/openshift-release-dev/ocp-v4.0-art-dev" mirror-by-digest-only = true [[registry.mirror]] location = "virthost.ostest.test.metalkube.org:5000/localimages/local-release-image"
[ "oc adm release mirror", "To use the new mirrored repository to install, add the following section to the install-config.yaml: imageContentSources: mirrors: virthost.ostest.test.metalkube.org:5000/localimages/local-release-image source: quay.io/openshift-release-dev/ocp-v4.0-art-dev mirrors: virthost.ostest.test.metalkube.org:5000/localimages/local-release-image source: registry.ci.openshift.org/ocp/release", "spec: repositoryDigestMirrors: - mirrors: - virthost.ostest.test.metalkube.org:5000/openshift/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev - mirrors: - virthost.ostest.test.metalkube.org:5000/openshift/release-images source: quay.io/openshift-release-dev/ocp-release", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "[[registry]] location = \"registry.ci.openshift.org/ocp/release\" mirror-by-digest-only = true [[registry.mirror]] location = \"virthost.ostest.test.metalkube.org:5000/localimages/local-release-image\" [[registry]] location = \"quay.io/openshift-release-dev/ocp-v4.0-art-dev\" mirror-by-digest-only = true [[registry.mirror]] location = \"virthost.ostest.test.metalkube.org:5000/localimages/local-release-image\"" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_an_on-premise_cluster_with_the_agent-based_installer/understanding-disconnected-installation-mirroring
Chapter 2. Red Hat Process Automation Manager BPMN and DMN modelers
Chapter 2. Red Hat Process Automation Manager BPMN and DMN modelers Red Hat Process Automation Manager provides the following extensions or applications that you can use to design Business Process Model and Notation (BPMN) process models and Decision Model and Notation (DMN) decision models using graphical modelers. Business Central : Enables you to view and design BPMN models, DMN models, and test scenario files in a related embedded designer. To use Business Central, you can set up a development environment containing Business Central to design business rules and processes, and a KIE Server to execute and test the created business rules and processes. Red Hat Process Automation Manager VS Code extension : Enables you to view and design BPMN models, DMN models, and test scenario files in Visual Studio Code (VS Code). The VS Code extension requires VS Code 1.46.0 or later. To install the Red Hat Process Automation Manager VS Code extension, select the Extensions menu option in VS Code and search for and install the Red Hat Business Automation Bundle extension. Standalone BPMN and DMN editors : Enable you to view and design BPMN and DMN models embedded in your web applications. To download the necessary files, you can either use the NPM artifacts from the NPM registry or download the JavaScript files directly for the DMN standalone editor library at https://<YOUR_PAGE>/dmn/index.js and for the BPMN standalone editor library at https://<YOUR_PAGE>/bpmn/index.js . 2.1. Installing the Red Hat Process Automation Manager VS Code extension bundle Red Hat Process Automation Manager provides a Red Hat Business Automation Bundle VS Code extension that enables you to design Decision Model and Notation (DMN) decision models, Business Process Model and Notation (BPMN) 2.0 business processes, and test scenarios directly in VS Code. VS Code is the preferred integrated development environment (IDE) for developing new business applications. Red Hat Process Automation Manager also provides individual DMN Editor and BPMN Editor VS Code extensions for DMN or BPMN support only, if needed. Important The editors in VS Code are partially compatible with the editors in Business Central, and several Business Central features are not supported in VS Code. Prerequisites The latest stable version of VS Code is installed. Procedure In your VS Code IDE, select the Extensions menu option and search for Red Hat Business Automation Bundle for DMN, BPMN, and test scenario file support. For DMN or BPMN file support only, you can also search for the individual DMN Editor or BPMN Editor extensions. When the Red Hat Business Automation Bundle extension appears in VS Code, select it and click Install . For optimal VS Code editor behavior, after the extension installation is complete, reload or close and re-launch your instance of VS Code. After you install the VS Code extension bundle, any .dmn , .bpmn , or .bpmn2 files that you open or create in VS Code are automatically displayed as graphical models. Additionally, any .scesim files that you open or create are automatically displayed as tabular test scenario models for testing the functionality of your business decisions. If the DMN, BPMN, or test scenario modelers open only the XML source of a DMN, BPMN, or test scenario file and display an error message, review the reported errors and the model file to ensure that all elements are correctly defined. 
Note For new DMN or BPMN models, you can also enter dmn.new or bpmn.new in a web browser to design your DMN or BPMN model in the online modeler. When you finish creating your model, you can click Download in the online modeler page to import your DMN or BPMN file into your Red Hat Process Automation Manager project in VS Code. 2.2. Configuring the Red Hat Process Automation Manager standalone editors Red Hat Process Automation Manager provides standalone editors that are distributed in a self-contained library providing an all-in-one JavaScript file for each editor. The JavaScript file uses a comprehensive API to set and control the editor. You can install the standalone editors using the following methods: Download each JavaScript file manually Use the NPM package Procedure Install the standalone editors using one of the following methods: Download each JavaScript file manually : For this method, follow these steps: Download the JavaScript files. Add the downloaded JavaScript files to your hosted application. Add the following <script> tag to your HTML page: Script tag for your HTML page for the DMN editor Script tag for your HTML page for the BPMN editor Use the NPM package : For this method, follow these steps: Add the NPM package to your package.json file: Adding the NPM package Import each editor library to your TypeScript file: Importing each editor After you install the standalone editors, open the required editor by using the provided editor API, as shown in the following example for opening a DMN editor. The API is the same for each editor. Opening the DMN standalone editor const editor = DmnEditor.open({ container: document.getElementById("dmn-editor-container"), initialContent: Promise.resolve(""), readOnly: false, origin: "", resources: new Map([ [ "MyIncludedModel.dmn", { contentType: "text", content: Promise.resolve("") } ] ]) }); Use the following parameters with the editor API: Table 2.1. Example parameters Parameter Description container HTML element in which the editor is appended. initialContent Promise to a DMN model content. This parameter can be empty, as shown in the following examples: Promise.resolve("") Promise.resolve("<DIAGRAM_CONTENT_DIRECTLY_HERE>") fetch("MyDmnModel.dmn").then(content => content.text()) readOnly (Optional) Enables you to allow changes in the editor. Set to false (default) to allow content editing and true for read-only mode in editor. origin (Optional) Origin of the repository. The default value is window.location.origin . resources (Optional) Map of resources for the editor. For example, this parameter is used to provide included models for the DMN editor or work item definitions for the BPMN editor. Each entry in the map contains a resource name and an object that consists of content-type ( text or binary ) and content (similar to the initialContent parameter). The returned object contains the methods that are required to manipulate the editor. Table 2.2. Returned object methods Method Description getContent(): Promise<string> Returns a promise containing the editor content. setContent(path: string, content: string): void Sets the content of the editor. getPreview(): Promise<string> Returns a promise containing an SVG string of the current diagram. subscribeToContentChanges(callback: (isDirty: boolean) => void): (isDirty: boolean) => void Sets a callback to be called when the content changes in the editor and returns the same callback to be used for unsubscription. 
unsubscribeToContentChanges(callback: (isDirty: boolean) => void): void Unsubscribes the passed callback when the content changes in the editor. markAsSaved(): void Resets the editor state that indicates that the content in the editor is saved. Also, it activates the subscribed callbacks related to content change. undo(): void Undoes the last change in the editor. Also, it activates the subscribed callbacks related to content change. redo(): void Redoes the last undone change in the editor. Also, it activates the subscribed callbacks related to content change. close(): void Closes the editor. getElementPosition(selector: string): Promise<Rect> Provides an alternative to extend the standard query selector when an element lives inside a canvas or a video component. The selector parameter must follow the <PROVIDER>:::<SELECT> format, such as Canvas:::MySquare or Video:::PresenterHand . This method returns a Rect representing the element position. envelopeApi: MessageBusClientApi<KogitoEditorEnvelopeApi> This is an advanced editor API. For more information about advanced editor API, see MessageBusClientApi and KogitoEditorEnvelopeApi .
[ "<script src=\"https://<YOUR_PAGE>/dmn/index.js\"></script>", "<script src=\"https://<YOUR_PAGE>/bpmn/index.js\"></script>", "npm install @kie-tools/kie-editors-standalone", "import * as DmnEditor from \"@kie-tools/kie-editors-standalone/dist/dmn\" import * as BpmnEditor from \"@kie-tools/kie-editors-standalone/dist/bpmn\"", "const editor = DmnEditor.open({ container: document.getElementById(\"dmn-editor-container\"), initialContent: Promise.resolve(\"\"), readOnly: false, origin: \"\", resources: new Map([ [ \"MyIncludedModel.dmn\", { contentType: \"text\", content: Promise.resolve(\"\") } ] ]) });" ]
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/con-bpmn-dmn-modelers_dmn-models
Appendix C. Ceph firewall ports
Appendix C. Ceph firewall ports These are the general firewall ports used by various components in Red Hat Ceph Storage.
Port          | Port type | Component
3260 and 5000 | TCP       | Ceph iSCSI Gateway (Deprecated)
6800-7300     | TCP       | Ceph OSDs
3300          | TCP       | Ceph clients and Ceph daemons connecting to the Ceph Monitor daemon. This port is preferred over 6789.
6789          | TCP       | Ceph clients and Ceph daemons connecting to the Ceph Monitor daemon. This port is used if port 3300 is unavailable.
null
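On RHEL hosts these ports are typically opened with firewalld. The following commands are not part of the original appendix; they assume firewalld is in use and show how the Monitor and OSD ports listed above might be opened on a storage node:

firewall-cmd --zone=public --add-port=3300/tcp --permanent
firewall-cmd --zone=public --add-port=6789/tcp --permanent
firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
firewall-cmd --reload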
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/configuration_guide/ceph-firewall-ports_conf
Chapter 5. Advanced topics
Chapter 5. Advanced topics This section covers topics that are beyond the scope of the introductory tutorial but are useful in real-world RPM packaging. 5.1. Signing RPM packages You can sign RPM packages to ensure that no third party can alter their content. To add an additional layer of security, use the HTTPS protocol when downloading the package. You can sign a package by using the --addsign option provided by the rpm-sign package. Prerequisites You have created a GNU Privacy Guard (GPG) key as described in Creating a GPG key . 5.1.1. Creating a GPG key Use the following procedure to create a GNU Privacy Guard (GPG) key required for signing packages. Procedure Generate a GPG key pair: Check the generated key pair: Export the public key: Replace <Key_name> with the real key name that you have selected. Import the exported public key into an RPM database: 5.1.2. Configuring RPM to sign a package To be able to sign an RPM package, you must specify the %_gpg_name RPM macro. The following procedure describes how to configure RPM for signing a package. Procedure Define the %_gpg_name macro in your $HOME/.rpmmacros file as follows: Replace Key ID with the GNU Privacy Guard (GPG) key ID that you will use to sign a package. A valid GPG key ID value is either the full name or the email address of the user who created the key. 5.1.3. Adding a signature to an RPM package Typically, a package is built without a signature, and the signature is added just before the release of the package. To add a signature to an RPM package, use the --addsign option provided by the rpm-sign package. Procedure Add a signature to a package: Replace package-name with the name of an RPM package you want to sign. Note You must enter the password to unlock the secret key for the signature. 
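The commands for the signing workflow above are not shown inline here. As a hedged sketch only (the key name, output file name, and package file name are placeholders, not values from this guide), the full sequence typically looks like this:

# Generate a GPG key pair and verify it
gpg --gen-key
gpg --list-keys

# Export the public key, then import it into the RPM database
gpg --export -a '<Key_name>' > RPM-GPG-KEY-example
rpm --import RPM-GPG-KEY-example

# With %_gpg_name set in $HOME/.rpmmacros, add the signature
rpm --addsign package-name-1.0-1.el8.x86_64.rpm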
5.2. More on macros This section covers selected built-in RPM macros. For an exhaustive list of such macros, see RPM Documentation . 5.2.1. Defining your own macros The following section describes how to create a custom macro. Procedure Include the following line in the RPM spec file: All whitespace surrounding <body> is removed. The name may be composed of alphanumeric characters and the underscore character (_), and must be at least 3 characters in length. Inclusion of the (opts) field is optional: Simple macros do not contain the (opts) field. In this case, only recursive macro expansion is performed. Parametrized macros contain the (opts) field. The opts string between parentheses is passed to getopt(3) for argc/argv processing at the beginning of a macro invocation. Note Older RPM spec files use the %define <name> <body> macro pattern instead. The differences between %define and %global macros are as follows: %define has local scope. It applies to a specific part of a spec file. The body of a %define macro is expanded when used. %global has global scope. It applies to an entire spec file. The body of a %global macro is expanded at definition time. Important Macros are evaluated even if they are commented out or the name of the macro is given in the %changelog section of the spec file. To comment out a macro, use %% . For example: %%global . Additional resources Macro syntax 5.2.2. Using the %setup macro This section describes how to build packages with source code tarballs using different variants of the %setup macro. Note that the macro variants can be combined. The rpmbuild output illustrates standard behavior of the %setup macro. At the beginning of each phase, the macro outputs Executing(%... ) , as shown in the below example. Example 5.1. Example %setup macro output The shell output is shown with set -x enabled. To see the content of /var/tmp/rpm-tmp.DhddsG , use the --debug option, because rpmbuild deletes temporary files after a successful build. This displays the setup of environment variables followed by, for example: The %setup macro: Ensures that we are working in the correct directory. Removes residues of previous builds. Unpacks the source tarball. Sets up some default privileges. 5.2.2.1. Using the %setup -q macro The -q option limits the verbosity of the %setup macro. Only tar -xof is executed instead of tar -xvvof . Use this option as the first option. 5.2.2.2. Using the %setup -n macro The -n option specifies the name of the directory from the expanded tarball. Use it when the directory from the expanded tarball has a different name from the expected %{name}-%{version} , which would otherwise cause the %setup macro to fail. For example, if the package name is cello , but the source code is archived in hello-1.0.tgz and contains the hello/ directory, the spec file content needs to be as follows: 5.2.2.3. Using the %setup -c macro The -c option is used if the source code tarball does not contain any subdirectories and, after unpacking, files from the archive fill the current directory. The -c option then creates the directory and steps into the archive expansion as shown below: The directory is not changed after archive expansion. 5.2.2.4. Using the %setup -D and %setup -T macros The -D option disables deletion of the source code directory, and is particularly useful if the %setup macro is used several times. With the -D option, the following lines are not used: The -T option disables expansion of the source code tarball by removing the following line from the script: 5.2.2.5. Using the %setup -a and %setup -b macros The -a and -b options expand specific sources: The -b option stands for before . This option expands specific sources before entering the working directory. The -a option stands for after . This option expands those sources after entering. Their arguments are source numbers from the spec file preamble. In the following example, the cello-1.0.tar.gz archive contains an empty examples directory. The examples are shipped in a separate examples.tar.gz tarball and they expand into the directory of the same name. In this case, use -a 1 if you want to expand Source1 after entering the working directory: In the following example, examples are provided in a separate cello-1.0-examples.tar.gz tarball, which expands into cello-1.0/examples . In this case, use -b 1 to expand Source1 before entering the working directory:
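The spec fragments that these %setup variants refer to are kept separately from this text. As a hedged, illustrative sketch only (the cello / hello names follow the examples above; this is not the guide's original listing), the -q, -n, and -a variants combine like this:

# Package is named cello, but the tarball expands into hello/
Name: cello
Version: 1.0
Source0: https://example.com/hello-1.0.tgz
Source1: examples.tar.gz

%prep
# -q limits verbosity, -n enters hello/ instead of cello-1.0/,
# -a 1 expands Source1 after entering the working directory
%setup -q -n hello -a 1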
5.2.3. Common RPM macros in the %files section The following table lists advanced RPM macros that are needed in the %files section of a spec file. Table 5.1. Advanced RPM Macros in the %files section Macro Definition %license The %license macro identifies the file listed as a LICENSE file and it will be installed and labeled as such by RPM. Example: %license LICENSE . %doc The %doc macro identifies a file listed as documentation and it will be installed and labeled as such by RPM. The %doc macro is used for documentation about the packaged software and also for code examples and various accompanying items. If code examples are included, care must be taken to remove executable mode from the file. Example: %doc README %dir The %dir macro ensures that the path is a directory owned by this RPM. This is important so that the RPM file manifest accurately knows what directories to clean up on uninstall. Example: %dir %{_libdir}/%{name} %config(noreplace) The %config(noreplace) macro ensures that the following file is a configuration file and therefore should not be overwritten (or replaced) on a package install or update if the file has been modified from the original installation checksum. If there is a change, the file will be created with .rpmnew appended to the end of the filename upon upgrade or install so that the pre-existing or modified file on the target system is not modified. Example: %config(noreplace) %{_sysconfdir}/%{name}/%{name}.conf 5.2.4. Displaying the built-in macros Red Hat Enterprise Linux provides multiple built-in RPM macros. Procedure To display all built-in RPM macros, run: Note The output is quite sizeable. To narrow the result, use the command above with the grep command. To find information about the RPM macros for your system's version of RPM, run: Note RPM macros are the files titled macros in the output directory structure. 5.2.5. RPM distribution macros Different distributions provide different sets of recommended RPM macros based on the language implementation of the software being packaged or the specific guidelines of the distribution. The sets of recommended RPM macros are often provided as RPM packages, ready to be installed with the yum package manager. Once installed, the macro files can be found in the /usr/lib/rpm/macros.d/ directory. Procedure To display the raw RPM macro definitions, run: The above output displays the raw RPM macro definitions. To determine what a macro does and how it can be helpful when packaging RPMs, run the rpm --eval command with the name of the macro used as its argument: Additional resources rpm man page 5.2.6. Creating custom macros You can override the distribution macros in the ~/.rpmmacros file with your custom macros. Any changes that you make affect every build on your machine. Warning Defining any new macros in the ~/.rpmmacros file is not recommended. Such macros would not be present on other machines, where users may want to try to rebuild your package. Procedure To override a macro, run: You can create the directory from the example above, including all subdirectories, by using the rpmdev-setuptree utility. The value of this macro is by default ~/rpmbuild . The %{?_smp_mflags} macro is often passed to Makefile, for example make %{?_smp_mflags} , to set the number of concurrent processes during the build phase. By default, it is set to -jX , where X is a number of cores. If you alter the number of cores, you can speed up or slow down a build of packages.
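The listings for this procedure are kept separately from this text. As a hedged sketch (the directory path is a placeholder), overriding the default %_topdir in ~/.rpmmacros and checking how RPM expands macros might look like this:

# ~/.rpmmacros - override the default %_topdir (~/rpmbuild)
%_topdir /opt/some/working/directory/rpmbuild

# Verify the expansion from a shell
rpm --eval '%{_topdir}'
rpm --eval '%{?_smp_mflags}'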
5.3. Epoch, Scriptlets and Triggers This section covers Epoch , Scriptlets , and Triggers , which represent advanced directives for RPM spec files. All these directives influence not only the spec file, but also the end machine on which the resulting RPM is installed. 5.3.1. The Epoch directive The Epoch directive enables you to define weighted dependencies based on version numbers. If this directive is not listed in the RPM spec file, the Epoch directive is not set at all. This is contrary to the common belief that not setting Epoch results in an Epoch of 0. Note, however, that the yum utility treats an unset Epoch as the same as an Epoch of 0 for the purposes of depsolving. Listing Epoch in a spec file is usually omitted because in the majority of cases introducing an Epoch value skews the expected RPM behavior when comparing versions of packages. Example 5.2. Using Epoch If you install the foobar package with Epoch: 1 and Version: 1.0 , and someone else packages foobar with Version: 2.0 but without the Epoch directive, the new version will never be considered an update. The reason is that the Epoch version is preferred over the traditional Name-Version-Release marker that signifies versioning for RPM packages. Epoch is thus rarely used. However, Epoch is typically used to resolve an upgrade ordering issue. The issue can appear as a side effect of an upstream change in software version number schemes or versions incorporating alphabetical characters that cannot always be compared reliably based on encoding. 5.3.2. Scriptlets directives Scriptlets are a series of RPM directives that are executed before or after packages are installed or deleted. Use Scriptlets only for tasks that cannot be done at build time or in a start-up script. A set of common Scriptlet directives exists. They are similar to the spec file section headers, such as %build or %install . They are defined by multi-line segments of code, which are often written as a standard POSIX shell script. However, they can also be written in other programming languages that RPM for the target machine's distribution accepts. RPM Documentation includes an exhaustive list of available languages. The following table includes Scriptlet directives listed in their execution order. Note that a package containing the scripts is installed between the %pre and %post directive, and it is uninstalled between the %preun and %postun directive. Table 5.2. Scriptlet directives Directive Definition %pretrans Scriptlet that is executed just before installing or removing any package. %pre Scriptlet that is executed just before installing the package on the target system. %post Scriptlet that is executed just after the package was installed on the target system. %preun Scriptlet that is executed just before uninstalling the package from the target system. %postun Scriptlet that is executed just after the package was uninstalled from the target system. %posttrans Scriptlet that is executed at the end of the transaction. 5.3.3. Turning off a scriptlet execution The following procedure describes how to turn off the execution of any scriptlet using the rpm command together with the --no<scriptlet_name> option. Procedure For example, to turn off the execution of the %pretrans scriptlets, run: You can also use the --noscripts option, which is equivalent to all of the following: --nopre --nopost --nopreun --nopostun --nopretrans --noposttrans Additional resources rpm(8) man page. 5.3.4. Scriptlets macros The Scriptlets directives also work with RPM macros. The following example shows the use of the systemd scriptlet macro, which ensures that systemd is notified about a new unit file.
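The original listing for this systemd example is kept separately from this text. A minimal hedged sketch of the pattern (assuming the package ships a systemd unit named after the package) is:

%post
%systemd_post %{name}.service

%preun
%systemd_preun %{name}.service

%postun
%systemd_postun_with_restart %{name}.service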
5.3.6. Using non-shell scripts in a spec file The -p scriptlet option in a spec file enables the user to invoke a specific interpreter instead of the default shell script interpreter ( -p /bin/sh ). The following procedure describes how to create a script that prints out a message after installation of the pello.py program: Procedure Open the pello.spec file. Find the following line: Under the above line, insert: Build your package as described in Building RPMs . Install your package: Check the output message after the installation: Note To use a Python 3 script, include the following line under install -m in a spec file: To use a Lua script, include the following line under install -m in a spec file: This way, you can specify any interpreter in a spec file. 5.4. RPM conditionals RPM conditionals enable conditional inclusion of various sections of the spec file. Conditional inclusions usually deal with: Architecture-specific sections Operating system-specific sections Compatibility issues between various versions of operating systems Existence and definition of macros 5.4.1. RPM conditionals syntax RPM conditionals use the following syntax: If expression is true, then do some action: If expression is true, then do some action; otherwise, do another action: 5.4.2. The %if conditionals The following examples show the use of %if RPM conditionals. Example 5.3. Using the %if conditional to handle compatibility between Red Hat Enterprise Linux 8 and other operating systems This conditional handles compatibility between RHEL 8 and other operating systems in terms of support of the AS_FUNCTION_DESCRIBE macro. If the package is built for RHEL, the %rhel macro is defined and expanded to the RHEL version. If its value is 8, meaning the package is built for RHEL 8, then the references to AS_FUNCTION_DESCRIBE, which is not supported by RHEL 8, are deleted from autoconfig scripts. Example 5.4. Using the %if conditional to handle definition of macros This conditional handles definition of macros. If the %milestone or the %revision macros are set, the %ruby_archive macro, which defines the name of the upstream tarball, is redefined. 5.4.3. Specialized variants of %if conditionals The %ifarch conditional, %ifnarch conditional and %ifos conditional are specialized variants of the %if conditionals. These variants are commonly used, hence they have their own macros. The %ifarch conditional The %ifarch conditional is used to begin a block of the spec file that is architecture-specific. It is followed by one or more architecture specifiers, each separated by commas or whitespace. Example 5.5. An example use of the %ifarch conditional All the contents of the spec file between %ifarch and %endif are processed only on the 32-bit AMD and Intel architectures or Sun SPARC-based systems. The %ifnarch conditional The %ifnarch conditional has the reverse logic of the %ifarch conditional. Example 5.6. An example use of the %ifnarch conditional All the contents of the spec file between %ifnarch and %endif are processed only if the build is not done on a Digital Alpha/AXP-based system. The %ifos conditional The %ifos conditional is used to control processing based on the operating system of the build. It can be followed by one or more operating system names. Example 5.7. An example use of the %ifos conditional All the contents of the spec file between %ifos and %endif are processed only if the build was done on a Linux system.
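A minimal sketch combining the %if syntax with the %{?rhel} macro shown above; the package names are illustrative only:
%if 0%{?rhel} >= 8
BuildRequires: python3-devel
%else
BuildRequires: python2-devel
%endif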
5.5. Packaging Python 3 RPMs Most Python projects use Setuptools for packaging, and define package information in the setup.py file. For more information about Setuptools packaging, see the Setuptools documentation . You can also package your Python project into an RPM package, which provides the following advantages compared to Setuptools packaging: Specification of dependencies of a package on other RPMs (even non-Python) Cryptographic signing With cryptographic signing, content of RPM packages can be verified, integrated, and tested with the rest of the operating system. 5.5.1. The spec file description for a Python package A spec file contains instructions that the rpmbuild utility uses to build an RPM. The instructions are included in a series of sections. A spec file has two main parts in which the sections are defined: Preamble (contains a series of metadata items that are used in the Body) Body (contains the main part of the instructions) An RPM spec file for Python projects has some specifics compared to non-Python RPM spec files. Most notably, the name of any RPM package of a Python library must always include a prefix determining the Python version, for example, python3 for Python 3.6, python38 for Python 3.8, python39 for Python 3.9, python3.11 for Python 3.11, or python3.12 for Python 3.12. Other specifics are shown in the following spec file example for the python3-detox package . For a description of these specifics, see the notes below the example. %global modname detox 1 Name: python3-detox 2 Version: 0.12 Release: 4%{?dist} Summary: Distributing activities of the tox tool License: MIT URL: https://pypi.io/project/detox Source0: https://pypi.io/packages/source/d/%{modname}/%{modname}-%{version}.tar.gz BuildArch: noarch BuildRequires: python36-devel 3 BuildRequires: python3-setuptools BuildRequires: python36-rpm-macros BuildRequires: python3-six BuildRequires: python3-tox BuildRequires: python3-py BuildRequires: python3-eventlet %?python_enable_dependency_generator 4 %description Detox is the distributed version of the tox python testing tool. It makes efficient use of multiple CPUs by running all possible activities in parallel. Detox has the same options and configuration that tox has, so after installation you can run it in the same way and with the same options that you use for tox. $ detox %prep %autosetup -n %{modname}-%{version} %build %py3_build 5 %install %py3_install %check %{__python3} setup.py test 6 %files -n python3-%{modname} %doc CHANGELOG %license LICENSE %{_bindir}/detox %{python3_sitelib}/%{modname}/ %{python3_sitelib}/%{modname}-%{version}* %changelog ... 1 The modname macro contains the name of the Python project. In this example it is detox . 2 When packaging a Python project into RPM, the python3 prefix always needs to be added to the original name of the project. The original name here is detox and the name of the RPM is python3-detox . 3 BuildRequires specifies what packages are required to build and test this package. In BuildRequires, always include items providing tools necessary for building Python packages: python36-devel and python3-setuptools . The python36-rpm-macros package is required so that files with /usr/bin/python3 interpreter directives are automatically changed to /usr/bin/python3.6 . 4 Every Python package requires some other packages to work correctly. Such packages need to be specified in the spec file as well.
To specify the dependencies, you can use the %python_enable_dependency_generator macro to automatically use dependencies defined in the setup.py file. If a package has dependencies that are not specified using Setuptools, specify them within additional Requires directives. 5 The %py3_build and %py3_install macros run the setup.py build and setup.py install commands, respectively, with additional arguments to specify installation locations, the interpreter to use, and other details. 6 The %check section runs the test suite with the correct version of Python. The %{__python3} macro contains a path for the Python 3 interpreter, for example /usr/bin/python3 . Always use the macro rather than a literal path. 5.5.2. Common macros for Python 3 RPMs In a spec file, always use the macros that are described in the following Macros for Python 3 RPMs table rather than hardcoding their values. In macro names, always use python3 or python2 instead of unversioned python . Configure the particular Python 3 version in the BuildRequires section of the spec file to python36-rpm-macros , python38-rpm-macros , python39-rpm-macros , python3.11-rpm-macros , or python3.12-rpm-macros . Table 5.3. Macros for Python 3 RPMs Macro Normal Definition Description %{__python3} /usr/bin/python3 Python 3 interpreter %{python3_version} 3.6 The full version of the Python 3 interpreter. %{python3_sitelib} /usr/lib/python3.6/site-packages Where pure-Python modules are installed. %{python3_sitearch} /usr/lib64/python3.6/site-packages Where modules containing architecture-specific extensions are installed. %py3_build Runs the setup.py build command with arguments suitable for a system package. %py3_install Runs the setup.py install command with arguments suitable for a system package. 5.5.3. Automatic provides for Python RPMs When packaging a Python project, make sure that the following directories are included in the resulting RPM if these directories are present: .dist-info .egg-info .egg-link From these directories, the RPM build process automatically generates virtual pythonX.Ydist provides, for example, python3.6dist(detox) . These virtual provides are used by packages that are specified by the %python_enable_dependency_generator macro. 5.6. Handling interpreter directives in Python scripts In Red Hat Enterprise Linux 8, executable Python scripts are expected to use interpreter directives (also known as hashbangs or shebangs) that explicitly specify at a minimum the major Python version. For example: The /usr/lib/rpm/redhat/brp-mangle-shebangs buildroot policy (BRP) script is run automatically when building any RPM package, and attempts to correct interpreter directives in all executable files. The BRP script generates errors when encountering a Python script with an ambiguous interpreter directive, such as: or 5.6.1. Modifying interpreter directives in Python scripts Modify interpreter directives in the Python scripts that cause the build errors at RPM build time. Prerequisites Some of the interpreter directives in your Python scripts cause a build error. Procedure To modify interpreter directives, complete one of the following tasks: Apply the pathfix.py script from the platform-python-devel package: Note that multiple PATHs can be specified. If a PATH is a directory, pathfix.py recursively scans for any Python scripts matching the pattern ^[a-zA-Z0-9_]+\.py$ , not only those with an ambiguous interpreter directive. Add this command to the %prep section or at the end of the %install section.
Modify the packaged Python scripts so that they conform to the expected format. For this purpose, pathfix.py can be used outside the RPM build process, too. When running pathfix.py outside an RPM build, replace %{__python3} from the example above with a path for the interpreter directive, such as /usr/bin/python3 . If the packaged Python scripts require a version other than Python 3.6, adjust the preceding commands to include the required version. 5.6.2. Changing /usr/bin/python3 interpreter directives in your custom packages By default, interpreter directives in the form of /usr/bin/python3 are replaced with interpreter directives pointing to Python from the platform-python package, which is used for system tools with Red Hat Enterprise Linux. You can change the /usr/bin/python3 interpreter directives in your custom packages to point to a specific version of Python that you have installed from the AppStream repository. Procedure To build your package for a specific version of Python, add the python*-rpm-macros subpackage of the respective python package to the BuildRequires section of the spec file. For example, for Python 3.6, include the following line: As a result, the /usr/bin/python3 interpreter directives in your custom package are automatically converted to /usr/bin/python3.6 . Note To prevent the BRP script from checking and modifying interpreter directives, use the following RPM directive: 5.7. RubyGems packages This section explains what RubyGems packages are, and how to re-package them into RPM. 5.7.1. What RubyGems are Ruby is a dynamic, interpreted, reflective, object-oriented, general-purpose programming language. Programs written in Ruby are typically packaged using the RubyGems project, which provides a specific Ruby packaging format. Packages created by RubyGems are called gems, and they can be re-packaged into RPM as well. Note This documentation refers to terms related to the RubyGems concept with the gem prefix, for example .gemspec is used for the gem specification , and terms related to RPM are unqualified. 5.7.2. How RubyGems relate to RPM RubyGems represent Ruby's own packaging format. However, RubyGems contain metadata similar to that needed by RPM, which enables the conversion from RubyGems to RPM. According to Ruby Packaging Guidelines , it is possible to re-package RubyGems packages into RPM in this way: Such RPMs fit with the rest of the distribution. End users are able to satisfy dependencies of a gem by installing the appropriate RPM-packaged gem. RubyGems use terminology similar to RPM, such as spec files, package names, dependencies and other items. To fit into the rest of the RHEL RPM distribution, packages created by RubyGems must follow the conventions listed below: Names of gems must follow this pattern: To implement a shebang line, the following string must be used: 5.7.3. Creating RPM packages from RubyGems packages To create a source RPM for a RubyGems package, the following files are needed: A gem file An RPM spec file The following sections describe how to create RPM packages from packages created by RubyGems. 5.7.3.1. RubyGems spec file conventions A RubyGems spec file must meet the following conventions: Contain a definition of %{gem_name} , which is the name from the gem's specification. The source of the package must be the full URL to the released gem archive; the version of the package must be the gem's version. Contain the BuildRequires: directive defined as follows to pull in the macros needed for the build.
Not contain any RubyGems Requires or Provides , because those are autogenerated. Not contain the BuildRequires: directive defined as follows, unless you want to explicitly specify Ruby version compatibility: The automatically generated dependency on RubyGems ( Requires: ruby(rubygems) ) is sufficient. 5.7.3.2. RubyGems macros The following table lists macros useful for packages created by RubyGems. These macros are provided by the rubygems-devel package. Table 5.4. RubyGems' macros Macro name Extended path Usage %{gem_dir} /usr/share/gems Top directory for the gem structure. %{gem_instdir} %{gem_dir}/gems/%{gem_name}-%{version} Directory with the actual content of the gem. %{gem_libdir} %{gem_instdir}/lib The library directory of the gem. %{gem_cache} %{gem_dir}/cache/%{gem_name}-%{version}.gem The cached gem. %{gem_spec} %{gem_dir}/specifications/%{gem_name}-%{version}.gemspec The gem specification file. %{gem_docdir} %{gem_dir}/doc/%{gem_name}-%{version} The RDoc documentation of the gem. %{gem_extdir_mri} %{_libdir}/gems/ruby/%{gem_name}-%{version} The directory for gem extensions. 5.7.3.3. RubyGems spec file example An example spec file for building gems, together with an explanation of its particular sections, follows. An example RubyGems spec file The following table explains the specifics of particular items in a RubyGems spec file: Table 5.5. RubyGems' spec directives specifics Directive RubyGems specifics %prep RPM can directly unpack gem archives, so you can run the gem unpack command to extract the source from the gem. The %setup -n %{gem_name}-%{version} macro provides the directory into which the gem has been unpacked. At the same directory level, the %{gem_name}-%{version}.gemspec file is automatically created, which can be used to rebuild the gem later, to modify the .gemspec , or to apply patches to the code. %build This directive includes commands or a series of commands for building the software into machine code. The %gem_install macro operates only on gem archives, so the gem is recreated with the gem build command. The gem file that is created is then used by %gem_install to build and install the code into a temporary directory, which is ./%{gem_dir} by default. The %gem_install macro both builds and installs the code in one step. Before being installed, the built sources are placed into a temporary directory that is created automatically. The %gem_install macro accepts two additional options: -n <gem_file> , which allows you to override the gem used for installation, and -d <install_dir> , which might override the gem installation destination; using this option is not recommended. The %gem_install macro must not be used to install into the %{buildroot} . %install The installation is performed into the %{buildroot} hierarchy. You can create the directories that you need and then copy what was installed in the temporary directories into the %{buildroot} hierarchy. If this gem creates shared objects, they are moved into the architecture-specific %{gem_extdir_mri} path. Additional resources Ruby Packaging Guidelines 5.7.3.4. Converting RubyGems packages to RPM spec files with gem2rpm The gem2rpm utility converts RubyGems packages to RPM spec files. The following sections describe how to: Install the gem2rpm utility Display all gem2rpm options Use gem2rpm to convert RubyGems packages to RPM spec files Edit gem2rpm templates 5.7.3.4.1. Installing gem2rpm The following procedure describes how to install the gem2rpm utility. Procedure To install gem2rpm from RubyGems.org , run:
5.7.3.4.2. Displaying all options of gem2rpm The following procedure describes how to display all options of the gem2rpm utility. Procedure To see all options of gem2rpm , run: 5.7.3.4.3. Using gem2rpm to convert RubyGems packages to RPM spec files The following procedure describes how to use the gem2rpm utility to convert RubyGems packages to RPM spec files. Procedure Download a gem in its latest version, and generate the RPM spec file for this gem: The described procedure creates an RPM spec file based on the information provided in the gem's metadata. However, the gem metadata misses some important information that is usually provided in RPMs, such as the license and the changelog. The generated spec file thus needs to be edited. 5.7.3.4.4. gem2rpm templates The gem2rpm template is a standard Embedded Ruby (ERB) file, which includes variables listed in the following table. Table 5.6. Variables in the gem2rpm template Variable Explanation package The Gem::Package variable for the gem. spec The Gem::Specification variable for the gem (the same as format.spec). config The Gem2Rpm::Configuration variable that can redefine default macros or rules used in spec template helpers. runtime_dependencies The Gem2Rpm::RpmDependencyList variable providing a list of package runtime dependencies. development_dependencies The Gem2Rpm::RpmDependencyList variable providing a list of package development dependencies. tests The Gem2Rpm::TestSuite variable providing a list of test frameworks allowing their execution. files The Gem2Rpm::RpmFileList variable providing an unfiltered list of files in a package. main_files The Gem2Rpm::RpmFileList variable providing a list of files suitable for the main package. doc_files The Gem2Rpm::RpmFileList variable providing a list of files suitable for the -doc subpackage. format The Gem::Format variable for the gem. Note that this variable is now deprecated. 5.7.3.4.5. Listing available gem2rpm templates Use the following procedure to list all available gem2rpm templates. Procedure To see all available templates, run: 5.7.3.4.6. Editing gem2rpm templates You can edit the template from which the RPM spec file is generated instead of editing the generated spec file. Use the following procedure to edit the gem2rpm templates. Procedure Save the default template: Edit the template as needed. Generate the spec file by using the edited template: You can now build an RPM package by using the edited template as described in Building RPMs . 5.8. How to handle RPM packages with Perl scripts Since RHEL 8, the Perl programming language is not included in the default buildroot. Therefore, the RPM packages that include Perl scripts must explicitly indicate the dependency on Perl using the BuildRequires: directive in the RPM spec file. 5.8.1. Common Perl-related dependencies The most frequently occurring Perl-related build dependencies used in BuildRequires: are: perl-generators Automatically generates run-time Requires and Provides for installed Perl files. When you install a Perl script or a Perl module, you must include a build dependency on this package. perl-interpreter The Perl interpreter must be listed as a build dependency if it is called in any way, either explicitly via the perl package or the %__perl macro, or as a part of your package's build system. perl-devel Provides Perl header files. If building architecture-specific code which links to the libperl.so library, such as an XS Perl module, you must include BuildRequires: perl-devel . A combined sketch of these dependencies follows this list.
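A minimal sketch of a BuildRequires block that pulls in all three dependencies; the perl(Test::More) line is an illustrative module requirement of the kind covered in the next section:
BuildRequires: perl-generators
BuildRequires: perl-interpreter
BuildRequires: perl-devel
BuildRequires: perl(Test::More)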
5.8.2. Using a specific Perl module If a specific Perl module is required at build time, use the following procedure: Procedure Apply the following syntax in your RPM spec file: Note Apply this syntax to Perl core modules as well, because they can move in and out of the perl package over time. 5.8.3. Limiting a package to a specific Perl version To limit your package to a specific Perl version, follow this procedure: Procedure Use the perl(:VERSION) dependency with the desired version constraint in your RPM spec file: For example, to limit a package to Perl version 5.22 and later, use: Warning Do not use a comparison against the version of the perl package because it includes an epoch number. 5.8.4. Ensuring that a package uses the correct Perl interpreter Red Hat provides multiple Perl interpreters, which are not fully compatible. Therefore, any package that delivers a Perl module must use at run time the same Perl interpreter that was used at build time. To ensure this, follow the procedure below: Procedure Include versioned MODULE_COMPAT Requires in the RPM spec file for any package that delivers a Perl module:
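A sketch of the resulting spec file lines, combining the module, version, and interpreter-compatibility requirements from this section; the perl(Test::More) module is an illustrative example:
BuildRequires: perl(Test::More)
Requires: perl(:VERSION) >= 5.22
Requires: perl(:MODULE_COMPAT_%(eval `perl -V:version`; echo $version))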
[ "gpg --gen-key", "gpg --list-keys", "gpg --export -a '<Key_name>' > RPM-GPG-KEY-pmanager", "rpm --import RPM-GPG-KEY-pmanager", "%_gpg_name Key ID", "rpm --addsign package-name .rpm", "%global <name>[(opts)] <body>", "Executing(%prep): /bin/sh -e /var/tmp/rpm-tmp.DhddsG", "cd '/builddir/build/BUILD' rm -rf 'cello-1.0' /usr/bin/gzip -dc '/builddir/build/SOURCES/cello-1.0.tar.gz' | /usr/bin/tar -xof - STATUS=USD? if [ USDSTATUS -ne 0 ]; then exit USDSTATUS fi cd 'cello-1.0' /usr/bin/chmod -Rf a+rX,u+w,g-w,o-w .", "Name: cello Source0: https://example.com/%{name}/release/hello-%{version}.tar.gz ... %prep %setup -n hello", "/usr/bin/mkdir -p cello-1.0 cd 'cello-1.0'", "rm -rf 'cello-1.0'", "/usr/bin/gzip -dc '/builddir/build/SOURCES/cello-1.0.tar.gz' | /usr/bin/tar -xvvof -", "Source0: https://example.com/%{name}/release/%{name}-%{version}.tar.gz Source1: examples.tar.gz ... %prep %setup -a 1", "Source0: https://example.com/%{name}/release/%{name}-%{version}.tar.gz Source1: %{name}-%{version}-examples.tar.gz ... %prep %setup -b 1", "--showrc", "-ql rpm", "--showrc", "--eval %{_MACRO}", "%_topdir /opt/some/working/directory/rpmbuild", "%_smp_mflags -l3", "rpm --nopretrans", "rpm --showrc | grep systemd -14: __transaction_systemd_inhibit %{__plugindir}/systemd_inhibit.so -14: _journalcatalogdir /usr/lib/systemd/catalog -14: _presetdir /usr/lib/systemd/system-preset -14: _unitdir /usr/lib/systemd/system -14: _userunitdir /usr/lib/systemd/user /usr/lib/systemd/systemd-binfmt %{?*} >/dev/null 2>&1 || : /usr/lib/systemd/systemd-sysctl %{?*} >/dev/null 2>&1 || : -14: systemd_post -14: systemd_postun -14: systemd_postun_with_restart -14: systemd_preun -14: systemd_requires Requires(post): systemd Requires(preun): systemd Requires(postun): systemd -14: systemd_user_post %systemd_post --user --global %{?*} -14: systemd_user_postun %{nil} -14: systemd_user_postun_with_restart %{nil} -14: systemd_user_preun systemd-sysusers %{?*} >/dev/null 2>&1 || : echo %{?*} | systemd-sysusers - >/dev/null 2>&1 || : systemd-tmpfiles --create %{?*} >/dev/null 2>&1 || : rpm --eval %{systemd_post} if [ USD1 -eq 1 ] ; then # Initial installation systemctl preset >/dev/null 2>&1 || : fi rpm --eval %{systemd_postun} systemctl daemon-reload >/dev/null 2>&1 || : rpm --eval %{systemd_preun} if [ USD1 -eq 0 ] ; then # Package removal, not upgrade systemctl --no-reload disable > /dev/null 2>&1 || : systemctl stop > /dev/null 2>&1 || : fi", "all-%pretrans ... any-%triggerprein (%triggerprein from other packages set off by new install) new-%triggerprein new-%pre for new version of package being installed ... (all new files are installed) new-%post for new version of package being installed any-%triggerin (%triggerin from other packages set off by new install) new-%triggerin old-%triggerun any-%triggerun (%triggerun from other packages set off by old uninstall) old-%preun for old version of package being removed ... (all old files are removed) old-%postun for old version of package being removed old-%triggerpostun any-%triggerpostun (%triggerpostun from other packages set off by old un install) ... all-%posttrans", "install -m 0644 %{name}.py* %{buildroot}/usr/lib/%{name}/", "%post -p /usr/bin/python3 print(\"This is {} code\".format(\"python\"))", "yum install /home/<username>/rpmbuild/RPMS/noarch/pello-0.1.2-1.el8.noarch.rpm", "Installing : pello-0.1.2-1.el8.noarch 1/1 Running scriptlet: pello-0.1.2-1.el8.noarch 1/1 This is python code", "%post -p /usr/bin/python3", "%post -p <lua>", "%if expression ... 
%endif", "%if expression ... %else ... %endif", "%if 0%{?rhel} == 8 sed -i '/AS_FUNCTION_DESCRIBE/ s/^/#/' configure.in sed -i '/AS_FUNCTION_DESCRIBE/ s/^/#/' acinclude.m4 %endif", "%define ruby_archive %{name}-%{ruby_version} %if 0%{?milestone:1}%{?revision:1} != 0 %define ruby_archive %{ruby_archive}-%{?milestone}%{?!milestone:%{?revision:r%{revision}}} %endif", "%ifarch i386 sparc ... %endif", "%ifnarch alpha ... %endif", "%ifos linux ... %endif", "%global modname detox 1 Name: python3-detox 2 Version: 0.12 Release: 4%{?dist} Summary: Distributing activities of the tox tool License: MIT URL: https://pypi.io/project/detox Source0: https://pypi.io/packages/source/d/%{modname}/%{modname}-%{version}.tar.gz BuildArch: noarch BuildRequires: python36-devel 3 BuildRequires: python3-setuptools BuildRequires: python36-rpm-macros BuildRequires: python3-six BuildRequires: python3-tox BuildRequires: python3-py BuildRequires: python3-eventlet %?python_enable_dependency_generator 4 %description Detox is the distributed version of the tox python testing tool. It makes efficient use of multiple CPUs by running all possible activities in parallel. Detox has the same options and configuration that tox has, so after installation you can run it in the same way and with the same options that you use for tox. USD detox %prep %autosetup -n %{modname}-%{version} %build %py3_build 5 %install %py3_install %check %{__python3} setup.py test 6 %files -n python3-%{modname} %doc CHANGELOG %license LICENSE %{_bindir}/detox %{python3_sitelib}/%{modname}/ %{python3_sitelib}/%{modname}-%{version}* %changelog", "#!/usr/bin/python3 #!/usr/bin/python3.6 #!/usr/bin/python3.8 #!/usr/bin/python3.9 #!/usr/bin/python3.11 #!/usr/bin/python3.12 #!/usr/bin/python2", "#!/usr/bin/python", "#!/usr/bin/env python", "pathfix.py -pn -i %{__python3} PATH ...", "BuildRequires: python36-rpm-macros", "%undefine __brp_mangle_shebangs", "rubygem-%{gem_name}", "#!/usr/bin/ruby", "BuildRequires:rubygems-devel", "Requires: ruby(release)", "%prep %setup -q -n %{gem_name}-%{version} Modify the gemspec if necessary Also apply patches to code if necessary %patch0 -p1 %build Create the gem as gem install only works on a gem file gem build ../%{gem_name}-%{version}.gemspec %%gem_install compiles any C extensions and installs the gem into ./%%gem_dir by default, so that we can move it into the buildroot in %%install %gem_install %install mkdir -p %{buildroot}%{gem_dir} cp -a ./%{gem_dir}/* %{buildroot}%{gem_dir}/ If there were programs installed: mkdir -p %{buildroot}%{_bindir} cp -a ./%{_bindir}/* %{buildroot}%{_bindir} If there are C extensions, copy them to the extdir. mkdir -p %{buildroot}%{gem_extdir_mri} cp -a .%{gem_extdir_mri}/{gem.build_complete,*.so} %{buildroot}%{gem_extdir_mri}/", "gem install gem2rpm", "gem2rpm --help", "gem2rpm --fetch <gem_name> > <gem_name>.spec", "gem2rpm --templates", "gem2rpm -T > rubygem-<gem_name>.spec.template", "gem2rpm -t rubygem-<gem_name>.spec.template <gem_name>-<latest_version.gem > <gem_name>-GEM.spec", "BuildRequires: perl(MODULE)", "BuildRequires: perl(:VERSION) >= 5.22", "Requires: perl(:MODULE_COMPAT_%(eval `perl -V:version`; echo USDversion))" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/packaging_and_distributing_software/advanced-topics
Chapter 9. Configuring Desktop with GSettings and dconf
Chapter 9. Configuring Desktop with GSettings and dconf 9.1. Terminology Explained: GSettings, gsettings, and dconf This section defines several terms that are easily confused. dconf dconf is a key-based configuration system which manages user settings. It is the back end for GSettings used in Red Hat Enterprise Linux 7. dconf manages a range of different settings, including GDM , application, and proxy settings. dconf The dconf command-line utility is used for reading and writing individual values or entire directories from and to a dconf database. GSettings GSettings is a high-level API for application settings and a front end for dconf . gsettings The gsettings command-line tool is used to view and change user settings.
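To illustrate the difference between the two command-line tools, a minimal sketch that reads and writes the same setting with both; the org.gnome.desktop.session schema and its idle-delay key are standard GNOME settings used here as an example, and the value is illustrative:
gsettings get org.gnome.desktop.session idle-delay
gsettings set org.gnome.desktop.session idle-delay 600
dconf read /org/gnome/desktop/session/idle-delay
dconf write /org/gnome/desktop/session/idle-delay "uint32 600"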
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/desktop_migration_and_administration_guide/configuration-overview-gsettings-dconf
Chapter 2. Customizing the dashboard
Chapter 2. Customizing the dashboard The Red Hat OpenStack Platform (RHOSP) dashboard (horizon) uses a default theme (RCUE), which is stored inside the horizon container. You can add your own theme to the container image and customize certain parameters to change the look and feel of the following dashboard elements: Logo Site colors Stylesheets HTML title Site branding link Help URL Note To ensure continued support for modified RHOSP container images, the resulting images must comply with the Red Hat Container Support Policy . 2.1. Obtaining the horizon container image To obtain a copy of the horizon container image, pull the image either into the undercloud or a separate client system that is running podman. Procedure Pull the horizon container image: You can use this image as a basis for a modified image. 2.2. Obtaining the RCUE theme The horizon container image uses the Red Hat branded RCUE theme by default. You can use this theme as a basis for your own theme and extract a copy from the container image. Procedure Create a directory for your theme: Start a container that executes a null loop. For example, run the following command: Copy the RCUE theme from the container to your local directory: Terminate the container: Result: You now have a local copy of the RCUE theme. 2.3. Creating your own theme based on RCUE To use RCUE as a basis, copy the entire RCUE theme directory rcue to a new location. This procedure uses mytheme as an example name. Procedure Copy the theme: To change the colors, graphics, fonts, and other elements of a theme, edit the files in mytheme. When you edit this theme, check for all instances of rcue including paths, files, and directories to ensure that you change them to the new mytheme name. 2.4. Creating a file to enable your theme and customize the dashboard To enable your theme in the dashboard container, you must create a file to override the AVAILABLE_THEMES parameter. Procedure Create a new file called _12_mytheme_theme.py in the horizon-themes directory and add the following content: The 12 in the file name ensures this file is loaded after the RCUE file, which uses 11 , and overrides the AVAILABLE_THEMES parameter. Optional: You can also set custom parameters in the _12_mytheme_theme.py file. Use the following examples as a guide: SITE_BRANDING Set the HTML title that appears at the top of the browser window. SITE_BRANDING_LINK Changes the hyperlink of the theme logo, which redirects to horizon:user_home by default. 2.5. Generating a modified horizon image When your custom theme is ready, you can create a new container image that uses your theme. Procedure Use a Dockerfile to generate a new container image using the original horizon image as a basis, as shown in the following example: FROM registry.redhat.io/rhosp-rhel8/openstack-horizon MAINTAINER Acme LABEL name="rhosp-rhel8/openstack-horizon-mytheme" vendor="Acme" version="0" release="1" COPY mytheme /usr/share/openstack-dashboard/openstack_dashboard/themes/mytheme COPY _12_mytheme_theme.py /etc/openstack-dashboard/local_settings.d/_12_mytheme_theme.py RUN sudo chown apache:apache /etc/openstack-dashboard/local_settings.d/_12_mytheme_theme.py Save this file in your horizon-themes directory as Dockerfile . Use the Dockerfile to generate the new image: $ sudo podman build . -t "172.24.10.10:8787/rhosp-rhel8/openstack-horizon:0-5" --log-level debug The -t option names and tags the resulting image.
It uses the following syntax: LOCATION This is usually the location of the container registry that the overcloud eventually uses to pull images. In this instance, you push this image to the container registry of the undercloud, so set this to the undercloud IP and port. NAME For consistency, this is usually the same name as the original container image followed by the name of your theme. In this instance, it is rhosp-rhel8/openstack-horizon-mytheme . TAG The tag for the image. Red Hat uses the version and release labels as a basis for this tag. If you generate a new version of this image, increment the release , for example, 0-2 . Push the image to the container registry of the undercloud: $ sudo openstack tripleo container image push --local 172.24.10.10:8787/rhosp-rhel8/openstack-horizon:0-5 Verify that the image has uploaded to the local registry: [stack@director horizon-themes]$ curl http://172.24.10.10:8787/v2/_catalog | jq .repositories[] | grep -i hori "rhosp-rhel8/openstack-horizon" [stack@director horizon-themes]$ [stack@director ~]$ sudo openstack tripleo container image list | grep hor | docker://director.ctlplane.localdomain:8787/rhosp-rhel8/openstack-horizon:16.0-84 | docker://director.ctlplane.localdomain:8787/rhosp-rhel8/openstack-horizon:0-5 <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< Uploaded [stack@director ~]$ Important If you update or upgrade Red Hat OpenStack Platform, you must reapply the theme to the new horizon image and push a new version of the modified image to the undercloud. 2.6. Using the modified container image in the overcloud To use the container image that you modified with your overcloud deployment, edit the environment file that contains the list of container image locations. This environment file is usually named overcloud-images.yaml . Procedure Edit the ContainerHorizonConfigImage and ContainerHorizonImage parameters to point to your modified container image: Save this new version of the overcloud-images.yaml file. 2.7. Editing puppet parameters Director provides a set of dashboard parameters that you can modify with environment files. Procedure Use the ExtraConfig parameter to set Puppet hieradata. For example, the default help URL points to https://access.redhat.com/documentation/en/red-hat-openstack-platform . To modify this URL, use the following environment file content and replace the URL: Additional resources Dashboard parameters 2.8. Deploying an overcloud with a customized dashboard Procedure To deploy the overcloud with your dashboard customizations, include the following environment files in the openstack overcloud deploy command: The environment file with your modified container image locations. The environment file with additional dashboard modifications. Any other environment files that are relevant to your overcloud configuration.
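A sketch of the final deployment command, assuming the environment file names used earlier in this chapter:
openstack overcloud deploy --templates \
  -e /home/stack/templates/overcloud-images.yaml \
  -e /home/stack/templates/help_url.yaml \
  [OTHER OPTIONS]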
[ "sudo podman pull registry.redhat.io/rhosp-rhel8/openstack-horizon:17.0", "mkdir ~/horizon-themes cd ~/horizon-themes", "sudo podman run --rm -d --name horizon-temp registry.redhat.io/rhosp-rhel8/openstack-horizon /usr/bin/sleep infinity", "sudo podman cp horizon-temp:/usr/share/openstack-dashboard/openstack_dashboard/themes/rcue .", "sudo podman kill horizon-temp", "cp -r rcue mytheme", "AVAILABLE_THEMES = [('mytheme', 'My Custom Theme', 'themes/mytheme')]", "SITE_BRANDING = \"Example, Inc. Cloud\"", "SITE_BRANDING_LINK = \"http://example.com\"", "FROM registry.redhat.io/rhosp-rhel8/openstack-horizon MAINTAINER Acme LABEL name=\"rhosp-rhel8/openstack-horizon-mytheme\" vendor=\"Acme\" version=\"0\" release=\"1\" COPY mytheme /usr/share/openstack-dashboard/openstack_dashboard/themes/mytheme COPY _12_mytheme_theme.py /etc/openstack-dashboard/local_settings.d/_12_mytheme_theme.py RUN sudo chown apache:apache /etc/openstack-dashboard/local_settings.d/_12_mytheme_theme.py", "sudo podman build . -t \"172.24.10.10:8787/rhosp-rhel8/openstack-horizon:0-5\" --log-level debug", "[LOCATION]/[NAME]:[TAG]", "sudo openstack tripleo container image push --local 172.24.10.10:8787/rhosp-rhel8/openstack-horizon:0-5", "[stack@director horizon-themes]$ curl http://172.24.10.10:8787/v2/_catalog | jq .repositories[] | grep -i hori \"rhosp-rhel8/openstack-horizon\" [stack@director horizon-themes]$ [stack@director ~]$ sudo openstack tripleo container image list | grep hor | docker://director.ctlplane.localdomain:8787/rhosp-rhel8/openstack-horizon:16.0-84 | docker://director.ctlplane.localdomain:8787/rhosp-rhel8/openstack-horizon:0-5 <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< Uploaded [stack@director ~]$", "parameter_defaults: ContainerHorizonConfigImage: 192.168.24.1:8787/rhosp-rhel8/openstack-horizon-mytheme:0-1 ContainerHorizonImage: 192.168.24.1:8787/rhosp-rhel8/openstack-horizon-mytheme:0-1", "parameter_defaults: ExtraConfig: horizon::help_url: \"http://openstack.example.com\"", "openstack overcloud deploy --templates -e /home/stack/templates/overcloud-images.yaml -e /home/stack/templates/help_url.yaml [OTHER OPTIONS]" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/introduction_to_the_openstack_dashboard/customizing-the-dashboard_osp
Chapter 2. Securing Your Network
Chapter 2. Securing Your Network 2.1. Workstation Security Securing a Linux environment begins with the workstation. Whether locking down a personal machine or securing an enterprise system, sound security policy begins with the individual computer. A computer network is only as secure as its weakest node. 2.1.1. Evaluating Workstation Security When evaluating the security of a Red Hat Enterprise Linux workstation, consider the following: BIOS and Boot Loader Security - Can an unauthorized user physically access the machine and boot into single user or rescue mode without a password? Password Security - How secure are the user account passwords on the machine? Administrative Controls - Who has an account on the system and how much administrative control do they have? Available Network Services - What services are listening for requests from the network and should they be running at all? Personal Firewalls - What type of firewall, if any, is necessary? Security Enhanced Communication Tools - Which tools should be used to communicate between workstations and which should be avoided? 2.1.2. BIOS and Boot Loader Security Password protection for the BIOS (or BIOS equivalent) and the boot loader can prevent unauthorized users who have physical access to systems from booting using removable media or obtaining root privileges through single user mode. The security measures you should take to protect against such attacks depend both on the sensitivity of the information on the workstation and the location of the machine. For example, if a machine is used in a trade show and contains no sensitive information, then it may not be critical to prevent such attacks. However, if an employee's laptop with private, unencrypted SSH keys for the corporate network is left unattended at that same trade show, it could lead to a major security breach with ramifications for the entire company. If the workstation is located in a place where only authorized or trusted people have access, however, then securing the BIOS or the boot loader may not be necessary. 2.1.2.1. BIOS Passwords The two primary reasons for password protecting the BIOS of a computer are [3] : Preventing Changes to BIOS Settings - If an intruder has access to the BIOS, they can set it to boot from a CD-ROM or a flash drive. This makes it possible for an intruder to enter rescue mode or single user mode, which in turn allows them to start arbitrary processes on the system or copy sensitive data. Preventing System Booting - Some BIOSes allow password protection of the boot process. When activated, an attacker is forced to enter a password before the BIOS launches the boot loader. Because the methods for setting a BIOS password vary between computer manufacturers, consult the computer's manual for specific instructions. If you forget the BIOS password, it can be reset either with jumpers on the motherboard or by disconnecting the CMOS battery. For this reason, it is good practice to lock the computer case if possible. However, consult the manual for the computer or motherboard before attempting to disconnect the CMOS battery. 2.1.2.1.1. Securing Non-x86 Platforms Other architectures use different programs to perform low-level tasks roughly equivalent to those of the BIOS on x86 systems. For instance, Intel® Itanium™ computers use the Extensible Firmware Interface ( EFI ) shell. For instructions on password protecting BIOS-like programs on other architectures, see the manufacturer's instructions.
2.1.2.2. Boot Loader Passwords The primary reasons for password protecting a Linux boot loader are as follows: Preventing Access to Single User Mode - If attackers can boot the system into single user mode, they are logged in automatically as root without being prompted for the root password. Warning Protecting access to single user mode with a password by editing the SINGLE parameter in the /etc/sysconfig/init file is not recommended. An attacker can bypass the password by specifying a custom initial command (using the init= parameter) on the kernel command line in GRUB. It is recommended to password-protect the GRUB boot loader as specified in Section 2.1.2.2.1, "Password Protecting GRUB" . Preventing Access to the GRUB Console - If the machine uses GRUB as its boot loader, an attacker can use the GRUB editor interface to change its configuration or to gather information using the cat command. Preventing Access to Insecure Operating Systems - If it is a dual-boot system, an attacker can select an operating system at boot time (for example, DOS), which ignores access controls and file permissions. Red Hat Enterprise Linux 6 includes the GRUB boot loader on the x86 platform. For a detailed look at GRUB, see the Red Hat Enterprise Linux Installation Guide . 2.1.2.2.1. Password Protecting GRUB You can configure GRUB to address the first two issues listed in Section 2.1.2.2, "Boot Loader Passwords" by adding a password directive to its configuration file. To do this, first choose a strong password, open a shell, log in as root, and then type the following command: /sbin/grub-md5-crypt When prompted, type the GRUB password and press Enter . This returns an MD5 hash of the password. Next, edit the GRUB configuration file /boot/grub/grub.conf . Open the file and below the timeout line in the main section of the document, add the following line: password --md5 <password-hash> Replace <password-hash> with the value returned by /sbin/grub-md5-crypt [4] . The next time the system boots, the GRUB menu prevents access to the editor or command interface without first pressing p followed by the GRUB password. Unfortunately, this solution does not prevent an attacker from booting into an insecure operating system in a dual-boot environment. For this, a different part of the /boot/grub/grub.conf file must be edited. Look for the title line of the operating system that you want to secure, and add a line with the lock directive immediately beneath it. For a DOS system, the stanza should begin similar to the following: title DOS lock Warning A password line must be present in the main section of the /boot/grub/grub.conf file for this method to work properly. Otherwise, an attacker can access the GRUB editor interface and remove the lock line. To create a different password for a particular kernel or operating system, add a lock line to the stanza, followed by a password line. Each stanza protected with a unique password should begin with lines similar to the following example: title DOS lock password --md5 <password-hash> 2.1.2.2.2. Disabling Interactive Startup Pressing the I key at the beginning of the boot sequence allows you to start up your system interactively. During an interactive startup, the system prompts you to start up each service one by one. However, this may allow an attacker who gains physical access to your system to disable the security-related services and gain access to the system. To prevent users from starting up the system interactively, as root, disable the PROMPT parameter in the /etc/sysconfig/init file:
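A sketch of the relevant line; on Red Hat Enterprise Linux 6 the parameter is set to yes by default, so change it as follows:
PROMPT=no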
2.1.3. Password Security Passwords are the primary method that Red Hat Enterprise Linux uses to verify a user's identity. This is why password security is so important for protection of the user, the workstation, and the network. For security purposes, the installation program configures the system to use Secure Hash Algorithm 512 ( SHA512 ) and shadow passwords. It is highly recommended that you do not alter these settings. If shadow passwords are deselected during installation, all passwords are stored as a one-way hash in the world-readable /etc/passwd file, which makes the system vulnerable to offline password cracking attacks. If an intruder can gain access to the machine as a regular user, he can copy the /etc/passwd file to his own machine and run any number of password cracking programs against it. If there is an insecure password in the file, it is only a matter of time before the attacker discovers it. Shadow passwords eliminate this type of attack by storing the password hashes in the file /etc/shadow , which is readable only by the root user. This forces a potential attacker to attempt password cracking remotely by logging into a network service on the machine, such as SSH or FTP. This sort of brute-force attack is much slower and leaves an obvious trail as hundreds of failed login attempts are written to system files. Of course, if the attacker starts an attack in the middle of the night on a system with weak passwords, the cracker may have gained access before dawn and edited the log files to cover his tracks. In addition to format and storage considerations is the issue of content. The single most important thing a user can do to protect his account against a password cracking attack is create a strong password. 2.1.3.1. Creating Strong Passwords When creating a secure password, the user must remember that long passwords are stronger than short but complex ones. It is not a good idea to create a password of just eight characters, even if it contains digits, special characters and uppercase letters. Password cracking tools, such as John The Ripper, are optimized for breaking such passwords, which are also hard for a person to remember. In information theory, entropy is the level of uncertainty associated with a random variable and is presented in bits. The higher the entropy value, the more secure the password is. According to NIST SP 800-63-1, passwords that are not present in a dictionary composed of 50,000 commonly selected passwords should have at least 10 bits of entropy. As such, a password that consists of four random words contains around 40 bits of entropy. A long password consisting of multiple words for added security is also called a passphrase , for example: randomword1 randomword2 randomword3 randomword4 If the system enforces the use of uppercase letters, digits, or special characters, the passphrase that follows the above recommendation can be modified in a simple way, for example by changing the first character to uppercase and appending " 1! ". Note that such a modification does not increase the security of the passphrase significantly. While there are different approaches to creating a secure password, always avoid the following bad practices: Using a single dictionary word, a word in a foreign language, an inverted word, or only numbers. Using less than 10 characters for a password or passphrase. Using a sequence of keys from the keyboard layout. Writing down your passwords. Using personal information in a password, such as birth dates, anniversaries, family member names, or pet names.
Using the same passphrase or password on multiple machines. While creating secure passwords is imperative, managing them properly is also important, especially for system administrators within larger organizations. The following section details good practices for creating and managing user passwords within an organization. 2.1.4. Creating User Passwords Within an Organization If an organization has a large number of users, the system administrators have two basic options available to force the use of good passwords. They can create passwords for the user, or they can let users create their own passwords, while verifying the passwords are of acceptable quality. Creating the passwords for the users ensures that the passwords are good, but it becomes a daunting task as the organization grows. It also increases the risk of users writing their passwords down. For these reasons, most system administrators prefer to have the users create their own passwords, but actively verify that the passwords are good and, in some cases, force users to change their passwords periodically through password aging. 2.1.4.1. Forcing Strong Passwords To protect the network from intrusion it is a good idea for system administrators to verify that the passwords used within an organization are strong ones. When users are asked to create or change passwords, they can use the command line application passwd , which is Pluggable Authentication Modules ( PAM ) aware and therefore checks to see if the password is too short or otherwise easy to crack. This check is performed using the pam_cracklib.so PAM module. In Red Hat Enterprise Linux, the pam_cracklib PAM module can be used to check a password's strength against a set of rules. It can be stacked alongside other PAM modules in the password component of the /etc/pam.d/passwd file to configure a custom set of rules for user login. The pam_cracklib routine consists of two parts: it checks whether the password provided is found in a dictionary, and, if that is not the case, it continues with a number of additional checks. For a complete list of these checks, see the pam_cracklib(8) manual page. Example 2.1. Configuring password strength-checking with pam_cracklib To require a password with a minimum length of 8 characters, including all four classes of characters, add the following line to the password section of the /etc/pam.d/passwd file: password required pam_cracklib.so retry=3 minlen=8 minclass=4 To set a password strength-check for consecutive or repetitive characters, add the following line to the password section of the /etc/pam.d/passwd file: password required pam_cracklib.so retry=3 maxsequence=3 maxrepeat=3 In this example, the password entered cannot contain more than 3 consecutive characters, such as "abcd" or "1234". Additionally, the number of identical consecutive characters is limited to 3. Note As these checks are not performed for the root user, he can set any password for a regular user, despite the warning messages. Since PAM is customizable, it is possible to add more password integrity checkers, such as pam_passwdqc (available from http://www.openwall.com/passwdqc/ ) or to write a new module. For a list of available PAM modules, see http://uw714doc.sco.com/en/SEC_pam/pam-6.html . For more information about PAM, see the Managing Single Sign-On and Smart Cards guide. The password check performed at the time of password creation does not discover bad passwords as effectively as running a password cracking program against them. Many password cracking programs are available that run under Red Hat Enterprise Linux, although none ship with the operating system.
Below is a brief list of some of the more popular password cracking programs: John The Ripper - A fast and flexible password cracking program. It allows the use of multiple word lists and is capable of brute-force password cracking. It is available online at http://www.openwall.com/john/ . Crack - Perhaps the best-known password cracking software, Crack is also very fast, though not as easy to use as John The Ripper . Slurpie - Slurpie is similar to John The Ripper and Crack , but it is designed to run on multiple computers simultaneously, creating a distributed password cracking attack. It can be found along with a number of other distributed attack security evaluation tools online at http://www.ussrback.com/distributed.htm . Warning Always get authorization in writing before attempting to crack passwords within an organization. 2.1.4.2. Passphrases Passphrases and passwords are the cornerstone of security in most of today's systems. Unfortunately, techniques such as biometrics and two-factor authentication have not yet become mainstream in many systems. If passwords are going to be used to secure a system, then the use of passphrases should be considered. Passphrases are longer than passwords and provide better protection than a password even when implemented with non-standard characters such as numbers and symbols. 2.1.4.3. Password Aging Password aging is another technique used by system administrators to defend against bad passwords within an organization. Password aging means that after a specified period (usually 90 days), the user is prompted to create a new password. The theory behind this is that if a user is forced to change his password periodically, a cracked password is only useful to an intruder for a limited amount of time. The downside to password aging, however, is that users are more likely to write their passwords down. There are two primary programs used to specify password aging under Red Hat Enterprise Linux: the chage command or the graphical User Manager ( system-config-users ) application. Important Shadow passwords must be enabled to use the chage command. For more information, see the Red Hat Enterprise Linux 6 Deployment Guide . The -M option of the chage command specifies the maximum number of days the password is valid. For example, to set a user's password to expire in 90 days, use the following command: chage -M 90 <username> In the above command, replace <username> with the name of the user. To disable password expiration, it is traditional to use a value of 99999 after the -M option (this equates to a little over 273 years). For more information on the options available with the chage command, see the table below. Table 2.1. chage command line options Option Description -d days Specifies the number of days since January 1, 1970 the password was changed. -E date Specifies the date on which the account is locked, in the format YYYY-MM-DD. Instead of the date, the number of days since January 1, 1970 can also be used. -I days Specifies the number of inactive days after the password expiration before locking the account. If the value is 0 , the account is not locked after the password expires. -l Lists current account aging settings. -m days Specifies the minimum number of days after which the user must change passwords. If the value is 0 , the password does not expire. -M days Specifies the maximum number of days for which the password is valid.
When the number of days specified by this option plus the number of days specified with the -d option is less than the current day, the user must change passwords before using the account. -W days Specifies the number of days before the password expiration date to warn the user. You can also use the chage command in interactive mode to modify multiple password aging and account details. Use the following command to enter interactive mode: chage <username> The following is a sample interactive session using this command: You can configure a password to expire the first time a user logs in. This forces users to change passwords immediately. Set up an initial password. There are two common approaches to this step: you can either assign a default password, or you can use a null password. To assign a default password, type the following at a shell prompt as root : passwd username To assign a null password instead, use the following command: passwd -d username Warning Using a null password, while convenient, is a highly insecure practice, as any third party can log in first and access the system using the insecure user name. Always make sure that the user is ready to log in before unlocking an account with a null password. Force immediate password expiration by running the following command as root : chage -d 0 username This command sets the value for the date the password was last changed to the epoch (January 1, 1970). This value forces immediate password expiration no matter what password aging policy, if any, is in place. Upon the initial login, the user is prompted for a new password. You can also use the graphical User Manager application to create password aging policies, as follows. Note: you need Administrator privileges to perform this procedure. Click the System menu on the Panel, point to Administration and then click Users and Groups to display the User Manager. Alternatively, type the command system-config-users at a shell prompt. Click the Users tab, and select the required user in the list of users. Click Properties on the toolbar to display the User Properties dialog box (or choose Properties on the File menu). Click the Password Info tab, and select the check box for Enable password expiration . Enter the required value in the Days before change required field, and click OK . Figure 2.1. Specifying password aging options 2.1.5. Locking Inactive Accounts The pam_lastlog PAM module is used to lock out users who have not logged in recently enough, or to display information about the last login attempt of a user. The module does not perform a check on the root account, so it is never locked out. The lastlog command displays the last login of the user, as opposed to the last command, which displays all current and previous login sessions. The commands read respectively from the /var/log/lastlog and /var/log/wtmp files where the data is stored in binary format.
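For illustration, a minimal sketch of querying these records from the command line; the user name john is an example:
lastlog -u john
last john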
To display the number of failed login attempts prior to the last successful login of a user, add, as root, the following line to the session section in the /etc/pam.d/login file: Account locking due to inactivity can be configured to work for the console, GUI, or both: To lock out an account after 10 days of inactivity, add, as root, the following line to the auth section of the /etc/pam.d/login file: To lock out an account for the GNOME desktop environment, add, as root, the following line to the auth section of the /etc/pam.d/gdm file: Note Note that for other desktop environments, the respective files of those environments should be edited. 2.1.6. Customizing Access Control The pam_access PAM module allows an administrator to customize access control based on login names, host or domain names, or IP addresses. By default, the module reads the access rules from the /etc/security/access.conf file if no other is specified. For a complete description of the format of these rules, see the access.conf(5) manual page. By default, in Red Hat Enterprise Linux, pam_access is included in the /etc/pam.d/crond and /etc/pam.d/atd files. To deny the user john from accessing the system from the console and the graphic desktop environment, follow these steps: Include the following line in the account section of both /etc/pam.d/login and /etc/pam.d/gdm-* files: Specify the following rule in the /etc/security/access.conf file: This rule prohibits all logins from user john from any location. To grant access to all users attempting to log in using SSH except the user john from the 1.2.3.4 IP address, follow these steps: Include the following line in the account section of /etc/pam.d/sshd : Specify the following rule in the /etc/security/access.conf file: In order to limit access from other services, the pam_access module should be required in the respective file in the /etc/pam.d directory. It is possible to call the pam_access module for all services that call the system-wide PAM configuration files ( *-auth files in the /etc/pam.d directory) using the following command: Alternatively, you can enable the pam_access module using the Authentication Configuration utility. To start this utility, select System Administration Authentication from the top menu. From the Advanced Options tab, check the Enable local access control option. This will add the pam_access module to the system-wide PAM configuration. 2.1.7. Time-based Restriction of Access The pam_time PAM module is used to restrict access during a certain time of the day. It can also be configured to control access based on specific days of a week, user name, usage of a system service, and more. By default, the module reads the access rules from the /etc/security/time.conf file. For a complete description of the format of these rules, see the time.conf(5) manual page. 
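Each rule in /etc/security/time.conf takes the form services;ttys;users;times, where ! negates an entry. As a minimal illustrative sketch (the service and time window here are examples, not recommendations):
sshd ; * ; !root ; Wk0800-1800
A rule of this shape limits all users except root to using the sshd service on weekdays ( Wk ) between 08:00 and 18:00; the complete syntax is described in the time.conf(5) manual page.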
To restrict all users except the root user from logging in between 05:30 PM and 08:00 AM from Monday through Friday, as well as all day on Saturday and Sunday, follow these steps: Include the following line in the account section of the /etc/pam.d/login file: Specify the following rule in the /etc/security/time.conf file: To allow user john to use the SSH service during working hours and working days only (starting with Monday), follow these steps: Add the following line to the /etc/pam.d/sshd file: Specify the following rule in the /etc/security/time.conf file: Note For these configurations to be applied to the desktop environment, the pam_time module should be included in the corresponding files in the /etc/pam.d directory. 2.1.8. Applying Account Limits The pam_limits PAM module is used to: apply limits to user login sessions, such as maximum simultaneous login sessions per user, specify limits to be set by the ulimit utility, and specify priority to be set by the nice utility. By default, the rules are read from the /etc/security/limits.conf file. For a complete description of the format of these rules, see the limits.conf(5) manual page. Additionally, you can create individual configuration files in the /etc/security/limits.d directory specifically for certain applications or services. By default, the pam_limits module is included in a number of files in the /etc/pam.d/ directory. A default limit of user processes is defined in the /etc/security/limits.d/90-nproc.conf file to prevent malicious denial of service attacks, such as fork bombs. To change the default limit of user processes to 50, change the value in the /etc/security/limits.d/90-nproc.conf file, following the format used in the file: Example 2.2. Specifying a maximum number of logins per user To set a maximum number of simultaneous logins for each user in a group called office , specify the following rule in the /etc/security/limits.conf file: The following line should be present by default in /etc/pam.d/system-auth . If not, add it manually. 2.1.9. Administrative Controls When administering a home machine, the user must perform some tasks as the root user, or must acquire effective root privileges through a setuid program, such as sudo or su . A setuid program is one that operates with the user ID ( UID ) of the program's owner rather than the user operating the program. Such programs are denoted by an s in the owner section of a long format listing, as in the following example: Note The s may be upper case or lower case. If it appears as upper case, it means that the underlying permission bit has not been set. For the system administrators of an organization, however, choices must be made as to how much administrative access users within the organization should have to their machine. Through a PAM module called pam_console.so , some activities normally reserved only for the root user, such as rebooting and mounting removable media are allowed for the first user that logs in at the physical console (see Managing Single Sign-On and Smart Cards for more information about the pam_console.so module.) However, other important system administration tasks, such as altering network settings, configuring a new mouse, or mounting network devices, are not possible without administrative privileges. As a result, system administrators must decide how much access the users on their network should receive. 2.1.9.1. Allowing Root Access If the users within an organization are trusted and computer-literate, then allowing them root access may not be an issue. 
Allowing root access by users means that minor activities, like adding devices or configuring network interfaces, can be handled by the individual users, leaving system administrators free to deal with network security and other important issues. On the other hand, giving root access to individual users can lead to the following issues: Machine Misconfiguration - Users with root access can misconfigure their machines and require assistance to resolve issues. Even worse, they might open up security holes without knowing it. Running Insecure Services - Users with root access might run insecure servers on their machine, such as FTP or Telnet, potentially putting user names and passwords at risk. These services transmit this information over the network in plain text. Running Email Attachments As Root - Although rare, email viruses that affect Linux do exist. The only time they are a threat, however, is when they are run by the root user. Keeping the audit trail intact - Because the root account is often shared by multiple users, so that multiple system administrators can maintain the system, it is impossible to figure out which of those users was root at a given time. When using separate logins, the account a user logs in with, as well as a unique number for session tracking purposes, is put into the task structure, which is inherited by every process that the user starts. When using concurrent logins, the unique number can be used to trace actions to specific logins. When an action generates an audit event, it is recorded with the login account and the session associated with that unique number. Use the aulast command to view these logins and sessions. The --proof option of the aulast command can be used to suggest a specific ausearch query to isolate auditable events generated by a particular session. 2.1.9.2. Disallowing Root Access If an administrator is uncomfortable allowing users to log in as root for these or other reasons, the root password should be kept secret, and access to runlevel one or single user mode should be disallowed through boot loader password protection (see Section 2.1.2.2, "Boot Loader Passwords" for more information on this topic.) The following are four different ways that an administrator can further ensure that root logins are disallowed: Changing the root shell To prevent users from logging in directly as root, the system administrator can set the root account's shell to /sbin/nologin in the /etc/passwd file. Table 2.2. Disabling the Root Shell Effects Does Not Affect Prevents access to the root shell and logs any such attempts. The following programs are prevented from accessing the root account: login gdm kdm xdm su ssh scp sftp Programs that do not require a shell, such as FTP clients, mail clients, and many setuid programs. The following programs are not prevented from accessing the root account: sudo FTP clients Email clients Disabling root access through any console device (tty) To further limit access to the root account, administrators can disable root logins at the console by editing the /etc/securetty file. This file lists all devices the root user is allowed to log into. If the file does not exist at all, the root user can log in through any communication device on the system, whether through the console or a raw network interface. This is dangerous, because a user can log in to their machine as root through Telnet, which transmits the password in plain text over the network. 
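For illustration, a stock /etc/securetty on Red Hat Enterprise Linux 6 contains entries similar to the following (one device per line; the exact list varies between releases):
console
vc/1
vc/2
tty1
tty2
Removing an entry from this file prevents root logins on the corresponding device, while leaving the file empty blocks root logins on all of them.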
By default, Red Hat Enterprise Linux's /etc/securetty file only allows the root user to log in at the console physically attached to the machine. To prevent the root user from logging in, remove the contents of this file by typing the following command at a shell prompt as root: To enable securetty support in the KDM, GDM, and XDM login managers, add the following line: to the files listed below: /etc/pam.d/gdm /etc/pam.d/gdm-autologin /etc/pam.d/gdm-fingerprint /etc/pam.d/gdm-password /etc/pam.d/gdm-smartcard /etc/pam.d/kdm /etc/pam.d/kdm-np /etc/pam.d/xdm Warning A blank /etc/securetty file does not prevent the root user from logging in remotely using the OpenSSH suite of tools because the console is not opened until after authentication. Table 2.3. Disabling Root Logins Effects Does Not Affect Prevents access to the root account using the console or the network. The following programs are prevented from accessing the root account: login gdm kdm xdm Other network services that open a tty Programs that do not log in as root, but perform administrative tasks through setuid or other mechanisms. The following programs are not prevented from accessing the root account: su sudo ssh scp sftp Disabling root SSH logins To prevent root logins using the SSH protocol, edit the SSH daemon's configuration file, /etc/ssh/sshd_config , and change the line that reads: to read as follows: Table 2.4. Disabling Root SSH Logins Effects Does Not Affect Prevents root access using the OpenSSH suite of tools. The following programs are prevented from accessing the root account: ssh scp sftp Programs that are not part of the OpenSSH suite of tools. Using PAM to limit root access to services PAM, through the /lib/security/pam_listfile.so module, allows great flexibility in denying specific accounts. The administrator can use this module to reference a list of users who are not allowed to log in. To limit root access to a system service, edit the file for the target service in the /etc/pam.d/ directory and make sure the pam_listfile.so module is required for authentication. The following is an example of how the module is used for the vsftpd FTP server in the /etc/pam.d/vsftpd PAM configuration file (the \ character at the end of the first line is not necessary if the directive is on a single line): This instructs PAM to consult the /etc/vsftpd.ftpusers file and deny access to the service for any listed user. The administrator can change the name of this file, and can keep separate lists for each service or use one central list to deny access to multiple services. If the administrator wants to deny access to multiple services, a similar line can be added to the PAM configuration files, such as /etc/pam.d/pop and /etc/pam.d/imap for mail clients, or /etc/pam.d/ssh for SSH clients. For more information about PAM, see the chapter titled Using Pluggable Authentication Modules (PAM) in the Red Hat Enterprise Linux Managing Single Sign-On and Smart Cards guide. Table 2.5. Disabling Root Using PAM Effects Does Not Affect Prevents root access to network services that are PAM aware. The following services are prevented from accessing the root account: login gdm kdm xdm ssh scp sftp FTP clients Email clients Any PAM aware services Programs and services that are not PAM aware. 2.1.9.3. Enabling Automatic Logouts When the user is logged in as root , an unattended login session may pose a significant security risk. 
To reduce this risk, you can configure the system to automatically log out idle users after a fixed period of time: Make sure the screen package is installed. You can do so by running the following command as root : ~]# yum install screen For more information on how to install packages in Red Hat Enterprise Linux, see the Installing Packages section in the Red Hat Enterprise Linux 6 Deployment Guide . As root , add the following line at the beginning of the /etc/profile file to make sure the processing of this file cannot be interrupted: trap "" 1 2 3 15 Add the following lines at the end of the /etc/profile file to start a screen session each time a user logs in to a virtual console or remotely: SCREENEXEC="screen" if [ -w $(tty) ]; then trap "exec $SCREENEXEC" 1 2 3 15 echo -n 'Starting session in 10 seconds' sleep 10 exec $SCREENEXEC fi Note that each time a new session starts, a message will be displayed and the user will have to wait ten seconds. To adjust the time to wait before starting a session, change the value after the sleep command. Add the following lines to the /etc/screenrc configuration file to close the screen session after a given period of inactivity: idle 120 quit autodetach off This will set the time limit to 120 seconds. To adjust this limit, change the value after the idle directive. Alternatively, you can configure the system to only lock the session by using the following lines instead: idle 120 lockscreen autodetach off This way, a password will be required to unlock the session. The changes take effect the next time a user logs in to the system. 2.1.9.4. Limiting Root Access Rather than completely denying access to the root user, the administrator may want to allow access only by setuid programs, such as su or sudo . For more information on su and sudo , see the Red Hat Enterprise Linux 6 Deployment Guide and the su(1) and sudo(8) man pages. 2.1.9.5. Account Locking In Red Hat Enterprise Linux 6, the pam_faillock PAM module allows system administrators to lock out user accounts after a specified number of failed attempts. Limiting user login attempts serves mainly as a security measure that aims to prevent possible brute force attacks targeted at obtaining a user's account password. With the pam_faillock module, failed login attempts are stored in a separate file for each user in the /var/run/faillock directory. Note The order of lines in the failed attempt log files is important. Any change in this order can lock all user accounts, including the root user account when the even_deny_root option is used. Follow these steps to configure account locking: To lock out any non-root user after three unsuccessful attempts and unlock that user after 10 minutes, add the following lines to the auth section of the /etc/pam.d/system-auth and /etc/pam.d/password-auth files: Add the following line to the account section of both files specified in the previous step: To apply account locking for the root user as well, add the even_deny_root option to the pam_faillock entries in the /etc/pam.d/system-auth and /etc/pam.d/password-auth files: When user john attempts to log in for the fourth time after failing to log in three times previously, his account is locked upon the fourth attempt: To prevent the system from locking users out even after multiple failed logins, add the following line just above the line where pam_faillock is called for the first time in both /etc/pam.d/system-auth and /etc/pam.d/password-auth . Also replace user1 , user2 , user3 with the actual user names. 
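The whitelist line referred to above takes the following form (it is reproduced verbatim in the configuration listing at the end of this section; user1 , user2 , and user3 are placeholders):
auth [success=1 default=ignore] pam_succeed_if.so user in user1:user2:user3
When the pam_succeed_if test matches one of the listed users, the success=1 action causes the pam_faillock call that immediately follows to be skipped for that user.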
To view the number of failed attempts per user, run, as root, the following command: To unlock a user's account, run, as root, the following command: When modifying the authentication configuration using the authconfig utility, the system-auth and password-auth files are overwritten with the settings from the authconfig utility. This can be avoided by creating symbolic links in place of the configuration files, which authconfig recognizes and does not overwrite. In order to use custom settings in the configuration files and authconfig simultaneously, configure account locking using the following steps: Rename the configuration files: Create the following symbolic links: The /etc/pam.d/system-auth-local file should contain the following lines: The /etc/pam.d/password-auth-local file should contain the following lines: For more information on various pam_faillock configuration options, see the pam_faillock(8) man page. 2.1.10. Session Locking Users may need to leave their workstation unattended for a number of reasons during everyday operation. This could present an opportunity for an attacker to physically access the machine, especially in environments with insufficient physical security measures (see Section 1.1.3.1, "Physical Controls" ). Laptops are especially exposed since their mobility interferes with physical security. You can alleviate these risks by using session locking features which prevent access to the system until a correct password is entered. Note The main advantage of locking the screen instead of logging out is that a lock allows the user's processes (such as file transfers) to continue running. Logging out would stop these processes. 2.1.10.1. Locking GNOME Using gnome-screensaver-command The default desktop environment for Red Hat Enterprise Linux 6, GNOME, includes a feature which allows users to lock their screen at any time. There are several ways to activate the lock: Press the key combination specified in System Preferences Keyboard Shortcuts Desktop Lock screen . The default combination is Ctrl + Alt + L . Select System Lock screen on the panel. Execute the following command from a command line interface: All of the techniques described have the same result: the screen saver is activated and the screen is locked. Users can then press any key to deactivate the screen saver, enter their password and continue working. Keep in mind that this function requires the gnome-screensaver process to be running. You can check whether this is the case by using any command which provides information about processes. For example, execute the following command from the terminal: If the gnome-screensaver process is currently running, its process ID (PID) will be displayed on the screen after executing the command. If the process is not currently running, the command will provide no output at all. Refer to the gnome-screensaver-command(1) man page for additional information. Important The means of locking the screen described above rely on manual activation. Administrators should therefore advise their users to lock their computers every time they leave them unattended, even if only for a short period of time. 2.1.10.1.1. Automatic Lock on Screen Saver Activation As the name gnome-screensaver-command suggests, the locking functionality is tied to GNOME's screen saver. It is possible to tie the lock to the screen saver's activation, locking the workstation every time it is left unattended for a set period of time. 
This function is activated by default with a five minute timeout. To change the automatic locking settings, select System Preferences Screensaver on the main panel. This opens a window which allows setting the timeout period (the Regard the computer as idle after slider) and activating or deactivating the automatic lock (the Lock screen when screensaver is active check box). Figure 2.2. Changing the screen saver preferences Note Disabling the Activate screensaver when computer is idle option in the Screensaver Preferences dialog prevents the screen saver from starting automatically. Automatic locking is therefore disabled as well, but it is still possible to lock the workstation manually using the techniques described in Section 2.1.10.1, "Locking GNOME Using gnome-screensaver-command" . 2.1.10.1.2. Remote Session Locking You can also lock a GNOME session remotely using ssh as long as the target workstation accepts connections over this protocol. To remotely lock the screen on a machine you have access to, execute the following command: Replace <username> with your user name and <server> with the IP address of the workstation you want to lock. Refer to Section 3.2.2, "Secure Shell" for more information regarding ssh . 2.1.10.2. Locking Virtual Consoles Using vlock Users may also need to lock a virtual console. This can be done using a utility called vlock . To install this utility, execute the following command as root: After installation, any console session can be locked using the vlock command without any additional parameters. This locks the currently active virtual console session while still allowing access to the others. To prevent access to all virtual consoles on the workstation, execute the following: In this case, vlock locks the currently active console and the -a option prevents switching to other virtual consoles. Refer to the vlock(1) man page for additional information. Important There are several known issues relevant to the version of vlock currently available for Red Hat Enterprise Linux 6: The program does not currently allow unlocking consoles using the root password. Additional information can be found in BZ# 895066 . Locking a console does not clear the screen and scrollback buffer, allowing anyone with physical access to the workstation to view previously issued commands and any output displayed in the console. Refer to BZ# 807369 for more information. 2.1.11. Available Network Services While user access to administrative controls is an important issue for system administrators within an organization, monitoring which network services are active is of paramount importance to anyone who administers and operates a Linux system. Many services under Red Hat Enterprise Linux 6 behave as network servers. If a network service is running on a machine, then a server application (called a daemon ), is listening for connections on one or more network ports. Each of these servers should be treated as a potential avenue of attack. 2.1.11.1. Risks To Services Network services can pose many risks for Linux systems. Below is a list of some of the primary issues: Denial of Service Attacks (DoS) - By flooding a service with requests, a denial of service attack can render a system unusable as it tries to log and answer each request. Distributed Denial of Service Attack (DDoS) - A type of DoS attack which uses multiple compromised machines (often numbering in the thousands or more) to direct a coordinated attack on a service, flooding it with requests and making it unusable. 
Script Vulnerability Attacks - If a server is using scripts to execute server-side actions, as Web servers commonly do, an attacker can target improperly written scripts. These script vulnerability attacks can lead to a buffer overflow condition or allow the attacker to alter files on the system. Buffer Overflow Attacks - Services that bind to ports numbered 0 through 1023 must run as an administrative user. If the application has an exploitable buffer overflow, an attacker could gain access to the system as the user running the daemon. Because exploitable buffer overflows exist, attackers use automated tools to identify systems with vulnerabilities, and once they have gained access, they use automated rootkits to maintain their access to the system. Note The threat of buffer overflow vulnerabilities is mitigated in Red Hat Enterprise Linux by ExecShield , an executable memory segmentation and protection technology supported by x86-compatible uni- and multi-processor kernels. ExecShield reduces the risk of buffer overflow by separating virtual memory into executable and non-executable segments. Any program code that tries to execute outside of the executable segment (such as malicious code injected from a buffer overflow exploit) triggers a segmentation fault and terminates. ExecShield also includes support for No eXecute ( NX ) technology on AMD64 platforms and eXecute Disable ( XD ) technology on Itanium and Intel 64 systems. These technologies work in conjunction with ExecShield to prevent malicious code from running in the executable portion of virtual memory with a granularity of 4KB of executable code, lowering the risk of attack from buffer overflow exploits. Important To limit exposure to attacks over the network, disable all services that are unused. 2.1.11.2. Identifying and Configuring Services To enhance security, most network services installed with Red Hat Enterprise Linux are turned off by default. There are, however, some notable exceptions: cupsd - The default print server for Red Hat Enterprise Linux. lpd - An alternative print server. xinetd - A super server that controls connections to a range of subordinate servers, such as gssftp and telnet . sendmail - The Sendmail Mail Transport Agent ( MTA ) is enabled by default, but only listens for connections from the localhost . sshd - The OpenSSH server, which is a secure replacement for Telnet. When determining whether to leave these services running, it is best to use common sense and avoid taking any risks. For example, if a printer is not available, do not leave cupsd running. The same is true for portmap . If you do not mount NFSv3 volumes or use NIS (the ypbind service), then portmap should be disabled. Figure 2.3. Services Configuration Tool If unsure of the purpose for a particular service, the Services Configuration Tool has a description field, illustrated in Figure 2.3, "Services Configuration Tool" , that provides additional information. Checking which network services are available to start at boot time is not sufficient. It is recommended to also check which ports are open and listening. Refer to Section 2.2.9, "Verifying Which Ports Are Listening" for more information. 2.1.11.3. Insecure Services Potentially, any network service is insecure. This is why turning off unused services is so important. Exploits for services are routinely revealed and patched, making it very important to regularly update packages associated with any network service. 
Refer to Section 1.5, "Security Updates" for more information. Some network protocols are inherently more insecure than others. These include any services that: Transmit Usernames and Passwords Over a Network Unencrypted - Many older protocols, such as Telnet and FTP, do not encrypt the authentication session and should be avoided whenever possible. Transmit Sensitive Data Over a Network Unencrypted - Many protocols transmit data over the network unencrypted. These protocols include Telnet, FTP, HTTP, and SMTP. Many network file systems, such as NFS and SMB, also transmit information over the network unencrypted. It is the user's responsibility when using these protocols to limit what type of data is transmitted. Remote memory dump services, like netdump , transmit the contents of memory over the network unencrypted. Memory dumps can contain passwords or, even worse, database entries and other sensitive information. Other services like finger and rwhod reveal information about users of the system. Examples of inherently insecure services include rlogin , rsh , telnet , and vsftpd . All remote login and shell programs ( rlogin , rsh , and telnet ) should be avoided in favor of SSH. Refer to Section 2.1.13, "Security Enhanced Communication Tools" for more information about sshd . FTP is not as inherently dangerous to the security of the system as remote shells, but FTP servers must be carefully configured and monitored to avoid problems. Refer to Section 2.2.6, "Securing FTP" for more information about securing FTP servers. Services that should be carefully implemented and behind a firewall include: finger authd (this was called identd in Red Hat Enterprise Linux releases.) netdump netdump-server nfs rwhod sendmail smb (Samba) yppasswdd ypserv ypxfrd More information on securing network services is available in Section 2.2, "Server Security" . The section discusses tools available to set up a simple firewall. 2.1.12. Personal Firewalls After the necessary network services are configured, it is important to implement a firewall. Important Configure the necessary services and implement a firewall before connecting to the Internet or any other network that you do not trust. Firewalls prevent network packets from accessing the system's network interface. If a request is made to a port that is blocked by a firewall, the request is ignored. If a service is listening on one of these blocked ports, it does not receive the packets and is effectively disabled. For this reason, ensure that you block access to ports not in use when configuring a firewall, while not blocking access to ports used by configured services. For most users, the best tool for configuring a simple firewall is the graphical firewall configuration tool which includes Red Hat Enterprise Linux: the Firewall Configuration Tool ( system-config-firewall ). This tool creates broad iptables rules for a general-purpose firewall using a control panel interface. Refer to Section 2.8.2, "Basic Firewall Configuration" for more information about using this application and its available options. For advanced users and server administrators, manually configuring a firewall with iptables is preferable. Refer to Section 2.8, "Firewalls" for more information. Refer to Section 2.8.9, "IPTables" for a comprehensive guide to the iptables command. 2.1.13. Security Enhanced Communication Tools As the size and popularity of the Internet has grown, so has the threat of communication interception. 
Over the years, tools have been developed to encrypt communications as they are transferred over the network. Red Hat Enterprise Linux 6 includes two basic tools that use high-level, public-key-cryptography-based encryption algorithms to protect information as it travels over the network. OpenSSH - A free implementation of the SSH protocol for encrypting network communication. GNU Privacy Guard (GPG) - A free implementation of the PGP (Pretty Good Privacy) encryption application for encrypting data. OpenSSH is a safer way to access a remote machine and replaces older, unencrypted services like telnet and rsh . OpenSSH includes a network service called sshd and three command line client applications: ssh - A secure remote console access client. scp - A secure remote copy command. sftp - A secure pseudo-ftp client that allows interactive file transfer sessions. Refer to Section 3.2.2, "Secure Shell" for more information regarding OpenSSH. Important Although the sshd service is inherently secure, the service must be kept up-to-date to prevent security threats. Refer to Section 1.5, "Security Updates" for more information. GPG is one way to ensure private email communication. It can be used both to email sensitive data over public networks and to protect sensitive data on hard drives. 2.1.14. Enforcing Read-Only Mounting of Removable Media To enforce read-only mounting of removable media (such as USB flash disks), the administrator can use a udev rule to detect removable media and configure them to be mounted read-only using the blockdev utility. Starting with Red Hat Enterprise Linux 6.7, a special parameter can also be passed to the udisks disk manager to force read-only mounting of file systems. While the udev rule that triggers the blockdev utility is sufficient for enforcing read-only mounting of physical media, the udisks parameter can be used to enforce read-only mounting of filesystems on read-write mounted media. Using blockdev to Force Read-Only Mounting of Removable Media To force all removable media to be mounted read-only, create a new udev configuration file named, for example, 80-readonly-removables.rules in the /etc/udev/rules.d/ directory with the following content: The above udev rule ensures that any newly connected removable block (storage) device is automatically configured as read-only using the blockdev utility. Using udisks to Force Read-Only Mounting of Filesystems To force all file systems to be mounted read-only, a special udisks parameter needs to be set through udev . Create a new udev configuration file named, for example, 80-udisks.rules in the /etc/udev/rules.d/ directory with the following content (or add the following lines to this file if it already exists): Note that a default 80-udisks.rules file is installed with the udisks package in the /lib/udev/rules.d/ directory. This file contains the above rules, but they are commented out. The above udev rules instruct the udisks disk manager to only allow read-only mounting of file systems. Also, the noexec parameter forbids direct execution of any binaries on the mounted file systems. This policy is enforced regardless of the way the actual physical device is mounted. That is, file systems are mounted read-only even on read-write mounted devices. Applying New udev and udisks Settings For these settings to take effect, the new udev rules need to be applied. The udev service automatically detects changes to its configuration files, but new settings are not applied to already existing devices. 
Only newly connected devices are affected by the new settings. Therefore, you need to unmount and unplug all connected removable media to ensure that the new settings are applied to them when they are plugged in. To force udev to re-apply all rules to already existing devices, enter the following command as root : Note that forcing udev to re-apply all rules using the above command does not affect any storage devices that are already mounted. To force udev to reload all rules (in case the new rules are not automatically detected for some reason), use the following command: [3] Since system BIOSes differ between manufacturers, some may not support password protection of either type, while others may support one type but not the other. [4] GRUB also accepts unencrypted passwords, but it is recommended that an MD5 hash be used for added security.
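Returning to the removable-media rules above: as an optional verification sketch after re-plugging a device, the read-only flag set by the blockdev rule can be inspected directly; the device node /dev/sdb is only an example:
~]# blockdev --getro /dev/sdb
1
A value of 1 indicates that the kernel-level read-only flag is set for the device.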
[ "password --md5 <password-hash>", "title DOS lock", "title DOS lock password --md5 <password-hash>", "PROMPT=no", "randomword1 randomword2 randomword3 randomword4", "password required pam_cracklib.so retry=3 minlen=8 minclass=4", "password required pam_cracklib.so retry=3 maxsequence=3 maxrepeat=3", "~]# chage juan Changing the aging information for juan Enter the new value, or press ENTER for the default Minimum Password Age [0]: 10 Maximum Password Age [99999]: 90 Last Password Change (YYYY-MM-DD) [2006-08-18]: Password Expiration Warning [7]: Password Inactive [-1]: Account Expiration Date (YYYY-MM-DD) [1969-12-31]:", "session optional pam_lastlog.so silent noupdate showfailed", "auth required pam_lastlog.so inactive=10", "auth required pam_lastlog.so inactive=10", "account required pam_access.so", "- : john : ALL", "account required pam_access.so", "+ : ALL EXCEPT john : 1.2.3.4", "authconfig --enablepamaccess --update", "account required pam_time.so", "login ; tty* ; ALL ; !root ; !Wk1730-0800", "account required pam_time.so", "sshd ; tty* ; john ; Wk0800-1730", "* soft nproc 50", "@office - maxlogins 4", "session required pam_limits.so", "~]USD ls -l /bin/su -rwsr-xr-x. 1 root root 34904 Mar 10 2011 /bin/su", "echo > /etc/securetty", "auth [user_unknown=ignore success=ok ignore=ignore default=bad] pam_securetty.so", "#PermitRootLogin yes", "PermitRootLogin no", "auth required /lib/security/pam_listfile.so item=user sense=deny file=/etc/vsftpd.ftpusers onerr=succeed", "trap \"\" 1 2 3 15", "SCREENEXEC=\"screen\" if [ -w USD(tty) ]; then trap \"exec USDSCREENEXEC\" 1 2 3 15 echo -n 'Starting session in 10 seconds' sleep 10 exec USDSCREENEXEC fi", "idle 120 quit autodetach off", "idle 120 lockscreen autodetach off", "auth required pam_faillock.so preauth silent audit deny=3 unlock_time=600 auth sufficient pam_unix.so nullok try_first_pass auth [default=die] pam_faillock.so authfail audit deny=3 unlock_time=600", "account required pam_faillock.so", "auth required pam_faillock.so preauth silent audit deny=3 even_deny_root unlock_time=600 auth sufficient pam_unix.so nullok try_first_pass auth [default=die] pam_faillock.so authfail audit deny=3 even_deny_root unlock_time=600 account required pam_faillock.so", "[user@localhost ~]USD su - john Account locked due to 3 failed logins su: incorrect password", "auth [success=1 default=ignore] pam_succeed_if.so user in user1:user2:user3", "faillock john: When Type Source Valid 2013-03-05 11:44:14 TTY pts/0 V", "faillock --user <username> --reset", "~]# mv /etc/pam.d/system-auth /etc/pam.d/system-auth-local ~]# mv /etc/pam.d/password-auth /etc/pam.d/password-auth-local", "~]# ln -s /etc/pam.d/system-auth-local /etc/pam.d/system-auth ~]# ln -s /etc/pam.d/password-auth-local /etc/pam.d/password-auth", "auth required pam_faillock.so preauth silent audit deny=3 unlock_time=600 auth include system-auth-ac auth [default=die] pam_faillock.so authfail silent audit deny=3 unlock_time=600 account required pam_faillock.so account include system-auth-ac password include system-auth-ac session include system-auth-ac", "auth required pam_faillock.so preauth silent audit deny=3 unlock_time=600 auth include password-auth-ac auth [default=die] pam_faillock.so authfail silent audit deny=3 unlock_time=600 account required pam_faillock.so account include password-auth-ac password include system-auth-ac session include system-auth-ac", "gnome-screensaver-command -l", "pidof gnome-screensaver", "ssh -X <username> @ <server> \"export DISPLAY=:0; 
gnome-screensaver-command -l\"", "~]# yum install vlock", "vlock -a", "SUBSYSTEM==\"block\",ATTRS{removable}==\"1\",RUN{program}=\"/sbin/blockdev --setro %N\"", "ENV{UDISKS_MOUNT_OPTIONS}=\"ro,noexec\" ENV{UDISKS_MOUNT_OPTIONS_ALLOW}=\"noexec,nodev,nosuid,atime,noatime,nodiratime,ro,sync,dirsync\"", "~# udevadm trigger", "~# udevadm control --reload" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/chap-Security_Guide-Securing_Your_Network
17.15. Creating Tunnels
17.15. Creating Tunnels This section will demonstrate how to implement different tunneling scenarios. 17.15.1. Creating Multicast Tunnels A multicast group is set up to represent a virtual network. Any guest virtual machines whose network devices are in the same multicast group can talk to each other even across host physical machines. This mode is also available to unprivileged users. There is no default DNS or DHCP support and no outgoing network access. To provide outgoing network access, one of the guest virtual machines should have a second NIC which is connected to one of the first four network types thus providing appropriate routing. The multicast protocol is compatible with the guest virtual machine user mode. Note that the source address that you provide must be from the multicast address block. To create a multicast tunnel place the following XML details into the <devices> element: ... <devices> <interface type='mcast'> <mac address='52:54:00:6d:90:01'/> <source address='230.0.0.1' port='5558'/> </interface> </devices> ... Figure 17.28. Multicast tunnel domain XML example 17.15.2. Creating TCP Tunnels A TCP client-server architecture provides a virtual network. In this configuration, one guest virtual machine provides the server end of the network while all other guest virtual machines are configured as clients. All network traffic is routed between the guest virtual machine clients via the guest virtual machine server. This mode is also available for unprivileged users. Note that this mode does not provide default DNS or DHCP support and it does not provide outgoing network access. To provide outgoing network access, one of the guest virtual machines should have a second NIC which is connected to one of the first four network types thus providing appropriate routing. To create a TCP tunnel place the following XML details into the <devices> element: ... <devices> <interface type='server'> <mac address='52:54:00:22:c9:42'/> <source address='192.168.0.1' port='5558'/> </interface> ... <interface type='client'> <mac address='52:54:00:8b:c9:51'/> <source address='192.168.0.1' port='5558'/> </interface> </devices> ... Figure 17.29. TCP tunnel domain XML example
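As a usage sketch, one way to apply such an <interface> definition is to save it to a file and attach it to a guest with the virsh attach-device command; the domain name guest1 and the file name mcast-if.xml are illustrative only:
# virsh attach-device guest1 mcast-if.xml
Alternatively, the XML can be added directly to the guest's <devices> section with virsh edit guest1 , in which case the change is picked up the next time the guest is started.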
[ "<devices> <interface type='mcast'> <mac address='52:54:00:6d:90:01'> <source address='230.0.0.1' port='5558'/> </interface> </devices>", "<devices> <interface type='server'> <mac address='52:54:00:22:c9:42'> <source address='192.168.0.1' port='5558'/> </interface> <interface type='client'> <mac address='52:54:00:8b:c9:51'> <source address='192.168.0.1' port='5558'/> </interface> </devices>" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-virtual_networking-creating_tunnels
Chapter 4. Projects
Chapter 4. Projects Projects are a logical collection of rulebooks. They must be a Git repository, and only the HTTP protocol is supported. The rulebooks of a project must be located in the path defined for Event-Driven Ansible content in Ansible collections: /extensions/eda/rulebooks at the root of the project (see the example repository layout at the end of this chapter). Important To meet high availability demands, Event-Driven Ansible controller shares centralized Redis (REmote DIctionary Server) with the Ansible Automation Platform UI. When Redis is unavailable, you will not be able to create or sync projects. 4.1. Setting up a new project You can set up projects to manage and store your rulebooks in Event-Driven Ansible controller. Prerequisites You are logged in to the Ansible Automation Platform Dashboard as a Content Consumer. You have set up a credential, if necessary. For more information, see the Setting up credentials section. You have an existing repository containing rulebooks that are integrated with playbooks contained in a repository to be used by automation controller. Procedure Log in to the Ansible Automation Platform Dashboard. Navigate to Automation Decisions Projects . Click Create project . Insert the following: Name Enter a project name. Description This field is optional. Source control type Git is the only source control type available for use. This field is optional. Source control URL Enter the Git, SSH, or HTTP[S] protocol address of a repository, such as GitHub or GitLab. This field is not editable. Note This field accepts an SSH private key or private key passphrase. To enable the use of these private keys, your project URL must begin with git@ . Proxy This is used to access HTTP or HTTPS servers. This field is optional. Source control branch/tag/commit This is the branch to check out. In addition to branches, you can input tags, commit hashes, and arbitrary refs. Some commit hashes and refs may not be available unless you also provide a custom refspec. This field is optional. Source control refspec A refspec to fetch (passed to the Ansible git module). This parameter allows access to references via the branch field that are not otherwise available. This field is optional. For more information, see Examples . Source control credential You must have this credential to utilize the source control URL. This field is optional. Content signature validation credential Enable content signing to verify that the content has remained secure when a project is synced. If the content has been tampered with, the job will not run. This field is optional. Options The Verify SSL option is enabled by default. Enabling this option verifies the SSL with HTTPS when the project is imported. Note You can disable this option if you have a local repository that uses self-signed certificates. Select Create project . Your project is now created and can be managed in the Projects page. After saving the new project, the project's details page is displayed. From there or the Projects list view, you can edit or delete it. 4.2. Projects list view On the Projects page, you can view the projects that you have created along with the Status and the Git hash . Note If a rulebook changes in source control, you can re-sync a project by selecting the sync icon next to the project from the Projects list view. The Git hash updates represent the latest commit on that repository. An activation must be restarted or recreated if you want to use the updated project. 4.3. Editing a project Procedure From the Projects list view, select the More Actions icon ... next to the desired project. 
The Edit page is displayed. Enter the required changes and select Save project . 4.4. Deleting a project If you need to delete a project, the Event-Driven Ansible controller interface provides multiple options. Procedure To delete a project, complete one of the following: From the Projects list view, select the checkbox next to the desired project, and click the More Actions icon ... from the page menu. From the Projects list view, click the More Actions icon ... next to the desired project. Select Delete project . In the Permanently delete projects window, select Yes, I confirm that I want to delete this project . Select Delete project .
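As noted at the start of this chapter, Event-Driven Ansible controller expects the rulebooks of a project to live under extensions/eda/rulebooks at the root of the repository. A minimal example layout (the rulebook file names are illustrative):
extensions/eda/rulebooks/hello_events.yml
extensions/eda/rulebooks/webhook_example.yml
Playbooks and other supporting content can live elsewhere in the repository; only the rulebooks need to follow this path convention.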
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/using_automation_decisions/eda-projects
2.2. Creating a Hierarchy and Attaching Subsystems
2.2. Creating a Hierarchy and Attaching Subsystems Warning The following instructions, which cover creating a new hierarchy and attaching subsystems to it, assume that cgroups are not already configured on your system. In this case, these instructions will not affect the operation of the system. Changing the tunable parameters in a cgroup with tasks, however, can immediately affect those tasks. This guide alerts you the first time it illustrates changing a tunable cgroup parameter that can affect one or more tasks. On a system on which cgroups are already configured (either manually, or by the cgconfig service) these commands fail unless you first unmount existing hierarchies, which affects the operation of the system. Do not experiment with these instructions on production systems. To create a hierarchy and attach subsystems to it, edit the mount section of the /etc/cgconfig.conf file as root. Entries in the mount section have the following format: When cgconfig starts, it will create the hierarchy and attach the subsystems to it. The following example creates a hierarchy called cpu_and_mem and attaches the cpu , cpuset , cpuacct , and memory subsystems to it. Alternative method You can also use shell commands and utilities to create hierarchies and attach subsystems to them. Create a mount point for the hierarchy as root. Include the name of the cgroup in the mount point: For example: Next, use the mount command to mount the hierarchy and simultaneously attach one or more subsystems. For example: Where subsystems is a comma-separated list of subsystems and name is the name of the hierarchy. Brief descriptions of all available subsystems are listed in Available Subsystems in Red Hat Enterprise Linux , and Chapter 3, Subsystems and Tunable Parameters provides a detailed reference. Example 2.3. Using the mount command to attach subsystems In this example, a directory named /cgroup/cpu_and_mem already exists and will serve as the mount point for the hierarchy that you create. Attach the cpu , cpuset , and memory subsystems to a hierarchy named cpu_and_mem , and mount the cpu_and_mem hierarchy on /cgroup/cpu_and_mem : You can list all available subsystems along with their current mount points (i.e. where the hierarchy they are attached to is mounted) with the lssubsys [3] command: This output indicates that: the cpu , cpuset , and memory subsystems are attached to a hierarchy mounted on /cgroup/cpu_and_mem , and the net_cls , ns , cpuacct , devices , freezer , and blkio subsystems are as yet unattached to any hierarchy, as illustrated by the lack of a corresponding mount point. [3] The lssubsys command is one of the utilities provided by the libcgroup package. You have to install libcgroup to use it: refer to Chapter 2, Using Control Groups if you are unable to run lssubsys .
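Once the hierarchy is mounted, child cgroups can be created and tasks moved into them with ordinary shell commands. The following sketch assumes the cpu_and_mem hierarchy mounted in the examples above; the group name group1 is arbitrary:
~]# mkdir /cgroup/cpu_and_mem/group1
~]# echo $$ > /cgroup/cpu_and_mem/group1/tasks
The first command creates a child cgroup, and the second moves the current shell into it by writing its PID ( $$ ) to the group's tasks file.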
[ "subsystem = /cgroup/ hierarchy ;", "mount { cpuset = /cgroup/cpu_and_mem; cpu = /cgroup/cpu_and_mem; cpuacct = /cgroup/cpu_and_mem; memory = /cgroup/cpu_and_mem; }", "~]# mkdir /cgroup/ name", "~]# mkdir /cgroup/cpu_and_mem", "~]# mount -t cgroup -o subsystems name /cgroup/ name", "~]# mount -t cgroup -o cpu,cpuset,memory cpu_and_mem /cgroup/cpu_and_mem", "~]# lssubsys -am cpu,cpuset,memory /cgroup/cpu_and_mem net_cls ns cpuacct devices freezer blkio" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/resource_management_guide/sec-Creating_a_Hierarchy_and_Attaching_Subsystems
Chapter 2. Setting up a project and storage
Chapter 2. Setting up a project and storage 2.1. Navigating to the OpenShift AI dashboard Procedure How you open the OpenShift AI dashboard depends on your OpenShift environment: If you are using the Red Hat Developer Sandbox : After you log in to the Sandbox, click Getting Started Available services , and then, in the Red Hat OpenShift AI card, click Launch . If you are using your own OpenShift cluster : After you log in to the OpenShift console, click the application launcher icon on the header. When prompted, log in to the OpenShift AI dashboard by using your OpenShift credentials. OpenShift AI uses the same credentials as OpenShift for the dashboard, notebooks, and all other components. The OpenShift AI dashboard shows the Home page. Note You can navigate back to the OpenShift console at any time by clicking the application launcher. For now, stay in the OpenShift AI dashboard. Next step Setting up your data science project 2.2. Setting up your data science project To implement a data science workflow, you must create a data science project (as described in the following procedure). Projects allow you and your team to organize and collaborate on resources within separated namespaces. From a project you can create multiple workbenches, each with their own IDE environment (for example, JupyterLab), and each with their own connections and cluster storage. In addition, the workbenches can share models and data with pipelines and model servers. Prerequisites Before you begin, log in to Red Hat OpenShift AI . Procedure On the navigation menu, select Data Science Projects . This page lists any existing projects that you have access to. From this page, you can select an existing project (if any) or create a new one. Note It is possible to start a Jupyter notebook by clicking the Launch standalone workbench button, selecting a notebook image, and clicking Start server . However, it would be a one-off Jupyter notebook run in isolation. If you are using your own OpenShift cluster, click Create project . Note If you are using the Red Hat Developer Sandbox, you are provided with a default data science project (for example, myname-dev ). Select it and skip over the next step to the Verification section. Enter a display name and description. Verification You can see your project's initial state. Individual tabs provide more information about the project components and project access permissions: Workbenches are instances of your development and experimentation environment. They typically contain IDEs, such as JupyterLab, RStudio, and Visual Studio Code. Pipelines contain the data science pipelines that are executed within the project. Models allow you to quickly serve a trained model for real-time inference. You can have multiple model servers per data science project. One model server can host multiple models. Cluster storage is a persistent volume that retains the files and data you're working on within a workbench. A workbench has access to one or more cluster storage instances. Connections contain configuration parameters that are required to connect to a data source, such as an S3 object bucket. Permissions define which users and groups can access the project. Next step Storing data with connections 2.3. Storing data with connections Add connections to workbenches to connect your project to data inputs and object storage buckets. A connection is a resource that contains the configuration parameters needed to connect to a data source or data sink, such as an AWS S3 object storage bucket. 
For this tutorial, you run a provided script that creates the following local MinIO storage buckets for you: My Storage - Use this bucket for storing your models and data. You can reuse this bucket and its connection for your notebooks and model servers. Pipelines Artifacts - Use this bucket as storage for your pipeline artifacts. A pipeline artifacts bucket is required when you create a pipeline server. For this tutorial, create this bucket to separate it from the first storage bucket for clarity. Note While it is possible for you to use one storage bucket for both purposes (storing models and data as well as storing pipeline artifacts), this tutorial follows best practice and uses separate storage buckets for each purpose. The provided script also creates a connection to each storage bucket. To run the script that installs local MinIO storage buckets and creates connections to them, follow the steps in Running a script to install local object storage buckets and create connections . Note If you want to use your own S3-compatible object storage buckets (instead of using the provided script), follow the steps in Creating connections to your own S3-compatible object storage . 2.3.1. Running a script to install local object storage buckets and create connections For convenience, run a script (provided in the following procedure) that automatically completes these tasks: Creates a MinIO instance in your project. Creates two storage buckets in that MinIO instance. Generates a random user ID and password for your MinIO instance. Creates two connections in your project, one for each bucket and both using the same credentials. Installs required network policies for service mesh functionality. The script is based on a guide for deploying MinIO . Important The MinIO-based object storage that the script creates is not meant for production usage. Note If you want to connect to your own storage, see Creating connections to your own S3-compatible object storage . Prerequisites You must know the OpenShift resource name for your data science project so that you run the provided script in the correct project. To get the project's resource name: In the OpenShift AI dashboard, select Data Science Projects and then click the ? icon next to the project name. A text box appears with information about the project, including its resource name: Note The following procedure describes how to run the script from the OpenShift console. If you are knowledgeable in OpenShift and can access the cluster from the command line, instead of following the steps in this procedure, you can use the following command to run the script: Procedure In the OpenShift AI dashboard, click the application launcher icon and then select the OpenShift Console option. In the OpenShift console, click + in the top navigation bar. Select your project from the list of projects. Verify that you selected the correct project. Copy the following code and paste it into the Import YAML editor. Note This code gets and applies the setup-s3-no-sa.yaml file. 
--- apiVersion: v1 kind: ServiceAccount metadata: name: demo-setup --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: demo-setup-edit roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: edit subjects: - kind: ServiceAccount name: demo-setup --- apiVersion: batch/v1 kind: Job metadata: name: create-s3-storage spec: selector: {} template: spec: containers: - args: - -ec - |- echo -n 'Setting up Minio instance and connections' oc apply -f https://github.com/rh-aiservices-bu/fraud-detection/raw/main/setup/setup-s3-no-sa.yaml command: - /bin/bash image: image-registry.openshift-image-registry.svc:5000/openshift/tools:latest imagePullPolicy: IfNotPresent name: create-s3-storage restartPolicy: Never serviceAccount: demo-setup serviceAccountName: demo-setup Click Create . Verification In the OpenShift console, you should see a "Resources successfully created" message and the following resources listed: demo-setup demo-setup-edit create-s3-storage In the OpenShift AI dashboard: Select Data Science Projects and then click the name of your project, Fraud detection . Click Connections . You should see two connections listed: My Storage and Pipeline Artifacts . Next step If you want to complete the pipelines section of this tutorial, go to Enabling data science pipelines . Otherwise, skip to Creating a workbench . 2.3.2. Creating connections to your own S3-compatible object storage If you have existing S3-compatible storage buckets that you want to use for this tutorial, you must create a connection to one storage bucket for saving your data and models. If you want to complete the pipelines section of this tutorial, create another connection to a different storage bucket for saving pipeline artifacts. Note If you do not have your own S3-compatible storage, or if you want to use a disposable local MinIO instance instead, skip this section and follow the steps in Running a script to install local object storage buckets and create connections . The provided script automatically completes the following tasks for you: creates a MinIO instance in your project, creates two storage buckets in that MinIO instance, creates two connections in your project, one for each bucket and both using the same credentials, and installs required network policies for service mesh functionality. Prerequisites To create connections to your existing S3-compatible storage buckets, you need the following credential information for the storage buckets: Endpoint URL Access key Secret key Region Bucket name If you don't have this information, contact your storage administrator. Procedure Create a connection for saving your data and models: In the OpenShift AI dashboard, navigate to the page for your data science project. Click the Connections tab, and then click Create connection . In the Add connection modal, for the Connection type select S3 compatible object storage - v1 . Complete the Add connection form and name your connection My Storage . This connection is for saving your personal work, including data and models. Click Create . Create a connection for saving pipeline artifacts: Note If you do not intend to complete the pipelines section of the tutorial, you can skip this step. Click Add connection . Complete the form and name your connection Pipeline Artifacts . Click Create . Verification In the Connections tab for the project, check to see that your connections are listed. Next step If you want to complete the pipelines section of this tutorial, go to Enabling data science pipelines . 
Next step If you want to complete the pipelines section of this tutorial, go to Enabling data science pipelines . Otherwise, skip to Creating a workbench . 2.3.2. Creating connections to your own S3-compatible object storage If you have existing S3-compatible storage buckets that you want to use for this tutorial, you must create a connection to one storage bucket for saving your data and models. If you want to complete the pipelines section of this tutorial, create another connection to a different storage bucket for saving pipeline artifacts. Note If you do not have your own S3-compatible storage, or if you want to use a disposable local MinIO instance instead, skip this section and follow the steps in Running a script to install local object storage buckets and create connections . The provided script automatically completes the following tasks for you: creates a MinIO instance in your project, creates two storage buckets in that MinIO instance, creates two connections in your project, one for each bucket and both using the same credentials, and installs required network policies for service mesh functionality. Prerequisites To create connections to your existing S3-compatible storage buckets, you need the following credential information for the storage buckets: Endpoint URL Access key Secret key Region Bucket name If you do not have this information, contact your storage administrator. Procedure Create a connection for saving your data and models: In the OpenShift AI dashboard, navigate to the page for your data science project. Click the Connections tab, and then click Create connection . In the Add connection modal, for the Connection type select S3 compatible object storage - v1 . Complete the Add connection form and name your connection My Storage . This connection is for saving your personal work, including data and models. Click Create . Create a connection for saving pipeline artifacts: Note If you do not intend to complete the pipelines section of the tutorial, you can skip this step. Click Add connection . Complete the form and name your connection Pipeline Artifacts . Click Create . Verification In the Connections tab for the project, check to see that your connections are listed. Next step If you want to complete the pipelines section of this tutorial, go to Enabling data science pipelines . Otherwise, skip to Creating a workbench . 2.4. Enabling data science pipelines Note If you do not intend to complete the pipelines section of this tutorial, you can skip this step and move on to the section Creating a workbench . In this section, you prepare your tutorial environment so that you can use data science pipelines. Later in this tutorial, you implement an example pipeline by using the JupyterLab Elyra extension. With Elyra, you can create a visual end-to-end pipeline workflow that can be executed in OpenShift AI. Prerequisites You have installed local object storage buckets and created connections, as described in Storing data with connections . Procedure In the OpenShift AI dashboard, on the Fraud Detection page, click the Pipelines tab. Click Configure pipeline server . In the Configure pipeline server form, in the Access key field next to the key icon, click the dropdown menu and then click Pipeline Artifacts to populate the Configure pipeline server form with credentials for the connection. Leave the database configuration as the default. Click Configure pipeline server . Wait until the loading spinner disappears and Start by importing a pipeline is displayed. Important You must wait until the pipeline configuration is complete before you continue and create your workbench. If you create your workbench before the pipeline server is ready, your workbench will not be able to submit pipelines to it. If you have waited more than 5 minutes and the pipeline server configuration does not complete, you can delete the pipeline server and create it again. You can also ask your OpenShift AI administrator to verify that self-signed certificates are added to your cluster as described in Working with certificates . Verification Navigate to the Pipelines tab for the project. Next to Import pipeline , click the action menu (...) and then select View pipeline server configuration . An information box opens and displays the object storage connection information for the pipeline server. Next step Creating a workbench and selecting a notebook image
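Before moving on to the workbench, you can optionally confirm from the command line that the pipeline server finished deploying. This sketch makes two assumptions not stated in this tutorial: that the pipeline server is backed by a DataSciencePipelinesApplication custom resource, and that its pods carry a ds-pipeline name prefix; treat both as assumptions and adjust to what your cluster actually reports.

# Check the pipeline server custom resource (resource kind is an assumption)
oc get datasciencepipelinesapplications -n <your-project-name>
# Pipeline server pods usually carry a ds-pipeline prefix (assumption)
oc get pods -n <your-project-name> | grep ds-pipeline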
[ "apply -n <your-project-name/> -f https://github.com/rh-aiservices-bu/fraud-detection/raw/main/setup/setup-s3.yaml", "--- apiVersion: v1 kind: ServiceAccount metadata: name: demo-setup --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: demo-setup-edit roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: edit subjects: - kind: ServiceAccount name: demo-setup --- apiVersion: batch/v1 kind: Job metadata: name: create-s3-storage spec: selector: {} template: spec: containers: - args: - -ec - |- echo -n 'Setting up Minio instance and connections' oc apply -f https://github.com/rh-aiservices-bu/fraud-detection/raw/main/setup/setup-s3-no-sa.yaml command: - /bin/bash image: image-registry.openshift-image-registry.svc:5000/openshift/tools:latest imagePullPolicy: IfNotPresent name: create-s3-storage restartPolicy: Never serviceAccount: demo-setup serviceAccountName: demo-setup" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/openshift_ai_tutorial_-_fraud_detection_example/setting-up-a-project-and-storage
3.3. Preparing Installation Sources
3.3. Preparing Installation Sources As explained in Chapter 2, Downloading Red Hat Enterprise Linux , two basic types of media are available for Red Hat Enterprise Linux: a minimal boot image and a full installation image (also known as a binary DVD). If you downloaded the binary DVD and created a boot DVD-ROM or USB drive from it, you can proceed with the installation immediately, as this image contains everything you need to install the system. However, if you use the minimal boot image, you must also configure an additional installation source. This is because the minimal boot image only contains the installation program itself and tools needed to boot your system and start the installation; it does not include the software packages to be installed on your system. The full installation DVD ISO image can be used as the source for the installation. If your system requires additional software not provided by Red Hat, configure additional repositories and install these packages after the installation is finished. For information about configuring additional Yum repositories on an installed system, see the Red Hat Enterprise Linux 7 System Administrator's Guide . The installation source can be any of the following: DVD : You can burn the binary DVD ISO image onto a DVD and configure the installation program to install packages from this disk. Hard drive : You can place the binary DVD ISO image on a hard drive and install packages from it. Network location : You can copy the binary DVD ISO image or the installation tree (extracted contents of the binary DVD ISO image) to a network location accessible from the installation system and perform the installation over the network using the following protocols: NFS : The binary DVD ISO image is placed into a Network File System (NFS) share. HTTPS , HTTP or FTP : The installation tree is placed on a network location accessible over HTTP , HTTPS , or FTP . When booting the installation from minimal boot media, you must always configure an additional installation source. When booting the installation from the full binary DVD, it is also possible to configure another installation source, but it is not necessary - the binary DVD ISO image itself contains all packages you need to install the system, and the installation program will automatically configure the binary DVD as the source. You can specify an installation source in any of the following ways: In the installation program's graphical interface: After the graphical installation begins and you select your preferred language, the Installation Summary screen appears. Navigate to the Installation Source screen and select the source you want to configure. For details, see: Section 8.11, "Installation Source" for 64-bit AMD, Intel, and ARM systems Section 13.12, "Installation Source" for IBM Power Systems servers Section 18.12, "Installation Source" for IBM Z Using a boot option: You can specify custom boot options to configure the installation program before it starts. One of these options allows you to specify the installation source to be used. See the inst.repo= option in Section 23.1, "Configuring the Installation System at the Boot Menu" for details. Using a Kickstart file: You can use the install command in a Kickstart file and specify an installation source. See Section 27.3.1, "Kickstart Commands and Options" for details on the install Kickstart command, and Chapter 27, Kickstart Installations for information about Kickstart installations in general.
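For illustration, here are sketches of the inst.repo= boot option forms that correspond to the source types listed above (one per source type); the device name, host name, and paths below are placeholders rather than values from this guide, so substitute your own, and see Section 23.1 for the authoritative syntax:

inst.repo=cdrom
inst.repo=hd:/dev/sdb1:/rhel7-dvd.iso
inst.repo=nfs:myserver.example.com:/rhel7-install/
inst.repo=http://myserver.example.com/rhel7-install/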
3.3.1. Installation Source on a DVD You can burn the binary DVD ISO image onto a DVD and configure the installation program to install packages from this disk while booting the installation from another drive (for example, a minimal boot ISO on a USB flash drive). This procedure is the same as creating bootable optical media - see Section 3.1, "Making an Installation CD or DVD" for more information. When using a DVD as an installation source, make sure the DVD is in the drive when the installation begins. The Anaconda installation program is not able to detect media inserted after the installation begins. 3.3.2. Installation Source on a Hard Drive Hard drive installations use an ISO image of the binary installation DVD. To use a hard drive as the installation source, transfer the binary DVD ISO image to the drive and connect it to the installation system. Then, boot the Anaconda installation program. You can use any type of hard drive accessible to the installation program, including USB flash drives. The binary ISO image can be in any directory of the hard drive, and it can have any name; however, if the ISO image is not in the top-level directory of the drive, or if there is more than one image in the top-level directory of the drive, you will be required to specify the image to be used. This can be done using a boot option, an entry in a Kickstart file, or manually in the Installation Source screen during a graphical installation. A limitation of using a hard drive as the installation source is that the binary DVD ISO image on the hard drive must be on a partition with a file system which Anaconda can mount. These file systems are xfs , ext2 , ext3 , ext4 , and vfat ( FAT32 ). Note that on Microsoft Windows systems, the default file system used when formatting hard drives is NTFS , and the exFAT file system is also available; however, neither of these file systems can be mounted during the installation. If you are creating a hard drive or a USB drive to be used as an installation source on Microsoft Windows, make sure to format the drive as FAT32 . Important The FAT32 file system does not support files larger than 4 GiB. Some Red Hat Enterprise Linux 7 installation media can be larger than that, which means you cannot copy them to a drive with this file system. When using a hard drive or a USB flash drive as an installation source, make sure it is connected to the system when the installation begins. The installation program is not able to detect media inserted after the installation begins. 3.3.3. Installation Source on a Network Placing the installation source on a network has the advantage of allowing you to install multiple systems from a single source, without having to connect and disconnect any physical media. Network-based installations can be especially useful when used together with a TFTP server, which allows you to boot the installation program from the network as well. This approach completely eliminates the need for creating physical media, allowing easy deployment of Red Hat Enterprise Linux on multiple systems at the same time. For further information about setting up a TFTP server, see Chapter 24, Preparing for a Network Installation . 3.3.3.1. Installation Source on an NFS Server The NFS installation method uses an ISO image of the Red Hat Enterprise Linux binary DVD placed in a Network File System server's exported directory , which the installation system must be able to read.
To perform an NFS-based installation, you will need another running system which will act as the NFS host. For more information about NFS servers, see the Red Hat Enterprise Linux 7 Storage Administration Guide . The following procedure is only meant as a basic outline of the process. The precise steps you must take to set up an NFS server will vary based on the system's architecture, operating system, package manager, service manager, and other factors. On Red Hat Enterprise Linux 7 systems, the procedure can be followed exactly as documented. For procedures describing the installation source creation process on earlier releases of Red Hat Enterprise Linux, see the appropriate Installation Guide for that release. Procedure 3.4. Preparing for Installation Using NFS Install the nfs-utils package by running the following command as root :
yum install nfs-utils
Copy the full Red Hat Enterprise Linux 7 binary DVD ISO image to a suitable directory on the NFS server. For example, you can create directory /rhel7-install/ for this purpose and save the ISO image here. Open the /etc/exports file using a text editor and add a line with the following syntax:
/exported_directory/ clients
Replace /exported_directory/ with the full path to the directory holding the ISO image. Instead of clients , use the host name or IP address of the computer which is to be installed from this NFS server, the subnetwork from which all computers are to have access to the ISO image, or the asterisk sign ( * ) if you want to allow any computer with network access to the NFS server to use the ISO image. See the exports(5) man page for detailed information about the format of this field. The following is a basic configuration which makes the /rhel7-install/ directory available as read-only to all clients:
/rhel7-install *
Save the /etc/exports file after finishing the configuration and exit the text editor. Start the nfs service:
systemctl start nfs.service
If the service was already running before you changed the /etc/exports file, enter the following command instead, in order for the running NFS server to reload its configuration:
systemctl reload nfs.service
After completing the procedure above, the ISO image is accessible over NFS and ready to be used as an installation source. When configuring the installation source before or during the installation, use nfs: as the protocol, the server's host name or IP address, the colon sign ( : ), and the directory holding the ISO image. For example, if the server's host name is myserver.example.com and you have saved the ISO image in /rhel7-install/ , specify nfs:myserver.example.com:/rhel7-install/ as the installation source.
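Before starting an installation, you can optionally confirm the export is visible from another machine. A minimal sketch, assuming the client also has the nfs-utils package installed and that /mnt/nfs-test is used as a scratch mount point:

# List the exports offered by the server
showmount -e myserver.example.com
# Test-mount the export and check that the ISO image is present
mkdir -p /mnt/nfs-test
mount -t nfs myserver.example.com:/rhel7-install /mnt/nfs-test
ls /mnt/nfs-test
umount /mnt/nfs-test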
3.3.3.2. Installation Source on an HTTP, HTTPS or FTP Server This installation method allows for a network-based installation using an installation tree, which is a directory containing extracted contents of the binary DVD ISO image and a valid .treeinfo file. The installation source is accessed over HTTP , HTTPS , or FTP . For more information about HTTP and FTP servers, see the Red Hat Enterprise Linux 7 System Administrator's Guide . The following procedure is only meant as a basic outline of the process. The precise steps you must take to set up an FTP server will vary based on the system's architecture, operating system, package manager, service manager, and other factors. On Red Hat Enterprise Linux 7 systems, the procedure can be followed exactly as documented. For procedures describing the installation source creation process on earlier releases of Red Hat Enterprise Linux, see the appropriate Installation Guide for that release. Procedure 3.5. Preparing for Installation Using HTTP or HTTPS Install the httpd package by running the following command as root :
yum install httpd
An HTTPS server needs additional configuration. For detailed information, see section Setting Up an SSL Server in the Red Hat Enterprise Linux 7 System Administrator's Guide. However, HTTPS is not necessary in most cases, because no sensitive data is sent between the installation source and the installer, and HTTP is sufficient. Warning If your Apache web server configuration enables SSL security, make sure to only enable the TLSv1 protocol, and disable SSLv2 and SSLv3 . This is due to the POODLE SSL vulnerability (CVE-2014-3566). See https://access.redhat.com/solutions/1232413 for details. Important If you decide to use HTTPS and the server is using a self-signed certificate, you must boot the installer with the noverifyssl option. Copy the full Red Hat Enterprise Linux 7 binary DVD ISO image to the HTTP(S) server. Mount the binary DVD ISO image, using the mount command, to a suitable directory:
mount -o loop,ro -t iso9660 /image_directory/image.iso /mount_point/
Replace /image_directory/image.iso with the path to the binary DVD ISO image, and /mount_point/ with the path to the directory in which you want the content of the ISO image to appear. For example, you can create directory /mnt/rhel7-install/ for this purpose and use that as the parameter of the mount command. Copy the files from the mounted image to the HTTP server root:
cp -r /mnt/rhel7-install/ /var/www/html/
This command creates the /var/www/html/rhel7-install/ directory with the content of the image. Start the httpd service:
systemctl start httpd.service
After completing the procedure above, the installation tree is accessible and ready to be used as the installation source. When configuring the installation source before or during the installation, use http:// or https:// as the protocol, the server's host name or IP address, and the directory in which you have stored the files from the ISO image, relative to the HTTP server root. For example, if you are using HTTP , the server's host name is myserver.example.com , and you have copied the files from the image to /var/www/html/rhel7-install/ , specify http://myserver.example.com/rhel7-install/ as the installation source.
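To optionally confirm that the installation tree is reachable before you begin, you can request the .treeinfo file mentioned above from another machine. A sketch, assuming the curl package is available on that machine and the server uses the default document root:

# The installer reads .treeinfo to locate the tree; seeing its
# contents returned indicates the source is usable
curl http://myserver.example.com/rhel7-install/.treeinfo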
Procedure 3.6. Preparing for Installation Using FTP Install the vsftpd package by running the following command as root :
yum install vsftpd
Optionally, open the /etc/vsftpd/vsftpd.conf configuration file in a text editor, and edit any options you want to change. For available options, see the vsftpd.conf(5) man page. The rest of this procedure assumes that default options are used; notably, to follow the rest of the procedure, anonymous users of the FTP server must be permitted to download files. Warning If you configured SSL/TLS security in your vsftpd.conf file, make sure to only enable the TLSv1 protocol, and disable SSLv2 and SSLv3 . This is due to the POODLE SSL vulnerability (CVE-2014-3566). See https://access.redhat.com/solutions/1234773 for details. Copy the full Red Hat Enterprise Linux 7 binary DVD ISO image to the FTP server. Mount the binary DVD ISO image, using the mount command, to a suitable directory:
mount -o loop,ro -t iso9660 /image_directory/image.iso /mount_point
Replace /image_directory/image.iso with the path to the binary DVD ISO image, and /mount_point with the path to the directory in which you want the content of the ISO image to appear. For example, you can create directory /mnt/rhel7-install/ for this purpose and use that as the parameter of the mount command. Copy the files from the mounted image to the FTP server root:
cp -r /mnt/rhel7-install/ /var/ftp/
This command creates the /var/ftp/rhel7-install/ directory with the content of the image. Start the vsftpd service:
systemctl start vsftpd.service
If the service was already running before you changed the /etc/vsftpd/vsftpd.conf file, restart it to ensure the edited file is loaded. To restart, execute the following command:
systemctl restart vsftpd.service
After completing the procedure above, the installation tree is accessible and ready to be used as the installation source. When configuring the installation source before or during the installation, use ftp:// as the protocol, the server's host name or IP address, and the directory in which you have stored the files from the ISO image, relative to the FTP server root. For example, if the server's host name is myserver.example.com and you have copied the files from the image to /var/ftp/rhel7-install/ , specify ftp://myserver.example.com/rhel7-install/ as the installation source. 3.3.3.3. Firewall Considerations for Network-based Installations When using a network-based installation source, make sure that your firewall allows the server you are installing to access the remote installation source. The following table shows which ports must be open for each type of network-based installation.

Table 3.1. Ports Used by Network Protocols
Protocol used    Ports to open
FTP              21
HTTP             80
HTTPS            443
NFS              2049, 111, 20048
TFTP             69

For information about opening specific firewall ports, see the Red Hat Enterprise Linux 7 Security Guide .
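On Red Hat Enterprise Linux 7 servers running firewalld, the ports in Table 3.1 map to predefined firewalld services. A sketch of opening them on the machine hosting the installation source; run only the lines matching the protocol you actually serve, and see the Security Guide for the authoritative procedure:

firewall-cmd --permanent --add-service=ftp        # port 21
firewall-cmd --permanent --add-service=http       # port 80
firewall-cmd --permanent --add-service=https      # port 443
firewall-cmd --permanent --add-service=nfs        # port 2049
firewall-cmd --permanent --add-service=rpc-bind   # port 111
firewall-cmd --permanent --add-service=mountd     # port 20048
firewall-cmd --permanent --add-service=tftp       # port 69
firewall-cmd --reload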
[ "yum install nfs-utils", "/exported_directory/ clients", "/rhel7-install *", "systemctl start nfs.service", "systemctl reload nfs.service", "yum install httpd", "mount -o loop,ro -t iso9660 /image_directory/image.iso /mount_point/", "cp -r /mnt/rhel7-install/ /var/www/html/", "systemctl start httpd.service", "yum install vsftpd", "mount -o loop,ro -t iso9660 /image_directory/image.iso /mount_point", "cp -r /mnt/rhel7-install/ /var/ftp/", "systemctl start vsftpd.service", "systemctl restart vsftpd.service" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-making-media-additional-sources
Logging
Logging OpenShift Container Platform 4.9 OpenShift Logging installation, usage, and release notes Red Hat OpenShift Documentation Team
[ "oc -n openshift-logging edit ClusterLogging instance", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: \"openshift-logging\" annotations: logging.openshift.io/preview-vector-collector: enabled spec: collection: logs: type: \"vector\" vector: {}", "oc delete pod -l component=collector", "oc delete pod -l component=collector", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" 1 namespace: \"openshift-logging\" spec: managementState: \"Managed\" 2 logStore: type: \"elasticsearch\" 3 retentionPolicy: 4 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3 5 storage: storageClassName: \"<storage_class_name>\" 6 size: 200G resources: 7 limits: memory: \"16Gi\" requests: memory: \"16Gi\" proxy: 8 resources: limits: memory: 256Mi requests: memory: 256Mi redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" 9 kibana: replicas: 1 collection: logs: type: \"fluentd\" 10 fluentd: {}", "oc get deployment", "cluster-logging-operator 1/1 1 1 18h elasticsearch-cd-x6kdekli-1 0/1 1 0 6m54s elasticsearch-cdm-x6kdekli-1 1/1 1 1 18h elasticsearch-cdm-x6kdekli-2 0/1 1 0 6m49s elasticsearch-cdm-x6kdekli-3 0/1 1 0 6m44s", "apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\" 2", "oc create -f <file-name>.yaml", "oc create -f eo-namespace.yaml", "apiVersion: v1 kind: Namespace metadata: name: openshift-logging annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\"", "oc create -f <file-name>.yaml", "oc create -f olo-namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-operators-redhat namespace: openshift-operators-redhat 1 spec: {}", "oc create -f <file-name>.yaml", "oc create -f eo-og.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: \"elasticsearch-operator\" namespace: \"openshift-operators-redhat\" 1 spec: channel: \"stable-5.1\" 2 installPlanApproval: \"Automatic\" 3 source: \"redhat-operators\" 4 sourceNamespace: \"openshift-marketplace\" name: \"elasticsearch-operator\"", "oc create -f <file-name>.yaml", "oc create -f eo-sub.yaml", "oc get csv --all-namespaces", "NAMESPACE NAME DISPLAY VERSION REPLACES PHASE default elasticsearch-operator.5.1.0-202007012112.p0 OpenShift Elasticsearch Operator 5.1.0-202007012112.p0 Succeeded kube-node-lease elasticsearch-operator.5.1.0-202007012112.p0 OpenShift Elasticsearch Operator 5.1.0-202007012112.p0 Succeeded kube-public elasticsearch-operator.5.1.0-202007012112.p0 OpenShift Elasticsearch Operator 5.1.0-202007012112.p0 Succeeded kube-system elasticsearch-operator.5.1.0-202007012112.p0 OpenShift Elasticsearch Operator 5.1.0-202007012112.p0 Succeeded openshift-apiserver-operator elasticsearch-operator.5.1.0-202007012112.p0 OpenShift Elasticsearch Operator 5.1.0-202007012112.p0 Succeeded openshift-apiserver elasticsearch-operator.5.1.0-202007012112.p0 OpenShift Elasticsearch Operator 5.1.0-202007012112.p0 Succeeded openshift-authentication-operator elasticsearch-operator.5.1.0-202007012112.p0 OpenShift Elasticsearch Operator 5.1.0-202007012112.p0 Succeeded openshift-authentication elasticsearch-operator.5.1.0-202007012112.p0 OpenShift Elasticsearch Operator 5.1.0-202007012112.p0 Succeeded", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging 
namespace: openshift-logging 1 spec: targetNamespaces: - openshift-logging 2", "oc create -f <file-name>.yaml", "oc create -f olo-og.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging 1 spec: channel: \"stable\" 2 name: cluster-logging source: redhat-operators 3 sourceNamespace: openshift-marketplace", "oc create -f <file-name>.yaml", "oc create -f olo-sub.yaml", "oc get csv -n openshift-logging", "NAMESPACE NAME DISPLAY VERSION REPLACES PHASE openshift-logging clusterlogging.5.1.0-202007012112.p0 OpenShift Logging 5.1.0-202007012112.p0 Succeeded", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" 1 namespace: \"openshift-logging\" spec: managementState: \"Managed\" 2 logStore: type: \"elasticsearch\" 3 retentionPolicy: 4 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3 5 storage: storageClassName: \"<storage-class-name>\" 6 size: 200G resources: 7 limits: memory: \"16Gi\" requests: memory: \"16Gi\" proxy: 8 resources: limits: memory: 256Mi requests: memory: 256Mi redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" 9 kibana: replicas: 1 collection: logs: type: \"fluentd\" 10 fluentd: {}", "oc get deployment", "cluster-logging-operator 1/1 1 1 18h elasticsearch-cd-x6kdekli-1 1/1 1 0 6m54s elasticsearch-cdm-x6kdekli-1 1/1 1 1 18h elasticsearch-cdm-x6kdekli-2 1/1 1 0 6m49s elasticsearch-cdm-x6kdekli-3 1/1 1 0 6m44s", "oc create -f <file-name>.yaml", "oc create -f olo-instance.yaml", "oc get pods -n openshift-logging", "NAME READY STATUS RESTARTS AGE cluster-logging-operator-66f77ffccb-ppzbg 1/1 Running 0 7m elasticsearch-cdm-ftuhduuw-1-ffc4b9566-q6bhp 2/2 Running 0 2m40s elasticsearch-cdm-ftuhduuw-2-7b4994dbfc-rd2gc 2/2 Running 0 2m36s elasticsearch-cdm-ftuhduuw-3-84b5ff7ff8-gqnm2 2/2 Running 0 2m4s collector-587vb 1/1 Running 0 2m26s collector-7mpb9 1/1 Running 0 2m30s collector-flm6j 1/1 Running 0 2m33s collector-gn4rn 1/1 Running 0 2m26s collector-nlgb6 1/1 Running 0 2m30s collector-snpkt 1/1 Running 0 2m28s kibana-d6d5668c5-rppqm 2/2 Running 0 2m39s", "oc auth can-i get pods/log -n <project>", "yes", "oc adm pod-network join-projects --to=openshift-operators-redhat openshift-logging", "oc label namespace openshift-operators-redhat project=openshift-operators-redhat", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-monitoring-ingress-operators-redhat spec: ingress: - from: - podSelector: {} - from: - namespaceSelector: matchLabels: project: \"openshift-operators-redhat\" - from: - namespaceSelector: matchLabels: name: \"openshift-monitoring\" - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" 1 namespace: \"openshift-logging\" 2 spec: managementState: \"Managed\" 3 logStore: type: \"elasticsearch\" 4 retentionPolicy: application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3 resources: limits: memory: 16Gi requests: cpu: 500m memory: 16Gi storage: storageClassName: \"gp2\" size: \"200G\" redundancyPolicy: \"SingleRedundancy\" visualization: 5 type: \"kibana\" kibana: resources: limits: memory: 736Mi requests: cpu: 100m memory: 736Mi replicas: 1 collection: 6 logs: type: \"fluentd\" fluentd: resources: limits: memory: 736Mi requests: cpu: 100m memory: 736Mi", "oc get pods 
--selector component=collector -o wide -n openshift-logging", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES fluentd-8d69v 1/1 Running 0 134m 10.130.2.30 master1.example.com <none> <none> fluentd-bd225 1/1 Running 0 134m 10.131.1.11 master2.example.com <none> <none> fluentd-cvrzs 1/1 Running 0 134m 10.130.0.21 master3.example.com <none> <none> fluentd-gpqg2 1/1 Running 0 134m 10.128.2.27 worker1.example.com <none> <none> fluentd-l9j7j 1/1 Running 0 134m 10.129.2.31 worker2.example.com <none> <none>", "oc -n openshift-logging edit ClusterLogging instance", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging spec: collection: logs: fluentd: resources: limits: 1 memory: 736Mi requests: cpu: 100m memory: 736Mi", "oc edit ClusterLogging instance", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: forwarder: fluentd: buffer: chunkLimitSize: 8m 1 flushInterval: 5s 2 flushMode: interval 3 flushThreadCount: 3 4 overflowAction: throw_exception 5 retryMaxInterval: \"300s\" 6 retryType: periodic 7 retryWait: 1s 8 totalLimitSize: 32m 9", "oc get pods -l component=collector -n openshift-logging", "oc extract configmap/fluentd --confirm", "<buffer> @type file path '/var/lib/fluentd/default' flush_mode interval flush_interval 5s flush_thread_count 3 retry_type periodic retry_wait 1s retry_max_interval 300s retry_timeout 60m queued_chunks_limit_size \"#{ENV['BUFFER_QUEUE_LIMIT'] || '32'}\" total_limit_size 32m chunk_limit_size 8m overflow_action throw_exception </buffer>", "outputRefs: - default", "oc edit ClusterLogging instance", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: \"openshift-logging\" spec: managementState: \"Managed\" collection: logs: type: \"fluentd\" fluentd: {}", "oc get pods -l component=collector -n openshift-logging", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: pipelines: 1 - name: all-to-default inputRefs: - infrastructure - application - audit outputRefs: - default", "apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: elasticsearch-insecure type: \"elasticsearch\" url: http://elasticsearch-insecure.messaging.svc.cluster.local insecure: true - name: elasticsearch-secure type: \"elasticsearch\" url: https://elasticsearch-secure.messaging.svc.cluster.local secret: name: es-audit - name: secureforward-offcluster type: \"fluentdForward\" url: https://secureforward.offcluster.com:24224 secret: name: secureforward pipelines: - name: container-logs inputRefs: - application outputRefs: - secureforward-offcluster - name: infra-logs inputRefs: - infrastructure outputRefs: - elasticsearch-insecure - name: audit-logs inputRefs: - audit outputRefs: - elasticsearch-secure - default 1", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" retentionPolicy: 1 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3", "apiVersion: \"logging.openshift.io/v1\" kind: \"Elasticsearch\" metadata: name: \"elasticsearch\" spec: indexManagement: policies: 1 - name: infra-policy phases: delete: minAge: 7d 2 hot: actions: rollover: maxAge: 8h 3 pollInterval: 15m 4", "oc get cronjob", "NAME SCHEDULE SUSPEND ACTIVE 
LAST SCHEDULE AGE elasticsearch-im-app */15 * * * * False 0 <none> 4s elasticsearch-im-audit */15 * * * * False 0 <none> 4s elasticsearch-im-infra */15 * * * * False 0 <none> 4s", "oc edit ClusterLogging instance", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" . spec: logStore: type: \"elasticsearch\" elasticsearch: 1 resources: limits: 2 memory: \"32Gi\" requests: 3 cpu: \"1\" memory: \"16Gi\" proxy: 4 resources: limits: memory: 100Mi requests: memory: 100Mi", "resources: limits: 1 memory: \"32Gi\" requests: 2 cpu: \"8\" memory: \"32Gi\"", "oc edit clusterlogging instance", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" . spec: logStore: type: \"elasticsearch\" elasticsearch: redundancyPolicy: \"SingleRedundancy\" 1", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: \"gp2\" size: \"200G\"", "spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: {}", "oc project openshift-logging", "oc get pods -l component=elasticsearch-", "oc -n openshift-logging patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"nodeSelector\":{\"logging-infra-collector\": \"false\"}}}}}'", "oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_flush/synced\" -XPOST", "oc exec -c elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_flush/synced\" -XPOST", "{\"_shards\":{\"total\":4,\"successful\":4,\"failed\":0},\".security\":{\"total\":2,\"successful\":2,\"failed\":0},\".kibana_1\":{\"total\":2,\"successful\":2,\"failed\":0}}", "oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"primaries\" } }'", "oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"primaries\" } }'", "{\"acknowledged\":true,\"persistent\":{\"cluster\":{\"routing\":{\"allocation\":{\"enable\":\"primaries\"}}}},\"transient\":", "oc rollout resume deployment/<deployment-name>", "oc rollout resume deployment/elasticsearch-cdm-0-1", "deployment.extensions/elasticsearch-cdm-0-1 resumed", "oc get pods -l component=elasticsearch-", "NAME READY STATUS RESTARTS AGE elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6k 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-2-f799564cb-l9mj7 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-3-585968dc68-k7kjr 2/2 Running 0 22h", "oc rollout pause deployment/<deployment-name>", "oc rollout pause deployment/elasticsearch-cdm-0-1", "deployment.extensions/elasticsearch-cdm-0-1 paused", "oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=_cluster/health?pretty=true", "oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=_cluster/health?pretty=true", "{ \"cluster_name\" : \"elasticsearch\", \"status\" : \"yellow\", 1 \"timed_out\" : false, \"number_of_nodes\" : 3, \"number_of_data_nodes\" : 3, \"active_primary_shards\" : 8, \"active_shards\" : 16, \"relocating_shards\" : 0, \"initializing_shards\" : 0, \"unassigned_shards\" : 1, \"delayed_unassigned_shards\" : 0, \"number_of_pending_tasks\" : 0, \"number_of_in_flight_fetch\" : 0, \"task_max_waiting_in_queue_millis\" : 0, \"active_shards_percent_as_number\" : 100.0 }", "oc 
exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"all\" } }'", "oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"all\" } }'", "{ \"acknowledged\" : true, \"persistent\" : { }, \"transient\" : { \"cluster\" : { \"routing\" : { \"allocation\" : { \"enable\" : \"all\" } } } } }", "oc -n openshift-logging patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"nodeSelector\":{\"logging-infra-collector\": \"true\"}}}}}'", "oc get service elasticsearch -o jsonpath={.spec.clusterIP} -n openshift-logging", "172.30.183.229", "oc get service elasticsearch -n openshift-logging", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE elasticsearch ClusterIP 172.30.183.229 <none> 9200/TCP 22h", "oc exec elasticsearch-cdm-oplnhinv-1-5746475887-fj2f8 -n openshift-logging -- curl -tlsv1.2 --insecure -H \"Authorization: Bearer USD{token}\" \"https://172.30.183.229:9200/_cat/health\"", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 29 100 29 0 0 108 0 --:--:-- --:--:-- --:--:-- 108", "oc project openshift-logging", "oc extract secret/elasticsearch --to=. --keys=admin-ca", "admin-ca", "apiVersion: route.openshift.io/v1 kind: Route metadata: name: elasticsearch namespace: openshift-logging spec: host: to: kind: Service name: elasticsearch tls: termination: reencrypt destinationCACertificate: | 1", "cat ./admin-ca | sed -e \"s/^/ /\" >> <file-name>.yaml", "oc create -f <file-name>.yaml", "route.route.openshift.io/elasticsearch created", "token=USD(oc whoami -t)", "routeES=`oc get route elasticsearch -o jsonpath={.spec.host}`", "curl -tlsv1.2 --insecure -H \"Authorization: Bearer USD{token}\" \"https://USD{routeES}\"", "{ \"name\" : \"elasticsearch-cdm-i40ktba0-1\", \"cluster_name\" : \"elasticsearch\", \"cluster_uuid\" : \"0eY-tJzcR3KOdpgeMJo-MQ\", \"version\" : { \"number\" : \"6.8.1\", \"build_flavor\" : \"oss\", \"build_type\" : \"zip\", \"build_hash\" : \"Unknown\", \"build_date\" : \"Unknown\", \"build_snapshot\" : true, \"lucene_version\" : \"7.7.0\", \"minimum_wire_compatibility_version\" : \"5.6.0\", \"minimum_index_compatibility_version\" : \"5.0.0\" }, \"<tagline>\" : \"<for search>\" }", "oc -n openshift-logging edit ClusterLogging instance", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 resources: 1 limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: storageClassName: \"gp2\" size: \"200G\" redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: resources: 2 limits: memory: 1Gi requests: cpu: 500m memory: 1Gi proxy: resources: 3 limits: memory: 100Mi requests: cpu: 100m memory: 100Mi replicas: 2 collection: logs: type: \"fluentd\" fluentd: resources: 4 limits: memory: 736Mi requests: cpu: 200m memory: 736Mi", "oc edit ClusterLogging instance", "oc edit ClusterLogging instance apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" . 
spec: visualization: type: \"kibana\" kibana: replicas: 1 1", "oc -n openshift-logging edit ClusterLogging instance", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 resources: 1 limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: storageClassName: \"gp2\" size: \"200G\" redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: resources: 2 limits: memory: 1Gi requests: cpu: 500m memory: 1Gi proxy: resources: 3 limits: memory: 100Mi requests: cpu: 100m memory: 100Mi replicas: 2 collection: logs: type: \"fluentd\" fluentd: resources: 4 limits: memory: 736Mi requests: cpu: 200m memory: 736Mi", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 tolerations: 1 - key: \"logging\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 6000 resources: limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: {} redundancyPolicy: \"ZeroRedundancy\" visualization: type: \"kibana\" kibana: tolerations: 2 - key: \"logging\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 6000 resources: limits: memory: 2Gi requests: cpu: 100m memory: 1Gi replicas: 1 collection: logs: type: \"fluentd\" fluentd: tolerations: 3 - key: \"logging\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 6000 resources: limits: memory: 2Gi requests: cpu: 100m memory: 1Gi", "tolerations: - effect: \"NoExecute\" key: \"node.kubernetes.io/disk-pressure\" operator: \"Exists\"", "oc adm taint nodes <node-name> <key>=<value>:<effect>", "oc adm taint nodes node1 elasticsearch=node:NoExecute", "logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 1 tolerations: - key: \"elasticsearch\" 1 operator: \"Exists\" 2 effect: \"NoExecute\" 3 tolerationSeconds: 6000 4", "oc adm taint nodes <node-name> <key>=<value>:<effect>", "oc adm taint nodes node1 kibana=node:NoExecute", "visualization: type: \"kibana\" kibana: tolerations: - key: \"kibana\" 1 operator: \"Exists\" 2 effect: \"NoExecute\" 3 tolerationSeconds: 6000 4", "tolerations: - key: \"node-role.kubernetes.io/master\" operator: \"Exists\" effect: \"NoExecute\"", "oc adm taint nodes <node-name> <key>=<value>:<effect>", "oc adm taint nodes node1 collector=node:NoExecute", "collection: logs: type: \"fluentd\" fluentd: tolerations: - key: \"collector\" 1 operator: \"Exists\" 2 effect: \"NoExecute\" 3 tolerationSeconds: 6000 4", "oc edit ClusterLogging instance", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging spec: collection: logs: fluentd: resources: null type: fluentd logStore: elasticsearch: nodeCount: 3 nodeSelector: 1 node-role.kubernetes.io/infra: '' tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved redundancyPolicy: SingleRedundancy resources: limits: cpu: 500m memory: 16Gi requests: cpu: 500m memory: 16Gi storage: {} type: elasticsearch managementState: Managed visualization: kibana: nodeSelector: 2 node-role.kubernetes.io/infra: '' tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved proxy: resources: null replicas: 1 resources: null type: kibana", "oc 
get pod kibana-5b8bdf44f9-ccpq9 -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-5b8bdf44f9-ccpq9 2/2 Running 0 27s 10.129.2.18 ip-10-0-147-79.us-east-2.compute.internal <none> <none>", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-133-216.us-east-2.compute.internal Ready master 60m v1.22.1 ip-10-0-139-146.us-east-2.compute.internal Ready master 60m v1.22.1 ip-10-0-139-192.us-east-2.compute.internal Ready worker 51m v1.22.1 ip-10-0-139-241.us-east-2.compute.internal Ready worker 51m v1.22.1 ip-10-0-147-79.us-east-2.compute.internal Ready worker 51m v1.22.1 ip-10-0-152-241.us-east-2.compute.internal Ready master 60m v1.22.1 ip-10-0-139-48.us-east-2.compute.internal Ready infra 51m v1.22.1", "oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml", "kind: Node apiVersion: v1 metadata: name: ip-10-0-139-48.us-east-2.compute.internal selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751 resourceVersion: '39083' creationTimestamp: '2020-04-13T19:07:55Z' labels: node-role.kubernetes.io/infra: ''", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging spec: visualization: kibana: nodeSelector: 1 node-role.kubernetes.io/infra: '' proxy: resources: null replicas: 1 resources: null type: kibana", "oc get pods", "NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 29m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 28m fluentd-42dzz 1/1 Running 0 28m fluentd-d74rq 1/1 Running 0 28m fluentd-m5vr9 1/1 Running 0 28m fluentd-nkxl7 1/1 Running 0 28m fluentd-pdvqb 1/1 Running 0 28m fluentd-tflh6 1/1 Running 0 28m kibana-5b8bdf44f9-ccpq9 2/2 Terminating 0 4m11s kibana-7d85dcffc8-bfpfp 2/2 Running 0 33s", "oc get pod kibana-7d85dcffc8-bfpfp -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-7d85dcffc8-bfpfp 2/2 Running 0 43s 10.131.0.22 ip-10-0-139-48.us-east-2.compute.internal <none> <none>", "oc get pods", "NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 30m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 29m fluentd-42dzz 1/1 Running 0 29m fluentd-d74rq 1/1 Running 0 29m fluentd-m5vr9 1/1 Running 0 29m fluentd-nkxl7 1/1 Running 0 29m fluentd-pdvqb 1/1 Running 0 29m fluentd-tflh6 1/1 Running 0 29m kibana-7d85dcffc8-bfpfp 2/2 Running 0 62s", "variant: openshift version: 4.9.0 metadata: name: 40-worker-custom-journald labels: machineconfiguration.openshift.io/role: \"worker\" storage: files: - path: /etc/systemd/journald.conf mode: 0644 1 overwrite: true contents: inline: | Compress=yes 2 ForwardToConsole=no 3 ForwardToSyslog=no MaxRetentionSec=1month 4 RateLimitBurst=10000 5 RateLimitIntervalSec=30s Storage=persistent 6 SyncIntervalSec=1s 7 SystemMaxUse=8G 8 SystemKeepFree=20% 9 SystemMaxFileSize=10M 10", "butane 40-worker-custom-journald.bu -o 40-worker-custom-journald.yaml", "oc apply -f 40-worker-custom-journald.yaml", "oc describe machineconfigpool/worker", "Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool Conditions: Message: Reason: All nodes are updating to 
rendered-worker-913514517bcea7c93bd446f4830bc64e", "Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing.", "oc logs -f <pod_name> -c <container_name>", "oc logs ruby-58cd97df55-mww7r", "oc logs -f ruby-57f7f4855b-znl92 -c ruby", "oc logs <object_type>/<resource_name> 1", "oc logs deployment/ruby", "oc auth can-i get pods/log -n <project>", "yes", "oc auth can-i get pods/log -n <project>", "yes", "{ \"_index\": \"infra-000001\", \"_type\": \"_doc\", \"_id\": \"YmJmYTBlNDkZTRmLTliMGQtMjE3NmFiOGUyOWM3\", \"_version\": 1, \"_score\": null, \"_source\": { \"docker\": { \"container_id\": \"f85fa55bbef7bb783f041066be1e7c267a6b88c4603dfce213e32c1\" }, \"kubernetes\": { \"container_name\": \"registry-server\", \"namespace_name\": \"openshift-marketplace\", \"pod_name\": \"redhat-marketplace-n64gc\", \"container_image\": \"registry.redhat.io/redhat/redhat-marketplace-index:v4.7\", \"container_image_id\": \"registry.redhat.io/redhat/redhat-marketplace-index@sha256:65fc0c45aabb95809e376feb065771ecda9e5e59cc8b3024c4545c168f\", \"pod_id\": \"8f594ea2-c866-4b5c-a1c8-a50756704b2a\", \"host\": \"ip-10-0-182-28.us-east-2.compute.internal\", \"master_url\": \"https://kubernetes.default.svc\", \"namespace_id\": \"3abab127-7669-4eb3-b9ef-44c04ad68d38\", \"namespace_labels\": { \"openshift_io/cluster-monitoring\": \"true\" }, \"flat_labels\": [ \"catalogsource_operators_coreos_com/update=redhat-marketplace\" ] }, \"message\": \"time=\\\"2020-09-23T20:47:03Z\\\" level=info msg=\\\"serving registry\\\" database=/database/index.db port=50051\", \"level\": \"unknown\", \"hostname\": \"ip-10-0-182-28.internal\", \"pipeline_metadata\": { \"collector\": { \"ipaddr4\": \"10.0.182.28\", \"inputname\": \"fluent-plugin-systemd\", \"name\": \"fluentd\", \"received_at\": \"2020-09-23T20:47:15.007583+00:00\", \"version\": \"1.7.4 1.6.0\" } }, \"@timestamp\": \"2020-09-23T20:47:03.422465+00:00\", \"viaq_msg_id\": \"YmJmYTBlNDktMDMGQtMjE3NmFiOGUyOWM3\", \"openshift\": { \"labels\": { \"logging\": \"infra\" } } }, \"fields\": { \"@timestamp\": [ \"2020-09-23T20:47:03.422Z\" ], \"pipeline_metadata.collector.received_at\": [ \"2020-09-23T20:47:15.007Z\" ] }, \"sort\": [ 1600894023422 ] }", "apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: elasticsearch-secure 3 type: \"elasticsearch\" url: https://elasticsearch.secure.com:9200 secret: name: elasticsearch - name: elasticsearch-insecure 4 type: \"elasticsearch\" url: http://elasticsearch.insecure.com:9200 - name: kafka-app 5 type: \"kafka\" url: tls://kafka.secure.com:9093/app-topic inputs: 6 - name: my-app-logs application: namespaces: - my-project pipelines: - name: audit-logs 7 inputRefs: - audit outputRefs: - elasticsearch-secure - default parse: json 8 labels: secure: \"true\" 9 datacenter: \"east\" - name: infrastructure-logs 10 inputRefs: - infrastructure outputRefs: - elasticsearch-insecure labels: datacenter: \"west\" - name: my-app 11 inputRefs: - my-app-logs outputRefs: - default - inputRefs: 12 - application outputRefs: - kafka-app labels: datacenter: \"south\"", "oc create secret generic -n openshift-logging <my-secret> --from-file=tls.key=<your_key_file> --from-file=tls.crt=<your_crt_file> --from-file=ca-bundle.crt=<your_bundle_file> --from-literal=username=<your_username> --from-literal=password=<your_password>", "apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance 1 
namespace: openshift-logging 2 spec: outputs: - name: elasticsearch-insecure 3 type: \"elasticsearch\" 4 url: http://elasticsearch.insecure.com:9200 5 - name: elasticsearch-secure type: \"elasticsearch\" url: https://elasticsearch.secure.com:9200 6 secret: name: es-secret 7 pipelines: - name: application-logs 8 inputRefs: 9 - application - audit outputRefs: - elasticsearch-secure 10 - default 11 parse: json 12 labels: myLabel: \"myValue\" 13 - name: infrastructure-audit-logs 14 inputRefs: - infrastructure outputRefs: - elasticsearch-insecure labels: logs: \"audit-infra\"", "oc create -f <file-name>.yaml", "apiVersion: v1 kind: Secret metadata: name: openshift-test-secret data: username: dGVzdHVzZXJuYW1lCg== password: dGVzdHBhc3N3b3JkCg==", "oc create secret -n openshift-logging openshift-test-secret.yaml", "kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: elasticsearch type: \"elasticsearch\" url: https://elasticsearch.secure.com:9200 secret: name: openshift-test-secret", "oc create -f <file-name>.yaml", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: fluentd-server-secure 3 type: fluentdForward 4 url: 'tls://fluentdserver.security.example.com:24224' 5 secret: 6 name: fluentd-secret - name: fluentd-server-insecure type: fluentdForward url: 'tcp://fluentdserver.home.example.com:24224' pipelines: - name: forward-to-fluentd-secure 7 inputRefs: 8 - application - audit outputRefs: - fluentd-server-secure 9 - default 10 parse: json 11 labels: clusterId: \"C1234\" 12 - name: forward-to-fluentd-insecure 13 inputRefs: - infrastructure outputRefs: - fluentd-server-insecure labels: clusterId: \"C1234\"", "oc create -f <file-name>.yaml", "input { tcp { codec => fluent { nanosecond_precision => true } port => 24114 } } filter { } output { stdout { codec => rubydebug } }", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: rsyslog-east 3 type: syslog 4 syslog: 5 facility: local0 rfc: RFC3164 payloadKey: message severity: informational url: 'tls://rsyslogserver.east.example.com:514' 6 secret: 7 name: syslog-secret - name: rsyslog-west type: syslog syslog: appName: myapp facility: user msgID: mymsg procID: myproc rfc: RFC5424 severity: debug url: 'udp://rsyslogserver.west.example.com:514' pipelines: - name: syslog-east 8 inputRefs: 9 - audit - application outputRefs: 10 - rsyslog-east - default 11 parse: json 12 labels: secure: \"true\" 13 syslog: \"east\" - name: syslog-west 14 inputRefs: - infrastructure outputRefs: - rsyslog-west - default labels: syslog: \"west\"", "oc create -f <file-name>.yaml", "spec: outputs: - name: syslogout syslog: addLogSource: true facility: user payloadKey: message rfc: RFC3164 severity: debug tag: mytag type: syslog url: tls://syslog-receiver.openshift-logging.svc:24224 pipelines: - inputRefs: - application name: test-app outputRefs: - syslogout", "<15>1 2020-11-15T17:06:14+00:00 fluentd-9hkb4 mytag - - - {\"msgcontent\"=>\"Message Contents\", \"timestamp\"=>\"2020-11-15 17:06:09\", \"tag_key\"=>\"rec_tag\", \"index\"=>56}", "<15>1 2020-11-16T10:49:37+00:00 crc-j55b9-master-0 mytag - - - namespace_name=clo-test-6327,pod_name=log-generator-ff9746c49-qxm7l,container_name=log-generator,message={\"msgcontent\":\"My life is my message\", \"timestamp\":\"2020-11-16 10:49:36\", \"tag_key\":\"rec_tag\", \"index\":76}", "apiVersion: v1 kind: 
Secret metadata: name: cw-secret namespace: openshift-logging data: aws_access_key_id: QUtJQUlPU0ZPRE5ON0VYQU1QTEUK aws_secret_access_key: d0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQo=", "oc apply -f cw-secret.yaml", "apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: cw 3 type: cloudwatch 4 cloudwatch: groupBy: logType 5 groupPrefix: <group prefix> 6 region: us-east-2 7 secret: name: cw-secret 8 pipelines: - name: infra-logs 9 inputRefs: 10 - infrastructure - audit - application outputRefs: - cw 11", "oc create -f <file-name>.yaml", "oc get Infrastructure/cluster -ojson | jq .status.infrastructureName \"mycluster-7977k\"", "oc run busybox --image=busybox -- sh -c 'while true; do echo \"My life is my message\"; sleep 3; done' oc logs -f busybox My life is my message My life is my message My life is my message", "oc get ns/app -ojson | jq .metadata.uid \"794e1e1a-b9f5-4958-a190-e76a9b53d7bf\"", "apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: cw type: cloudwatch cloudwatch: groupBy: logType region: us-east-2 secret: name: cw-secret pipelines: - name: all-logs inputRefs: - infrastructure - audit - application outputRefs: - cw", "aws --output json logs describe-log-groups | jq .logGroups[].logGroupName \"mycluster-7977k.application\" \"mycluster-7977k.audit\" \"mycluster-7977k.infrastructure\"", "aws --output json logs describe-log-streams --log-group-name mycluster-7977k.application | jq .logStreams[].logStreamName \"kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log\"", "aws --output json logs describe-log-streams --log-group-name mycluster-7977k.audit | jq .logStreams[].logStreamName \"ip-10-0-131-228.us-east-2.compute.internal.k8s-audit.log\" \"ip-10-0-131-228.us-east-2.compute.internal.linux-audit.log\" \"ip-10-0-131-228.us-east-2.compute.internal.openshift-audit.log\"", "aws --output json logs describe-log-streams --log-group-name mycluster-7977k.infrastructure | jq .logStreams[].logStreamName \"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-69f9fd9b58-zqzw5_openshift-oauth-apiserver_oauth-apiserver-453c5c4ee026fe20a6139ba6b1cdd1bed25989c905bf5ac5ca211b7cbb5c3d7b.log\" \"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-ce51532df7d4e4d5f21c4f4be05f6575b93196336be0027067fd7d93d70f66a4.log\" \"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-check-endpoints-82a9096b5931b5c3b1d6dc4b66113252da4a6472c9fff48623baee761911a9ef.log\"", "aws logs get-log-events --log-group-name mycluster-7977k.application --log-stream-name kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log { \"events\": [ { \"timestamp\": 1629422704178, \"message\": 
\"{\\\"docker\\\":{\\\"container_id\\\":\\\"da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76\\\"},\\\"kubernetes\\\":{\\\"container_name\\\":\\\"busybox\\\",\\\"namespace_name\\\":\\\"app\\\",\\\"pod_name\\\":\\\"busybox\\\",\\\"container_image\\\":\\\"docker.io/library/busybox:latest\\\",\\\"container_image_id\\\":\\\"docker.io/library/busybox@sha256:0f354ec1728d9ff32edcd7d1b8bbdfc798277ad36120dc3dc683be44524c8b60\\\",\\\"pod_id\\\":\\\"870be234-90a3-4258-b73f-4f4d6e2777c7\\\",\\\"host\\\":\\\"ip-10-0-216-3.us-east-2.compute.internal\\\",\\\"labels\\\":{\\\"run\\\":\\\"busybox\\\"},\\\"master_url\\\":\\\"https://kubernetes.default.svc\\\",\\\"namespace_id\\\":\\\"794e1e1a-b9f5-4958-a190-e76a9b53d7bf\\\",\\\"namespace_labels\\\":{\\\"kubernetes_io/metadata_name\\\":\\\"app\\\"}},\\\"message\\\":\\\"My life is my message\\\",\\\"level\\\":\\\"unknown\\\",\\\"hostname\\\":\\\"ip-10-0-216-3.us-east-2.compute.internal\\\",\\\"pipeline_metadata\\\":{\\\"collector\\\":{\\\"ipaddr4\\\":\\\"10.0.216.3\\\",\\\"inputname\\\":\\\"fluent-plugin-systemd\\\",\\\"name\\\":\\\"fluentd\\\",\\\"received_at\\\":\\\"2021-08-20T01:25:08.085760+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-20T01:25:04.178986+00:00\\\",\\\"viaq_index_name\\\":\\\"app-write\\\",\\\"viaq_msg_id\\\":\\\"NWRjZmUyMWQtZjgzNC00MjI4LTk3MjMtNTk3NmY3ZjU4NDk1\\\",\\\"log_type\\\":\\\"application\\\",\\\"time\\\":\\\"2021-08-20T01:25:04+00:00\\\"}\", \"ingestionTime\": 1629422744016 },", "cloudwatch: groupBy: logType groupPrefix: demo-group-prefix region: us-east-2", "aws --output json logs describe-log-groups | jq .logGroups[].logGroupName \"demo-group-prefix.application\" \"demo-group-prefix.audit\" \"demo-group-prefix.infrastructure\"", "cloudwatch: groupBy: namespaceName region: us-east-2", "aws --output json logs describe-log-groups | jq .logGroups[].logGroupName \"mycluster-7977k.app\" \"mycluster-7977k.audit\" \"mycluster-7977k.infrastructure\"", "cloudwatch: groupBy: namespaceUUID region: us-east-2", "aws --output json logs describe-log-groups | jq .logGroups[].logGroupName \"mycluster-7977k.794e1e1a-b9f5-4958-a190-e76a9b53d7bf\" // uid of the \"app\" namespace \"mycluster-7977k.audit\" \"mycluster-7977k.infrastructure\"", "apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: loki-insecure 3 type: \"loki\" 4 url: http://loki.insecure.com:3100 5 - name: loki-secure type: \"loki\" url: https://loki.secure.com:3100 6 secret: name: loki-secret 7 pipelines: - name: application-logs 8 inputRefs: 9 - application - audit outputRefs: - loki-secure 10 loki: tenantKey: kubernetes.namespace_name 11 labelKeys: kubernetes.labels.foo 12", "oc create -f <file-name>.yaml", "\"values\":[[\"1630410392689800468\",\"{\\\"kind\\\":\\\"Event\\\",\\\"apiVersion\\\": .... ... ... ... \\\"received_at\\\":\\\"2021-08-31T11:46:32.800278+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-31T11:46:32.799692+00:00\\\",\\\"viaq_index_name\\\":\\\"audit-write\\\",\\\"viaq_msg_id\\\":\\\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\\\",\\\"log_type\\\":\\\"audit\\\"}\"]]}]}", "429 Too Many Requests Ingestion rate limit exceeded (limit: 8388608 bytes/sec) while attempting to ingest '2140' lines totaling '3285284' bytes 429 Too Many Requests Ingestion rate limit exceeded' or '500 Internal Server Error rpc error: code = ResourceExhausted desc = grpc: received message larger than max (5277702 vs. 
4194304)'", ",\\nentry with timestamp 2021-08-18 05:58:55.061936 +0000 UTC ignored, reason: 'entry out of order' for stream: {fluentd_thread=\\\"flush_thread_0\\\", log_type=\\\"audit\\\"},\\nentry with timestamp 2021-08-18 06:01:18.290229 +0000 UTC ignored, reason: 'entry out of order' for stream: {fluentd_thread=\"flush_thread_0\", log_type=\"audit\"}", "auth_enabled: false server: http_listen_port: 3100 grpc_listen_port: 9096 grpc_server_max_recv_msg_size: 8388608 ingester: wal: enabled: true dir: /tmp/wal lifecycler: address: 127.0.0.1 ring: kvstore: store: inmemory replication_factor: 1 final_sleep: 0s chunk_idle_period: 1h # Any chunk not receiving new logs in this time will be flushed chunk_target_size: 8388608 max_chunk_age: 1h # All chunks will be flushed when they hit this age, default is 1h chunk_retain_period: 30s # Must be greater than index read cache TTL if using an index cache (Default index read cache TTL is 5m) max_transfer_retries: 0 # Chunk transfers disabled schema_config: configs: - from: 2020-10-24 store: boltdb-shipper object_store: filesystem schema: v11 index: prefix: index_ period: 24h storage_config: boltdb_shipper: active_index_directory: /tmp/loki/boltdb-shipper-active cache_location: /tmp/loki/boltdb-shipper-cache cache_ttl: 24h # Can be increased for faster performance over longer query periods, uses more disk space shared_store: filesystem filesystem: directory: /tmp/loki/chunks compactor: working_directory: /tmp/loki/boltdb-shipper-compactor shared_store: filesystem limits_config: reject_old_samples: true reject_old_samples_max_age: 12h ingestion_rate_mb: 8 ingestion_burst_size_mb: 16 chunk_store_config: max_look_back_period: 0s table_manager: retention_deletes_enabled: false retention_period: 0s ruler: storage: type: local local: directory: /tmp/loki/rules rule_path: /tmp/loki/rules-temp alertmanager_url: http://localhost:9093 ring: kvstore: store: inmemory enable_api: true", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: fluentd-server-secure 3 type: fluentdForward 4 url: 'tls://fluentdserver.security.example.com:24224' 5 secret: 6 name: fluentd-secret - name: fluentd-server-insecure type: fluentdForward url: 'tcp://fluentdserver.home.example.com:24224' inputs: 7 - name: my-app-logs application: namespaces: - my-project pipelines: - name: forward-to-fluentd-insecure 8 inputRefs: 9 - my-app-logs outputRefs: 10 - fluentd-server-insecure parse: json 11 labels: project: \"my-project\" 12 - name: forward-to-fluentd-secure 13 inputRefs: - application - audit - infrastructure outputRefs: - fluentd-server-secure - default labels: clusterId: \"C1234\"", "oc create -f <file-name>.yaml", "apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: pipelines: - inputRefs: [ myAppLogData ] 3 outputRefs: [ default ] 4 parse: json 5 inputs: 6 - name: myAppLogData application: selector: matchLabels: 7 environment: production app: nginx namespaces: 8 - app1 - app2 outputs: 9 - default", "- inputRefs: [ myAppLogData, myOtherAppLogData ]", "oc create -f <file-name>.yaml", "oc delete pod --selector logging-infra=collector", "{\"level\":\"info\",\"name\":\"fred\",\"home\":\"bedrock\"}", "{\"message\":\"{\\\"level\\\":\\\"info\\\",\\\"name\\\":\\\"fred\\\",\\\"home\\\":\\\"bedrock\\\"\", \"more fields...\"}", "pipelines: - inputRefs: [ application ] outputRefs: myFluentd parse: json", "{\"structured\": { \"level\": 
\"info\", \"name\": \"fred\", \"home\": \"bedrock\" }, \"more fields...\"}", "outputDefaults: elasticsearch: structuredTypeKey: kubernetes.labels.logFormat 1 structuredTypeName: nologformat pipelines: - inputRefs: <application> outputRefs: default parse: json 2", "{ \"structured\":{\"name\":\"fred\",\"home\":\"bedrock\"}, \"kubernetes\":{\"labels\":{\"logFormat\": \"apache\", ...}} }", "{ \"structured\":{\"name\":\"wilma\",\"home\":\"bedrock\"}, \"kubernetes\":{\"labels\":{\"logFormat\": \"google\", ...}} }", "outputDefaults: elasticsearch: structuredTypeKey: openshift.labels.myLabel 1 structuredTypeName: nologformat pipelines: - name: application-logs inputRefs: - application - audit outputRefs: - elasticsearch-secure - default parse: json labels: myLabel: myValue 2", "{ \"structured\":{\"name\":\"fred\",\"home\":\"bedrock\"}, \"openshift\":{\"labels\":{\"myLabel\": \"myValue\", ...}} }", "outputDefaults: elasticsearch: structuredTypeKey: <log record field> structuredTypeName: <name> pipelines: - inputRefs: - application outputRefs: default parse: json", "oc create -f <file-name>.yaml", "oc delete pod --selector logging-infra=collector", "kind: Template apiVersion: template.openshift.io/v1 metadata: name: eventrouter-template annotations: description: \"A pod forwarding kubernetes events to OpenShift Logging stack.\" tags: \"events,EFK,logging,cluster-logging\" objects: - kind: ServiceAccount 1 apiVersion: v1 metadata: name: eventrouter namespace: USD{NAMESPACE} - kind: ClusterRole 2 apiVersion: rbac.authorization.k8s.io/v1 metadata: name: event-reader rules: - apiGroups: [\"\"] resources: [\"events\"] verbs: [\"get\", \"watch\", \"list\"] - kind: ClusterRoleBinding 3 apiVersion: rbac.authorization.k8s.io/v1 metadata: name: event-reader-binding subjects: - kind: ServiceAccount name: eventrouter namespace: USD{NAMESPACE} roleRef: kind: ClusterRole name: event-reader - kind: ConfigMap 4 apiVersion: v1 metadata: name: eventrouter namespace: USD{NAMESPACE} data: config.json: |- { \"sink\": \"stdout\" } - kind: Deployment 5 apiVersion: apps/v1 metadata: name: eventrouter namespace: USD{NAMESPACE} labels: component: \"eventrouter\" logging-infra: \"eventrouter\" provider: \"openshift\" spec: selector: matchLabels: component: \"eventrouter\" logging-infra: \"eventrouter\" provider: \"openshift\" replicas: 1 template: metadata: labels: component: \"eventrouter\" logging-infra: \"eventrouter\" provider: \"openshift\" name: eventrouter spec: serviceAccount: eventrouter containers: - name: kube-eventrouter image: USD{IMAGE} imagePullPolicy: IfNotPresent resources: requests: cpu: USD{CPU} memory: USD{MEMORY} volumeMounts: - name: config-volume mountPath: /etc/eventrouter volumes: - name: config-volume configMap: name: eventrouter parameters: - name: IMAGE 6 displayName: Image value: \"registry.redhat.io/openshift-logging/eventrouter-rhel8:v0.4\" - name: CPU 7 displayName: CPU value: \"100m\" - name: MEMORY 8 displayName: Memory value: \"128Mi\" - name: NAMESPACE displayName: Namespace value: \"openshift-logging\" 9", "oc process -f <templatefile> | oc apply -n openshift-logging -f -", "oc process -f eventrouter.yaml | oc apply -n openshift-logging -f -", "serviceaccount/eventrouter created clusterrole.authorization.openshift.io/event-reader created clusterrolebinding.authorization.openshift.io/event-reader-binding created configmap/eventrouter created deployment.apps/eventrouter created", "oc get pods --selector component=eventrouter -o name -n openshift-logging", 
"pod/cluster-logging-eventrouter-d649f97c8-qvv8r", "oc logs <cluster_logging_eventrouter_pod> -n openshift-logging", "oc logs cluster-logging-eventrouter-d649f97c8-qvv8r -n openshift-logging", "{\"verb\":\"ADDED\",\"event\":{\"metadata\":{\"name\":\"openshift-service-catalog-controller-manager-remover.1632d931e88fcd8f\",\"namespace\":\"openshift-service-catalog-removed\",\"selfLink\":\"/api/v1/namespaces/openshift-service-catalog-removed/events/openshift-service-catalog-controller-manager-remover.1632d931e88fcd8f\",\"uid\":\"787d7b26-3d2f-4017-b0b0-420db4ae62c0\",\"resourceVersion\":\"21399\",\"creationTimestamp\":\"2020-09-08T15:40:26Z\"},\"involvedObject\":{\"kind\":\"Job\",\"namespace\":\"openshift-service-catalog-removed\",\"name\":\"openshift-service-catalog-controller-manager-remover\",\"uid\":\"fac9f479-4ad5-4a57-8adc-cb25d3d9cf8f\",\"apiVersion\":\"batch/v1\",\"resourceVersion\":\"21280\"},\"reason\":\"Completed\",\"message\":\"Job completed\",\"source\":{\"component\":\"job-controller\"},\"firstTimestamp\":\"2020-09-08T15:40:26Z\",\"lastTimestamp\":\"2020-09-08T15:40:26Z\",\"count\":1,\"type\":\"Normal\"}}", "oc get pod -n openshift-logging --selector component=elasticsearch", "NAME READY STATUS RESTARTS AGE elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk 2/2 Running 0 31m elasticsearch-cdm-1pbrl44l-2-5c6d87589f-gx5hk 2/2 Running 0 30m elasticsearch-cdm-1pbrl44l-3-88df5d47-m45jc 2/2 Running 0 29m", "oc exec -n openshift-logging -c elasticsearch elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk -- health", "{ \"cluster_name\" : \"elasticsearch\", \"status\" : \"green\", }", "oc project openshift-logging", "oc get cronjob", "NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE elasticsearch-im-app */15 * * * * False 0 <none> 56s elasticsearch-im-audit */15 * * * * False 0 <none> 56s elasticsearch-im-infra */15 * * * * False 0 <none> 56s", "oc exec -c elasticsearch <any_es_pod_in_the_cluster> -- indices", "Tue Jun 30 14:30:54 UTC 2020 health status index uuid pri rep docs.count docs.deleted store.size pri.store.size green open infra-000008 bnBvUFEXTWi92z3zWAzieQ 3 1 222195 0 289 144 green open infra-000004 rtDSzoqsSl6saisSK7Au1Q 3 1 226717 0 297 148 green open infra-000012 RSf_kUwDSR2xEuKRZMPqZQ 3 1 227623 0 295 147 green open .kibana_7 1SJdCqlZTPWlIAaOUd78yg 1 1 4 0 0 0 green open infra-000010 iXwL3bnqTuGEABbUDa6OVw 3 1 248368 0 317 158 green open infra-000009 YN9EsULWSNaxWeeNvOs0RA 3 1 258799 0 337 168 green open infra-000014 YP0U6R7FQ_GVQVQZ6Yh9Ig 3 1 223788 0 292 146 green open infra-000015 JRBbAbEmSMqK5X40df9HbQ 3 1 224371 0 291 145 green open .orphaned.2020.06.30 n_xQC2dWQzConkvQqei3YA 3 1 9 0 0 0 green open infra-000007 llkkAVSzSOmosWTSAJM_hg 3 1 228584 0 296 148 green open infra-000005 d9BoGQdiQASsS3BBFm2iRA 3 1 227987 0 297 148 green open infra-000003 1-goREK1QUKlQPAIVkWVaQ 3 1 226719 0 295 147 green open .security zeT65uOuRTKZMjg_bbUc1g 1 1 5 0 0 0 green open .kibana-377444158_kubeadmin wvMhDwJkR-mRZQO84K0gUQ 3 1 1 0 0 0 green open infra-000006 5H-KBSXGQKiO7hdapDE23g 3 1 226676 0 295 147 green open infra-000001 eH53BQ-bSxSWR5xYZB6lVg 3 1 341800 0 443 220 green open .kibana-6 RVp7TemSSemGJcsSUmuf3A 1 1 4 0 0 0 green open infra-000011 J7XWBauWSTe0jnzX02fU6A 3 1 226100 0 293 146 green open app-000001 axSAFfONQDmKwatkjPXdtw 3 1 103186 0 126 57 green open infra-000016 m9c1iRLtStWSF1GopaRyCg 3 1 13685 0 19 9 green open infra-000002 Hz6WvINtTvKcQzw-ewmbYg 3 1 228994 0 296 148 green open infra-000013 KR9mMFUpQl-jraYtanyIGw 3 1 228166 0 298 148 green open audit-000001 eERqLdLmQOiQDFES1LBATQ 3 1 
0 0 0 0", "oc get ds collector -o json | grep collector", "\"containerName\": \"collector\"", "oc get kibana kibana -o json", "[ { \"clusterCondition\": { \"kibana-5fdd766ffd-nb2jj\": [ { \"lastTransitionTime\": \"2020-06-30T14:11:07Z\", \"reason\": \"ContainerCreating\", \"status\": \"True\", \"type\": \"\" }, { \"lastTransitionTime\": \"2020-06-30T14:11:07Z\", \"reason\": \"ContainerCreating\", \"status\": \"True\", \"type\": \"\" } ] }, \"deployment\": \"kibana\", \"pods\": { \"failed\": [], \"notReady\": [] \"ready\": [] }, \"replicaSets\": [ \"kibana-5fdd766ffd\" ], \"replicas\": 1 } ]", "oc project openshift-logging", "oc get clusterlogging instance -o yaml", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging . status: 1 collection: logs: fluentdStatus: daemonSet: fluentd 2 nodes: fluentd-2rhqp: ip-10-0-169-13.ec2.internal fluentd-6fgjh: ip-10-0-165-244.ec2.internal fluentd-6l2ff: ip-10-0-128-218.ec2.internal fluentd-54nx5: ip-10-0-139-30.ec2.internal fluentd-flpnn: ip-10-0-147-228.ec2.internal fluentd-n2frh: ip-10-0-157-45.ec2.internal pods: failed: [] notReady: [] ready: - fluentd-2rhqp - fluentd-54nx5 - fluentd-6fgjh - fluentd-6l2ff - fluentd-flpnn - fluentd-n2frh logstore: 3 elasticsearchStatus: - ShardAllocationEnabled: all cluster: activePrimaryShards: 5 activeShards: 5 initializingShards: 0 numDataNodes: 1 numNodes: 1 pendingTasks: 0 relocatingShards: 0 status: green unassignedShards: 0 clusterName: elasticsearch nodeConditions: elasticsearch-cdm-mkkdys93-1: nodeCount: 1 pods: client: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c data: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c master: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c visualization: 4 kibanaStatus: - deployment: kibana pods: failed: [] notReady: [] ready: - kibana-7fb4fd4cc9-f2nls replicaSets: - kibana-7fb4fd4cc9 replicas: 1", "nodes: - conditions: - lastTransitionTime: 2019-03-15T15:57:22Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be not be allocated on this node. reason: Disk Watermark Low status: \"True\" type: NodeStorage deploymentName: example-elasticsearch-clientdatamaster-0-1 upgradeStatus: {}", "nodes: - conditions: - lastTransitionTime: 2019-03-15T16:04:45Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be relocated from this node. reason: Disk Watermark High status: \"True\" type: NodeStorage deploymentName: cluster-logging-operator upgradeStatus: {}", "Elasticsearch Status: Shard Allocation Enabled: shard allocation unknown Cluster: Active Primary Shards: 0 Active Shards: 0 Initializing Shards: 0 Num Data Nodes: 0 Num Nodes: 0 Pending Tasks: 0 Relocating Shards: 0 Status: cluster health unknown Unassigned Shards: 0 Cluster Name: elasticsearch Node Conditions: elasticsearch-cdm-mkkdys93-1: Last Transition Time: 2019-06-26T03:37:32Z Message: 0/5 nodes are available: 5 node(s) didn't match node selector. 
Reason: Unschedulable Status: True Type: Unschedulable elasticsearch-cdm-mkkdys93-2: Node Count: 2 Pods: Client: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready: Data: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready: Master: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready:", "Node Conditions: elasticsearch-cdm-mkkdys93-1: Last Transition Time: 2019-06-26T03:37:32Z Message: pod has unbound immediate PersistentVolumeClaims (repeated 5 times) Reason: Unschedulable Status: True Type: Unschedulable", "Status: Collection: Logs: Fluentd Status: Daemon Set: fluentd Nodes: Pods: Failed: Not Ready: Ready:", "oc project openshift-logging", "oc describe deployment cluster-logging-operator", "Name: cluster-logging-operator . Conditions: Type Status Reason ---- ------ ------ Available True MinimumReplicasAvailable Progressing True NewReplicaSetAvailable . Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ScalingReplicaSet 62m deployment-controller Scaled up replica set cluster-logging-operator-574b8987df to 1----", "oc get replicaset", "NAME DESIRED CURRENT READY AGE cluster-logging-operator-574b8987df 1 1 1 159m elasticsearch-cdm-uhr537yu-1-6869694fb 1 1 1 157m elasticsearch-cdm-uhr537yu-2-857b6d676f 1 1 1 156m elasticsearch-cdm-uhr537yu-3-5b6fdd8cfd 1 1 1 155m kibana-5bd5544f87 1 1 1 157m", "oc describe replicaset cluster-logging-operator-574b8987df", "Name: cluster-logging-operator-574b8987df . Replicas: 1 current / 1 desired Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed . Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 66m replicaset-controller Created pod: cluster-logging-operator-574b8987df-qjhqv----", "oc project openshift-logging", "oc get Elasticsearch", "NAME AGE elasticsearch 5h9m", "oc get Elasticsearch <Elasticsearch-instance> -o yaml", "oc get Elasticsearch elasticsearch -n openshift-logging -o yaml", "status: 1 cluster: 2 activePrimaryShards: 30 activeShards: 60 initializingShards: 0 numDataNodes: 3 numNodes: 3 pendingTasks: 0 relocatingShards: 0 status: green unassignedShards: 0 clusterHealth: \"\" conditions: [] 3 nodes: 4 - deploymentName: elasticsearch-cdm-zjf34ved-1 upgradeStatus: {} - deploymentName: elasticsearch-cdm-zjf34ved-2 upgradeStatus: {} - deploymentName: elasticsearch-cdm-zjf34ved-3 upgradeStatus: {} pods: 5 client: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt data: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt master: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt shardAllocationEnabled: all", "status: nodes: - conditions: - lastTransitionTime: 2019-03-15T15:57:22Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be not be allocated on this node. 
reason: Disk Watermark Low status: \"True\" type: NodeStorage deploymentName: example-elasticsearch-cdm-0-1 upgradeStatus: {}", "status: nodes: - conditions: - lastTransitionTime: 2019-03-15T16:04:45Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be relocated from this node. reason: Disk Watermark High status: \"True\" type: NodeStorage deploymentName: example-elasticsearch-cdm-0-1 upgradeStatus: {}", "status: nodes: - conditions: - lastTransitionTime: 2019-04-10T02:26:24Z message: '0/8 nodes are available: 8 node(s) didn''t match node selector.' reason: Unschedulable status: \"True\" type: Unschedulable", "status: nodes: - conditions: - last Transition Time: 2019-04-10T05:55:51Z message: pod has unbound immediate PersistentVolumeClaims (repeated 5 times) reason: Unschedulable status: True type: Unschedulable", "status: clusterHealth: \"\" conditions: - lastTransitionTime: 2019-04-17T20:01:31Z message: Wrong RedundancyPolicy selected. Choose different RedundancyPolicy or add more nodes with data roles reason: Invalid Settings status: \"True\" type: InvalidRedundancy", "status: clusterHealth: green conditions: - lastTransitionTime: '2019-04-17T20:12:34Z' message: >- Invalid master nodes count. Please ensure there are no more than 3 total nodes with master roles reason: Invalid Settings status: 'True' type: InvalidMasters", "status: clusterHealth: green conditions: - lastTransitionTime: \"2021-05-07T01:05:13Z\" message: Changing the storage structure for a custom resource is not supported reason: StorageStructureChangeIgnored status: 'True' type: StorageStructureChangeIgnored", "oc get pods --selector component=elasticsearch -o name", "pod/elasticsearch-cdm-1godmszn-1-6f8495-vp4lw pod/elasticsearch-cdm-1godmszn-2-5769cf-9ms2n pod/elasticsearch-cdm-1godmszn-3-f66f7d-zqkz7", "oc exec elasticsearch-cdm-4vjor49p-2-6d4d7db474-q2w7z -- indices", "Defaulting container name to elasticsearch. Use 'oc describe pod/elasticsearch-cdm-4vjor49p-2-6d4d7db474-q2w7z -n openshift-logging' to see all of the containers in this pod. green open infra-000002 S4QANnf1QP6NgCegfnrnbQ 3 1 119926 0 157 78 green open audit-000001 8_EQx77iQCSTzFOXtxRqFw 3 1 0 0 0 0 green open .security iDjscH7aSUGhIdq0LheLBQ 1 1 5 0 0 0 green open .kibana_-377444158_kubeadmin yBywZ9GfSrKebz5gWBZbjw 3 1 1 0 0 0 green open infra-000001 z6Dpe__ORgiopEpW6Yl44A 3 1 871000 0 874 436 green open app-000001 hIrazQCeSISewG3c2VIvsQ 3 1 2453 0 3 1 green open .kibana_1 JCitcBMSQxKOvIq6iQW6wg 1 1 0 0 0 0 green open .kibana_-1595131456_user1 gIYFIEGRRe-ka0W3okS-mQ 3 1 1 0 0 0", "oc get pods --selector component=elasticsearch -o name", "pod/elasticsearch-cdm-1godmszn-1-6f8495-vp4lw pod/elasticsearch-cdm-1godmszn-2-5769cf-9ms2n pod/elasticsearch-cdm-1godmszn-3-f66f7d-zqkz7", "oc describe pod elasticsearch-cdm-1godmszn-1-6f8495-vp4lw", ". Status: Running . Containers: elasticsearch: Container ID: cri-o://b7d44e0a9ea486e27f47763f5bb4c39dfd2 State: Running Started: Mon, 08 Jun 2020 10:17:56 -0400 Ready: True Restart Count: 0 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 . proxy: Container ID: cri-o://3f77032abaddbb1652c116278652908dc01860320b8a4e741d06894b2f8f9aa1 State: Running Started: Mon, 08 Jun 2020 10:18:38 -0400 Ready: True Restart Count: 0 . Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True . 
Events: <none>", "oc get deployment --selector component=elasticsearch -o name", "deployment.extensions/elasticsearch-cdm-1gon-1 deployment.extensions/elasticsearch-cdm-1gon-2 deployment.extensions/elasticsearch-cdm-1gon-3", "oc describe deployment elasticsearch-cdm-1gon-1", ". Containers: elasticsearch: Image: registry.redhat.io/openshift-logging/elasticsearch6-rhel8 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 . Conditions: Type Status Reason ---- ------ ------ Progressing Unknown DeploymentPaused Available True MinimumReplicasAvailable . Events: <none>", "oc get replicaSet --selector component=elasticsearch -o name replicaset.extensions/elasticsearch-cdm-1gon-1-6f8495 replicaset.extensions/elasticsearch-cdm-1gon-2-5769cf replicaset.extensions/elasticsearch-cdm-1gon-3-f66f7d", "oc describe replicaSet elasticsearch-cdm-1gon-1-6f8495", ". Containers: elasticsearch: Image: registry.redhat.io/openshift-logging/elasticsearch6-rhel8@sha256:4265742c7cdd85359140e2d7d703e4311b6497eec7676957f455d6908e7b1c25 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 . Events: <none>", "eo_elasticsearch_cr_cluster_management_state{state=\"managed\"} 1 eo_elasticsearch_cr_cluster_management_state{state=\"unmanaged\"} 0", "eo_elasticsearch_cr_restart_total{reason=\"cert_restart\"} 1 eo_elasticsearch_cr_restart_total{reason=\"rolling_restart\"} 1 eo_elasticsearch_cr_restart_total{reason=\"scheduled_restart\"} 3", "Total number of Namespaces. es_index_namespaces_total 5", "es_index_document_count{namespace=\"namespace_1\"} 25 es_index_document_count{namespace=\"namespace_2\"} 10 es_index_document_count{namespace=\"namespace_3\"} 5", "message\": \"Secret \\\"elasticsearch\\\" fields are either missing or empty: [admin-cert, admin-key, logging-es.crt, logging-es.key]\", \"reason\": \"Missing Required Secrets\",", "oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == \"cluster-logging-operator\")].image}')", "tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- health", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/nodes?v", "-n openshift-logging get pods -l component=elasticsearch", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/master?v", "logs <elasticsearch_master_pod_name> -c elasticsearch -n openshift-logging", "logs <elasticsearch_node_name> -c elasticsearch -n openshift-logging", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/recovery?active_only=true", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- health |grep number_of_pending_tasks", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cluster/settings?pretty", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cluster/settings?pretty -X PUT -d '{\"persistent\": {\"cluster.routing.allocation.enable\":\"all\"}}'", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/indices?v", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name>/_cache/clear?pretty", "exec -n openshift-logging -c elasticsearch 
<elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name>/_settings?pretty -X PUT -d '{\"index.allocation.max_retries\":10}'", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_search/scroll/_all -X DELETE", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name>/_settings?pretty -X PUT -d '{\"index.unassigned.node_left.delayed_timeout\":\"10m\"}'", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/indices?v", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_red_index_name> -X DELETE", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_nodes/stats?pretty", "-n openshift-logging get po -o wide", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cluster/health?pretty | grep unassigned_shards", "for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done", "-n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'", "-n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- indices", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name> -X DELETE", "-n openshift-logging get po -o wide", "for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cluster/health?pretty | grep relocating_shards", "-n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'", "-n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- indices", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name> -X DELETE", "for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done", "-n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'", "-n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- indices", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name> -X DELETE", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_all/_settings?pretty -X PUT -d '{\"index.blocks.read_only_allow_delete\": null}'", "for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done", "-n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'", "-n openshift-logging get cl -o 
jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- indices", "exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name> -X DELETE" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html-single/logging/cluster-logging-curator
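The CloudWatch query commands above can be combined into a small verification script. A minimal sketch, assuming the aws CLI and jq are installed and authenticated, and that the cluster infrastructure name (mycluster-7977k in the examples above) was used as the log group prefix:

# Derive the prefix from the cluster, as in the examples above
CLUSTER_PREFIX=$(oc get Infrastructure/cluster -ojson | jq -r .status.infrastructureName)
# List the log groups the forwarder created for this cluster
aws --output json logs describe-log-groups --log-group-name-prefix "${CLUSTER_PREFIX}" | jq -r '.logGroups[].logGroupName'
# Show the three most recently written streams in each expected group
for group in "${CLUSTER_PREFIX}.application" "${CLUSTER_PREFIX}.audit" "${CLUSTER_PREFIX}.infrastructure"; do
  echo "== ${group} =="
  aws --output json logs describe-log-streams --log-group-name "${group}" --order-by LastEventTime --descending --max-items 3 | jq -r '.logStreams[].logStreamName'
done

The group names assume the default groupBy: logType setting; with groupBy: namespaceName or namespaceUUID, substitute the group names shown in the examples above.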
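In the same way, the Elasticsearch health checks from the troubleshooting commands above can be run against every Elasticsearch pod in one pass. A sketch, assuming the default openshift-logging namespace used throughout this section:

# Iterate over all Elasticsearch pods, as in the disk-usage loop above
for pod in $(oc -n openshift-logging get pods -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'); do
  echo "== ${pod} =="
  # Cluster health summary as reported from this pod
  oc exec -n openshift-logging -c elasticsearch "${pod}" -- health
  # Unassigned shards are the usual symptom behind a yellow or red status
  oc exec -n openshift-logging -c elasticsearch "${pod}" -- es_util --query=_cluster/health?pretty | grep unassigned_shards
done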
Operators
Operators OpenShift Container Platform 4.14 Working with Operators in OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/operators/index
Developing and compiling your Red Hat build of Quarkus applications with Apache Maven
Developing and compiling your Red Hat build of Quarkus applications with Apache Maven Red Hat build of Quarkus 3.8 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.8/html/developing_and_compiling_your_red_hat_build_of_quarkus_applications_with_apache_maven/index
Chapter 15. Uninstalling a cluster on GCP
Chapter 15. Uninstalling a cluster on GCP You can remove a cluster that you deployed to Google Cloud Platform (GCP). 15.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. For example, some Google Cloud resources require IAM permissions in shared VPC host projects, or there might be unused health checks that must be deleted . Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. Procedure From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program. 15.2. Deleting Google Cloud Platform resources with the Cloud Credential Operator utility After uninstalling an OpenShift Container Platform cluster that uses short-term credentials managed outside the cluster, you can use the CCO utility ( ccoctl ) to remove the Google Cloud Platform (GCP) resources that ccoctl created during installation. Prerequisites Extract and prepare the ccoctl binary. Uninstall an OpenShift Container Platform cluster on GCP that uses short-term credentials. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --to=<path_to_directory_for_credentials_requests> 2 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Delete the GCP resources that ccoctl created by running the following command: USD ccoctl gcp delete \ --name=<name> \ 1 --project=<gcp_project_id> \ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> 1 <name> matches the name that was originally used to create and tag the cloud resources. 2 <gcp_project_id> is the GCP project ID in which to delete cloud resources. Verification To verify that the resources are deleted, query GCP. For more information, refer to GCP documentation.
[ "./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --to=<path_to_directory_for_credentials_requests> 2", "ccoctl gcp delete --name=<name> \\ 1 --project=<gcp_project_id> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_gcp/uninstalling-cluster-gcp
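The two procedures in this chapter can be combined into a single teardown script. A sketch, assuming the cluster used short-term credentials created with ccoctl and that <name> and <gcp_project_id> match the values used at installation time; the angle-bracket placeholders are the same ones used in the procedures above and must be replaced before running:

#!/bin/bash
set -euo pipefail
INSTALL_DIR=<installation_directory>   # must still contain metadata.json
# 1. Destroy the installer-provisioned cluster
./openshift-install destroy cluster --dir "${INSTALL_DIR}" --log-level info
# 2. Extract the CredentialsRequest CRs for this release, then delete the GCP
#    resources that ccoctl created from them during installation
RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
oc adm release extract --from="${RELEASE_IMAGE}" --credentials-requests --included --to=<path_to_directory_for_credentials_requests>
ccoctl gcp delete --name=<name> --project=<gcp_project_id> --credentials-requests-dir=<path_to_directory_for_credentials_requests>

As in the procedure above, verify in the GCP console afterwards that no resources remain.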
Chapter 11. Deploying ROSA without AWS STS
Chapter 11. Deploying ROSA without AWS STS 11.1. AWS prerequisites for ROSA Red Hat OpenShift Service on AWS (ROSA) provides a model that allows Red Hat to deploy clusters into a customer's existing Amazon Web Service (AWS) account. You must ensure that the prerequisites are met before installing ROSA. This requirements document does not apply to AWS Security Token Service (STS). If you are using STS, see the STS-specific requirements . Tip AWS Security Token Service (STS) is the recommended credential mode for installing and interacting with clusters on Red Hat OpenShift Service on AWS because it provides enhanced security. 11.1.1. Customer Requirements Red Hat OpenShift Service on AWS (ROSA) clusters must meet several prerequisites before they can be deployed. Note In order to create the cluster, the user must be logged in as an IAM user and not an assumed role or STS user. 11.1.1.1. Account The customer ensures that the AWS limits are sufficient to support Red Hat OpenShift Service on AWS provisioned within the customer's AWS account. The customer's AWS account should be in the customer's AWS Organizations with the applicable service control policy (SCP) applied. Note It is not a requirement that the customer's account be within the AWS Organizations or for the SCP to be applied, however Red Hat must be able to perform all the actions listed in the SCP without restriction. The customer's AWS account should not be transferable to Red Hat. The customer may not impose AWS usage restrictions on Red Hat activities. Imposing restrictions will severely hinder Red Hat's ability to respond to incidents. The customer may deploy native AWS services within the same AWS account. Note Customers are encouraged, but not mandated, to deploy resources in a Virtual Private Cloud (VPC) separate from the VPC hosting Red Hat OpenShift Service on AWS and other Red Hat supported services. 11.1.1.2. Access requirements To appropriately manage the Red Hat OpenShift Service on AWS service, Red Hat must have the AdministratorAccess policy applied to the administrator role at all times. This requirement does not apply if you are using AWS Security Token Service (STS). Note This policy only provides Red Hat with permissions and capabilities to change resources in the customer-provided AWS account. Red Hat must have AWS console access to the customer-provided AWS account. This access is protected and managed by Red Hat. The customer must not utilize the AWS account to elevate their permissions within the Red Hat OpenShift Service on AWS cluster. Actions available in the Red Hat OpenShift Service on AWS (ROSA) CLI, rosa , or OpenShift Cluster Manager console must not be directly performed in the customer's AWS account. 11.1.1.3. Support requirements Red Hat recommends that the customer have at least Business Support from AWS. Red Hat has authority from the customer to request AWS support on their behalf. Red Hat has authority from the customer to request AWS resource limit increases on the customer's account. Red Hat manages the restrictions, limitations, expectations, and defaults for all Red Hat OpenShift Service on AWS clusters in the same manner, unless otherwise specified in this requirements section. 11.1.1.4. Security requirements Volume snapshots will remain within the customer's AWS account and customer-specified region. Red Hat must have ingress access to EC2 hosts and the API server from allow-listed IP addresses. 
Red Hat must have egress allowed to forward system and audit logs to a Red Hat managed central logging stack. 11.1.2. Required customer procedure Complete these steps before deploying Red Hat OpenShift Service on AWS (ROSA). Procedure If you, as the customer, are utilizing AWS Organizations, then you must use an AWS account within your organization or create a new one . To ensure that Red Hat can perform necessary actions, you must either create a service control policy (SCP) or ensure that none is applied to the AWS account. Attach the SCP to the AWS account. Follow the ROSA procedures for setting up the environment. 11.1.2.1. Minimum set of effective permissions for service control policies (SCP) Service control policies (SCP) are a type of organization policy that manages permissions within your organization. SCPs ensure that accounts within your organization stay within your defined access control guidelines. These policies are maintained in AWS Organizations and control the services that are available within the attached AWS accounts. SCP management is the responsibility of the customer. Note The minimum SCP requirement does not apply when using AWS Security Token Service (STS). For more information about STS, see AWS prerequisites for ROSA with STS . Verify that your service control policy (SCP) does not restrict any of these required permissions. Service Actions Effect Required Amazon EC2 All Allow Amazon EC2 Auto Scaling All Allow Amazon S3 All Allow Identity And Access Management All Allow Elastic Load Balancing All Allow Elastic Load Balancing V2 All Allow Amazon CloudWatch All Allow Amazon CloudWatch Events All Allow Amazon CloudWatch Logs All Allow AWS EC2 Instance Connect SendSerialConsoleSSHPublicKey Allow AWS Support All Allow AWS Key Management Service All Allow AWS Security Token Service All Allow AWS Tiro CreateQuery GetQueryAnswer GetQueryExplanation Allow AWS Marketplace Subscribe Unsubscribe View Subscriptions Allow AWS Resource Tagging All Allow AWS Route53 DNS All Allow AWS Service Quotas ListServices GetRequestedServiceQuotaChange GetServiceQuota RequestServiceQuotaIncrease ListServiceQuotas Allow Optional AWS Billing ViewAccount Viewbilling ViewUsage Allow AWS Cost and Usage Report All Allow AWS Cost Explorer Services All Allow Additional resources Service control policies SCP effects on permissions 11.1.3. Red Hat managed IAM references for AWS Red Hat is responsible for creating and managing the following Amazon Web Services (AWS) resources: IAM policies, IAM users, and IAM roles. 11.1.3.1. IAM Policies Note IAM policies are subject to modification as the capabilities of Red Hat OpenShift Service on AWS change. The AdministratorAccess policy is used by the administration role. This policy provides Red Hat the access necessary to administer the Red Hat OpenShift Service on AWS (ROSA) cluster in the customer's AWS account. 11.1.3.2. IAM users The osdManagedAdmin user is created immediately after installing ROSA into the customer's AWS account. 11.1.4. Provisioned AWS Infrastructure This is an overview of the provisioned Amazon Web Services (AWS) components on a deployed Red Hat OpenShift Service on AWS (ROSA) cluster. 11.1.4.1. EC2 instances AWS EC2 instances are required to deploy the control plane and data plane functions for Red Hat OpenShift Service on AWS. Instance types can vary for control plane and infrastructure nodes, depending on the worker node count. 
At a minimum, the following EC2 instances are deployed: Three m5.2xlarge control plane nodes Two r5.xlarge infrastructure nodes Two m5.xlarge worker nodes The instance type shown for worker nodes is the default value, but you can customize the instance type for worker nodes according to the needs of your workload. For further guidance on worker node counts, see the information about initial planning considerations in the "Limits and scalability" topic listed in the "Additional resources" section of this page. 11.1.4.2. Amazon Elastic Block Store storage Amazon Elastic Block Store (Amazon EBS) block storage is used for both local node storage and persistent volume storage. The following values are the default size of the local, ephemeral storage provisioned for each EC2 instance. Volume requirements for each EC2 instance: Control Plane Volume Size: 350GB Type: gp3 Input/Output Operations Per Second: 1000 Infrastructure Volume Size: 300GB Type: gp3 Input/Output Operations Per Second: 900 Worker Volume Default size: 300GB Minimum size: 128GB Minimum size: 75GB Type: gp3 Input/Output Operations Per Second: 900 Note Clusters deployed before the release of OpenShift Container Platform 4.11 use gp2 type storage by default. 11.1.4.3. Elastic Load Balancing Each cluster can use up to two Classic Load Balancers for application router and up to two Network Load Balancers for API. For more information, see the ELB documentation for AWS . 11.1.4.4. S3 storage The image registry is backed by AWS S3 storage. Resources Pruning of resources is performed regularly to optimize S3 usage and cluster performance. Note Two buckets are required with a typical size of 2TB each. 11.1.4.5. VPC Configure your VPC according to the following requirements: Subnets : Two subnets for a cluster with a single availability zone, or six subnets for a cluster with multiple availability zones. Red Hat strongly recommends using unique subnets for each cluster. Sharing subnets between multiple clusters is not recommended. Note A public subnet connects directly to the internet through an internet gateway. A private subnet connects to the internet through a network address translation (NAT) gateway. Route tables : One route table per private subnet, and one additional table per cluster. Internet gateways : One Internet Gateway per cluster. NAT gateways : One NAT Gateway per public subnet. Figure 11.1. Sample VPC Architecture 11.1.4.6. Security groups AWS security groups provide security at the protocol and port access level; they are associated with EC2 instances and Elastic Load Balancing (ELB) load balancers. Each security group contains a set of rules that filter traffic coming in and out of one or more EC2 instances. Ensure that the ports required for cluster installation and operation are open on your network and configured to allow access between hosts. The requirements for the default security groups are listed in Required ports for default security groups . Table 11.1. Required ports for default security groups Group Type IP Protocol Port range MasterSecurityGroup AWS::EC2::SecurityGroup icmp 0 tcp 22 tcp 6443 tcp 22623 WorkerSecurityGroup AWS::EC2::SecurityGroup icmp 0 tcp 22 BootstrapSecurityGroup AWS::EC2::SecurityGroup tcp 22 tcp 19531 11.1.4.6.1. Additional custom security groups When you create a cluster using an existing non-managed VPC, you can add additional custom security groups during cluster creation. 
Custom security groups are subject to the following limitations: You must create the custom security groups in AWS before you create the cluster. For more information, see Amazon EC2 security groups for Linux instances . You must associate the custom security groups with the VPC that the cluster will be installed into. Your custom security groups cannot be associated with another VPC. You might need to request additional quota for your VPC if you are adding additional custom security groups. For information on AWS quota requirements for ROSA, see Required AWS service quotas in Prepare your environment . For information on requesting an AWS quota increase, see Requesting a quota increase . 11.1.5. Networking prerequisites 11.1.5.1. Minimum bandwidth During cluster deployment, Red Hat OpenShift Service on AWS requires a minimum bandwidth of 120 Mbps between cluster resources and public internet resources. When network connectivity is slower than 120 Mbps (for example, when connecting through a proxy), the cluster installation process times out and deployment fails. After deployment, network requirements are determined by your workload. However, a minimum bandwidth of 120 Mbps helps to ensure timely cluster and operator upgrades. 11.1.5.2. AWS firewall prerequisites If you are using a firewall to control egress traffic from Red Hat OpenShift Service on AWS, you must configure your firewall to grant access to the following domain and port combinations. Red Hat OpenShift Service on AWS requires this access to provide a fully managed OpenShift service. Important Only ROSA clusters deployed with PrivateLink can use a firewall to control egress traffic. Prerequisites You have configured an Amazon S3 gateway endpoint in your AWS Virtual Private Cloud (VPC). This endpoint is required to complete requests from the cluster to the Amazon S3 service. Procedure Allowlist the following URLs that are used to install and download packages and tools: Domain Port Function registry.redhat.io 443 Provides core container images. quay.io 443 Provides core container images. cdn01.quay.io 443 Provides core container images. cdn02.quay.io 443 Provides core container images. cdn03.quay.io 443 Provides core container images. cdn04.quay.io 443 Provides core container images. cdn05.quay.io 443 Provides core container images. cdn06.quay.io 443 Provides core container images. sso.redhat.com 443 Required. The https://console.redhat.com/openshift site uses authentication from sso.redhat.com to download the pull secret and use Red Hat SaaS solutions to facilitate monitoring of your subscriptions, cluster inventory, chargeback reporting, and so on. quay-registry.s3.amazonaws.com 443 Provides core container images. quayio-production-s3.s3.amazonaws.com 443 Provides core container images. registry.access.redhat.com 443 Hosts all the container images that are stored on the Red Hat Ecosystem Catalog. Additionally, the registry provides access to the odo CLI tool that helps developers build on OpenShift and Kubernetes. access.redhat.com 443 Required. Hosts a signature store that a container client requires for verifying images when pulling them from registry.access.redhat.com . registry.connect.redhat.com 443 Required for all third-party images and certified Operators. console.redhat.com 443 Required. Allows interactions between the cluster and OpenShift Cluster Manager to enable functionality, such as scheduling upgrades. sso.redhat.com 443 The https://console.redhat.com/openshift site uses authentication from sso.redhat.com .
pull.q1w2.quay.rhcloud.com 443 Provides core container images as a fallback when quay.io is not available. catalog.redhat.com 443 The registry.access.redhat.com and https://registry.redhat.io sites redirect through catalog.redhat.com . oidc.op1.openshiftapps.com 443 Used by ROSA for STS implementation with managed OIDC configuration. Allowlist the following telemetry URLs: Domain Port Function cert-api.access.redhat.com 443 Required for telemetry. api.access.redhat.com 443 Required for telemetry. infogw.api.openshift.com 443 Required for telemetry. console.redhat.com 443 Required for telemetry and Red Hat Insights. observatorium-mst.api.openshift.com 443 Required for managed OpenShift-specific telemetry. observatorium.api.openshift.com 443 Required for managed OpenShift-specific telemetry. Managed clusters require enabling telemetry to allow Red Hat to react more quickly to problems, better support the customers, and better understand how product upgrades impact clusters. For more information about how remote health monitoring data is used by Red Hat, see About remote health monitoring in the Additional resources section. Allowlist the following Amazon Web Services (AWS) API URLs: Domain Port Function .amazonaws.com 443 Required to access AWS services and resources. Alternatively, if you choose not to use a wildcard for Amazon Web Services (AWS) APIs, you must allowlist the following URLs: Domain Port Function ec2.amazonaws.com 443 Used to install and manage clusters in an AWS environment. events.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. iam.amazonaws.com 443 Used to install and manage clusters in an AWS environment. route53.amazonaws.com 443 Used to install and manage clusters in an AWS environment. sts.amazonaws.com 443 Used to install and manage clusters in an AWS environment, for clusters configured to use the global endpoint for AWS STS. sts.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment, for clusters configured to use regionalized endpoints for AWS STS. See AWS STS regionalized endpoints for more information. tagging.us-east-1.amazonaws.com 443 Used to install and manage clusters in an AWS environment. This endpoint is always us-east-1, regardless of the region the cluster is deployed in. ec2.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. elasticloadbalancing.<aws_region>.amazonaws.com 443 Used to install and manage clusters in an AWS environment. tagging.<aws_region>.amazonaws.com 443 Allows the assignment of metadata about AWS resources in the form of tags. Allowlist the following OpenShift URLs: Domain Port Function mirror.openshift.com 443 Used to access mirrored installation content and images. This site is also a source of release image signatures. api.openshift.com 443 Used to check if updates are available for the cluster. Allowlist the following site reliability engineering (SRE) and management URLs: Domain Port Function api.pagerduty.com 443 This alerting service is used by the in-cluster alertmanager to send alerts notifying Red Hat SRE of an event to take action on. events.pagerduty.com 443 This alerting service is used by the in-cluster alertmanager to send alerts notifying Red Hat SRE of an event to take action on. api.deadmanssnitch.com 443 Alerting service used by Red Hat OpenShift Service on AWS to send periodic pings that indicate whether the cluster is available and running.
nosnch.in 443 Alerting service used by Red Hat OpenShift Service on AWS to send periodic pings that indicate whether the cluster is available and running. http-inputs-osdsecuritylogs.splunkcloud.com 443 Required. Used by the splunk-forwarder-operator as a logging forwarding endpoint to be used by Red Hat SRE for log-based alerting. sftp.access.redhat.com (Recommended) 22 The SFTP server used by must-gather-operator to upload diagnostic logs to help troubleshoot issues with the cluster. Additional resources About remote health monitoring Security groups Required AWS service quotas 11.1.6. Next steps Review the required AWS service quotas 11.1.7. Additional resources Limits and scalability SRE access to all Red Hat OpenShift Service on AWS clusters Understanding the ROSA deployment workflow 11.2. Understanding the ROSA deployment workflow Before you create a Red Hat OpenShift Service on AWS (ROSA) cluster, you must complete the AWS prerequisites, verify that the required AWS service quotas are available, and set up your environment. This document provides an overview of the ROSA workflow stages and refers to detailed resources for each stage. Tip AWS Security Token Service (STS) is the recommended credential mode for installing and interacting with clusters on Red Hat OpenShift Service on AWS because it provides enhanced security. 11.2.1. Overview of the ROSA deployment workflow You can follow the workflow stages outlined in this section to set up and access a Red Hat OpenShift Service on AWS (ROSA) cluster. Perform the AWS prerequisites . To deploy a ROSA cluster, your AWS account must meet the prerequisite requirements. Review the required AWS service quotas . To prepare for your cluster deployment, review the AWS service quotas that are required to run a ROSA cluster. Configure your AWS account . Before you create a ROSA cluster, you must enable ROSA in your AWS account, install and configure the AWS CLI ( aws ) tool, and verify the AWS CLI tool configuration. Install the ROSA and OpenShift CLI tools and verify the AWS service quotas . Install and configure the ROSA CLI ( rosa ) and the OpenShift CLI ( oc ). You can verify if the required AWS resource quotas are available by using the ROSA CLI. Create a ROSA cluster or Create a ROSA cluster using AWS PrivateLink . Use the ROSA CLI ( rosa ) to create a cluster. You can optionally create a ROSA cluster with AWS PrivateLink. Access a cluster . You can configure an identity provider and grant cluster administrator privileges to the identity provider users as required. You can also access a newly deployed cluster quickly by configuring a cluster-admin user. Revoke access to a ROSA cluster for a user . You can revoke access to a ROSA cluster from a user by using the ROSA CLI or the web console. Delete a ROSA cluster . You can delete a ROSA cluster by using the ROSA CLI ( rosa ). 11.2.2. Additional resources For information about using the ROSA deployment workflow to create a cluster that uses the AWS Security Token Service (STS), see Understanding the ROSA with STS deployment workflow . Configuring identity providers Deleting a cluster Deleting access to a cluster Command quick reference for creating clusters and users 11.3. Required AWS service quotas Review this list of the Amazon Web Services (AWS) service quotas that are required to run a Red Hat OpenShift Service on AWS cluster.
Tip AWS Security Token Service (STS) is the recommended credential mode for installing and interacting with clusters on Red Hat OpenShift Service on AWS because it provides enhanced security. 11.3.1. Required AWS service quotas The table below describes the AWS service quotas and levels required to create and run one Red Hat OpenShift Service on AWS cluster. Although most default values are suitable for most workloads, you might need to request additional quota for the following cases: ROSA clusters require a minimum AWS EC2 service quota of 100 vCPUs to provide for cluster creation, availability, and upgrades. The default maximum value for vCPUs assigned to Running On-Demand Standard Amazon EC2 instances is 5 . Therefore if you have not created a ROSA cluster using the same AWS account previously, you must request additional EC2 quota for Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances . Some optional cluster configuration features, such as custom security groups, might require you to request additional quota. For example, because ROSA associates 1 security group with network interfaces in worker machine pools by default, and the default quota for Security groups per network interface is 5 , if you want to add 5 custom security groups, you must request additional quota, because this would bring the total number of security groups on worker network interfaces to 6. Note The AWS SDK allows ROSA to check quotas, but the AWS SDK calculation does not account for your existing usage. Therefore, it is possible that the quota check can pass in the AWS SDK yet the cluster creation can fail. To fix this issue, increase your quota. If you need to modify or increase a specific quota, see Amazon's documentation on requesting a quota increase . Large quota requests are submitted to Amazon Support for review, and take some time to be approved. If your quota request is urgent, contact AWS Support. Table 11.2. ROSA-required service quota Quota name Service code Quota code AWS default Minimum required Description Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances ec2 L-1216C47A 5 100 Maximum number of vCPUs assigned to the Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances. The default value of 5 vCPUs is not sufficient to create ROSA clusters. Storage for General Purpose SSD (gp2) volume storage in TiB ebs L-D18FCD1D 50 300 The maximum aggregated amount of storage, in TiB, that can be provisioned across General Purpose SSD (gp2) volumes in this Region. Storage for General Purpose SSD (gp3) volume storage in TiB ebs L-7A658B76 50 300 The maximum aggregated amount of storage, in TiB, that can be provisioned across General Purpose SSD (gp3) volumes in this Region. 300 TiB of storage is the required minimum for optimal performance. Storage for Provisioned IOPS SSD (io1) volumes in TiB ebs L-FD252861 50 300 The maximum aggregated amount of storage, in TiB, that can be provisioned across Provisioned IOPS SSD (io1) volumes in this Region. 300 TiB of storage is the required minimum for optimal performance. Table 11.3. General AWS service quotas Quota name Service code Quota code AWS default Minimum required Description EC2-VPC Elastic IPs ec2 L-0263D0A3 5 5 The maximum number of Elastic IP addresses that you can allocate for EC2-VPC in this Region. VPCs per Region vpc L-F678F1CE 5 5 The maximum number of VPCs per Region. This quota is directly tied to the maximum number of internet gateways per Region. 
Internet gateways per Region vpc L-A4707A72 5 5 The maximum number of internet gateways per Region. This quota is directly tied to the maximum number of VPCs per Region. To increase this quota, increase the number of VPCs per Region. Network interfaces per Region vpc L-DF5E4CA3 5,000 5,000 The maximum number of network interfaces per Region. Security groups per network interface vpc L-2AFB9258 5 5 The maximum number of security groups per network interface. This quota, multiplied by the quota for rules per security group, cannot exceed 1000. Snapshots per Region ebs L-309BACF6 10,000 10,000 The maximum number of snapshots per Region. IOPS for Provisioned IOPS SSD (io1) volumes ebs L-B3A130E6 300,000 300,000 The maximum aggregated number of IOPS that can be provisioned across Provisioned IOPS SSD (io1) volumes in this Region. Application Load Balancers per Region elasticloadbalancing L-53DA6B97 50 50 The maximum number of Application Load Balancers that can exist in each region. Classic Load Balancers per Region elasticloadbalancing L-E9E9831D 20 20 The maximum number of Classic Load Balancers that can exist in each region. 11.3.1.1. Additional resources How can I request, view, and manage service quota increase requests using AWS CLI commands? ROSA service quotas Request a quota increase IAM and AWS STS quotas (AWS documentation) 11.3.2. Next steps Configure your AWS account 11.3.3. Additional resources Understanding the ROSA deployment workflow 11.4. Configuring your AWS account After you complete the AWS prerequisites, configure your AWS account and enable the Red Hat OpenShift Service on AWS (ROSA) service. Tip AWS Security Token Service (STS) is the recommended credential mode for installing and interacting with clusters on Red Hat OpenShift Service on AWS because it provides enhanced security. 11.4.1. Configuring your AWS account To configure your AWS account to use the ROSA service, complete the following steps. Prerequisites Review and complete the deployment prerequisites and policies. Create a Red Hat account , if you do not already have one. Then, check your email for a verification link. You will need these credentials to install ROSA. Procedure Log in to the Amazon Web Services (AWS) account that you want to use. A dedicated AWS account is recommended to run production clusters. If you are using AWS Organizations, you can use an AWS account within your organization or create a new one . If you are using AWS Organizations and you need to have a service control policy (SCP) applied to the AWS account you plan to use, see AWS Prerequisites for details on the minimum required SCP. As part of the cluster creation process, rosa establishes an osdCcsAdmin IAM user. This user uses the IAM credentials you provide when configuring the AWS CLI. Note This user has Programmatic access enabled and the AdministratorAccess policy attached to it. Enable the ROSA service in the AWS Console. Sign in to your AWS account . To enable ROSA, go to the ROSA service and select Enable OpenShift . Install and configure the AWS CLI. Follow the AWS command-line interface documentation to install and configure the AWS CLI for your operating system. Specify the correct aws_access_key_id and aws_secret_access_key in the .aws/credentials file. See AWS Configuration basics in the AWS documentation. Set a default AWS region. Note It is recommended to set the default AWS region by using the environment variable.
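For example, a minimal snippet (the region value is illustrative, not a recommendation):

# Set the default region for this shell session; the rosa and aws CLIs
# both honor AWS_DEFAULT_REGION when no --region flag is passed
export AWS_DEFAULT_REGION=us-east-1

Add the export line to your shell profile to make the setting persistent.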
The ROSA service evaluates regions in the following priority order: The region specified when running the rosa command with the --region flag. The region set in the AWS_DEFAULT_REGION environment variable. See Environment variables to configure the AWS CLI in the AWS documentation. The default region set in your AWS configuration file. See Quick configuration with aws configure in the AWS documentation. Optional: Configure your AWS CLI settings and credentials by using an AWS named profile. rosa evaluates AWS named profiles in the following priority order: The profile specified when running the rosa command with the --profile flag. The profile set in the AWS_PROFILE environment variable. See Named profiles in the AWS documentation. Verify that the AWS CLI is installed and configured correctly by running the following command to query the AWS API: USD aws sts get-caller-identity --output text Example output <aws_account_id> arn:aws:iam::<aws_account_id>:user/<username> <aws_user_id> After completing these steps, install ROSA. 11.4.2. Next steps Installing the ROSA CLI 11.4.3. Additional resources AWS prerequisites Required AWS service quotas and requesting increases Understanding the ROSA deployment workflow 11.5. Installing the Red Hat OpenShift Service on AWS (ROSA) CLI, rosa After you configure your AWS account, install and configure the Red Hat OpenShift Service on AWS (ROSA) CLI, rosa . Tip AWS Security Token Service (STS) is the recommended credential mode for installing and interacting with clusters on Red Hat OpenShift Service on AWS because it provides enhanced security. 11.5.1. Installing and configuring the ROSA CLI Install and configure the Red Hat OpenShift Service on AWS (ROSA) CLI, rosa . You can also install the OpenShift CLI ( oc ) and verify that the required AWS resource quotas are available by using the ROSA CLI ( rosa ). Prerequisites Review and complete the AWS prerequisites and ROSA policies. Create a Red Hat account , if you do not already have one. Then, check your email for a verification link. You will need these credentials to install ROSA. Configure your AWS account and enable the ROSA service in your AWS account. Procedure Install rosa , the Red Hat OpenShift Service on AWS command-line interface (CLI). Download the latest release of the ROSA CLI for your operating system. Optional: Rename the executable file you downloaded to rosa . This documentation uses rosa to refer to the executable file. Optional: Add rosa to your PATH. Example USD mv rosa /usr/local/bin/rosa Enter the following command to verify your installation: USD rosa Example output Command line tool for Red Hat OpenShift Service on AWS.
For further documentation visit https://access.redhat.com/documentation/en-us/red_hat_openshift_service_on_aws Usage: rosa [command] Available Commands: completion Generates completion scripts create Create a resource from stdin delete Delete a specific resource describe Show details of a specific resource download Download necessary tools for using your cluster edit Edit a specific resource grant Grant role to a specific resource help Help about any command init Applies templates to support Red Hat OpenShift Service on AWS install Installs a resource into a cluster link Link a ocm/user role from stdin list List all resources of a specific type login Log in to your Red Hat account logout Log out logs Show installation or uninstallation logs for a cluster revoke Revoke role from a specific resource uninstall Uninstalls a resource from a cluster unlink UnLink a ocm/user role from stdin upgrade Upgrade a resource verify Verify resources are configured correctly for cluster install version Prints the version of the tool whoami Displays user account information Flags: --color string Surround certain characters with escape sequences to display them in color on the terminal. Allowed options are [auto never always] (default "auto") --debug Enable debug mode. -h, --help help for rosa Use "rosa [command] --help" for more information about a command. Optional: Generate the command completion scripts for the ROSA CLI. The following example generates the Bash completion scripts for a Linux machine: USD rosa completion bash | sudo tee /etc/bash_completion.d/rosa Optional: Enable command completion for the ROSA CLI from your existing terminal. The following example enables Bash completion for rosa in an existing terminal on a Linux machine: USD source /etc/bash_completion.d/rosa Log in to your Red Hat account with rosa . Enter the following command. USD rosa login Replace <my-offline-access-token> with your token. Example output To login to your Red Hat account, get an offline access token at https://console.redhat.com/openshift/token/rosa ? Copy the token and paste it here: <my-offline-access-token> Example output continued I: Logged in as 'rh-rosa-user' on 'https://api.openshift.com' Enter the following command to verify that your AWS account has the necessary permissions. USD rosa verify permissions Example output I: Validating SCP policies... I: AWS SCP policies ok Note This command verifies permissions only for ROSA clusters that do not use the AWS Security Token Service (STS). Verify that your AWS account has the necessary quota to deploy a Red Hat OpenShift Service on AWS cluster. USD rosa verify quota --region=us-west-2 Example output I: Validating AWS quota... I: AWS quota ok Note Sometimes your AWS quota varies by region. If you receive any errors, try a different region. If you need to increase your quota, go to your AWS console , and request a quota increase for the service that failed. After both the permissions and quota checks pass, proceed to the next step. Prepare your AWS account for cluster deployment: Run the following command to verify that your Red Hat and AWS credentials are set up correctly. Check that your AWS Account ID, Default Region, and ARN match what you expect. You can safely ignore the rows beginning with OCM for now.
USD rosa whoami Example output AWS Account ID: 000000000000 AWS Default Region: us-east-2 AWS ARN: arn:aws:iam::000000000000:user/hello OCM API: https://api.openshift.com OCM Account ID: 1DzGIdIhqEWyt8UUXQhSoWaaaaa OCM Account Name: Your Name OCM Account Username: [email protected] OCM Account Email: [email protected] OCM Organization ID: 1HopHfA2hcmhup5gCr2uH5aaaaa OCM Organization Name: Red Hat OCM Organization External ID: 0000000 Initialize your AWS account. This step runs a CloudFormation template that prepares your AWS account for cluster deployment and management. This step typically takes 1-2 minutes to complete. USD rosa init Example output I: Logged in as 'rh-rosa-user' on 'https://api.openshift.com' I: Validating AWS credentials... I: AWS credentials are valid! I: Validating SCP policies... I: AWS SCP policies ok I: Validating AWS quota... I: AWS quota ok I: Ensuring cluster administrator user 'osdCcsAdmin'... I: Admin user 'osdCcsAdmin' created successfully! I: Verifying whether OpenShift command-line tool is available... E: OpenShift command-line tool is not installed. Run 'rosa download oc' to download the latest version, then add it to your PATH. Install the OpenShift CLI ( oc ) from the ROSA CLI. Enter this command to download the latest version of the oc CLI: USD rosa download oc After downloading the oc CLI, unzip it and add it to your PATH. Enter this command to verify that the oc CLI is installed correctly: USD rosa verify oc After installing ROSA, you are ready to create a cluster. 11.5.2. Next steps Create a ROSA cluster or Create an AWS PrivateLink cluster on ROSA . 11.5.3. Additional resources AWS prerequisites Required AWS service quotas and requesting increases Understanding the ROSA deployment workflow 11.6. Creating a ROSA cluster without AWS STS After you set up your environment and install Red Hat OpenShift Service on AWS (ROSA), create a cluster. This document describes how to set up a ROSA cluster. Alternatively, you can create a ROSA cluster with AWS PrivateLink. Tip AWS Security Token Service (STS) is the recommended credential mode for installing and interacting with clusters on Red Hat OpenShift Service on AWS because it provides enhanced security. 11.6.1. Creating your cluster You can create a Red Hat OpenShift Service on AWS (ROSA) cluster using the ROSA CLI ( rosa ). Prerequisites You have installed Red Hat OpenShift Service on AWS. Note AWS Shared VPCs are not currently supported for ROSA installs. Procedure You can create a cluster using the default settings or by specifying custom settings using the interactive mode. To view other options when creating a cluster, enter the rosa create cluster --help command. Creating a cluster can take up to 40 minutes. Note Multiple availability zones (AZs) are recommended for production workloads. The default is a single availability zone. Use --help for an example of how to set this option manually or use interactive mode to be prompted for this setting. To create your cluster with the default cluster settings: USD rosa create cluster --cluster-name=<cluster_name> Example output I: Creating cluster with identifier '1de87g7c30g75qechgh7l5b2bha6r04e' and name 'rh-rosa-test-cluster1' I: To view list of clusters and their status, run `rosa list clusters` I: Cluster 'rh-rosa-test-cluster1' has been created. I: Once the cluster is 'Ready' you will need to add an Identity Provider and define the list of cluster administrators. See `rosa create idp --help` and `rosa create user --help` for more information.
I: To determine when your cluster is Ready, run `rosa describe cluster rh-rosa-test-cluster1`. To create a cluster using interactive prompts: USD rosa create cluster --interactive To configure your networking IP ranges, you can use the following default ranges. For more information when using manual mode, use the rosa create cluster --help | grep cidr command. In interactive mode, you are prompted for the settings. Node CIDR: 10.0.0.0/16 Service CIDR: 172.30.0.0/16 Pod CIDR: 10.128.0.0/14 Enter the following command to check the status of your cluster. During cluster creation, the State field in the output transitions from pending to installing , and finally to ready . USD rosa describe cluster --cluster=<cluster_name> Example output Name: rh-rosa-test-cluster1 OpenShift Version: 4.6.8 DNS: *.example.com ID: uniqueidnumber External ID: uniqueexternalidnumber AWS Account: 123456789101 API URL: https://api.rh-rosa-test-cluster1.example.org:6443 Console URL: https://console-openshift-console.apps.rh-rosa-test-cluster1.example.org Nodes: Master: 3, Infra: 2, Compute: 2 Region: us-west-2 Multi-AZ: false State: ready Channel Group: stable Private: No Created: Jan 15 2021 16:30:55 UTC Details Page: https://console.redhat.com/examplename/details/idnumber Note If installation fails or the State field does not change to ready after 40 minutes, check the installation troubleshooting documentation for more details. Track the progress of the cluster creation by watching the OpenShift installer logs: USD rosa logs install --cluster=<cluster_name> --watch 11.6.2. Next steps Configure identity providers 11.6.3. Additional resources Understanding the ROSA deployment workflow Deleting a ROSA cluster ROSA architecture models 11.7. Configuring a private cluster A Red Hat OpenShift Service on AWS cluster can be made private so that internal applications can be hosted inside a corporate network. In addition, private clusters can be configured to have only internal API endpoints for increased security. Privacy settings can be configured during cluster creation or after a cluster is established. 11.7.1. Enabling private cluster on a new cluster You can enable the private cluster setting when creating a new Red Hat OpenShift Service on AWS cluster. Important Private clusters cannot be used with AWS Security Token Service (STS). However, STS supports AWS PrivateLink clusters. Prerequisites AWS VPC Peering, VPN, DirectConnect, or TransitGateway has been configured to allow private access. Procedure Enter the following command to create a new private cluster. USD rosa create cluster --cluster-name=<cluster_name> --private Note Alternatively, use --interactive to be prompted for each cluster option. 11.7.2. Enabling private cluster on an existing cluster After a cluster has been created, you can later enable the cluster to be private. Important Private clusters cannot be used with AWS Security Token Service (STS). However, STS supports AWS PrivateLink clusters. Prerequisites AWS VPC Peering, VPN, DirectConnect, or TransitGateway has been configured to allow private access. Procedure Enter the following command to enable the --private option on an existing cluster. USD rosa edit cluster --cluster=<cluster_name> --private Note Transitioning your cluster between private and public can take several minutes to complete. 11.7.3. Additional resources Creating an AWS PrivateLink cluster on ROSA 11.8.
Deleting access to a ROSA cluster Delete access to a Red Hat OpenShift Service on AWS (ROSA) cluster using the rosa command-line interface (CLI). Tip AWS Security Token Service (STS) is the recommended credential mode for installing and interacting with clusters on Red Hat OpenShift Service on AWS because it provides enhanced security. 11.8.1. Revoking dedicated-admin access using the ROSA CLI You can revoke access for a dedicated-admin user if you are the user who created the cluster, the organization administrator user, or the super administrator user. Prerequisites You have added an Identity Provider (IDP) to your cluster. You have the IDP user name for the user whose privileges you are revoking. You are logged in to the cluster. Procedure Enter the following command to revoke the dedicated-admin access of a user: USD rosa revoke user dedicated-admin --user=<idp_user_name> --cluster=<cluster_name> Enter the following command to verify that the user no longer has dedicated-admin access. The output does not list the revoked user. USD oc get groups dedicated-admins 11.8.2. Revoking cluster-admin access using the ROSA CLI Only the user who created the cluster can revoke access for cluster-admin users. Prerequisites You have added an Identity Provider (IDP) to your cluster. You have the IDP user name for the user whose privileges you are revoking. You are logged in to the cluster. Procedure Enter the following command to revoke the cluster-admin access of a user: USD rosa revoke user cluster-admins --user=myusername --cluster=mycluster Enter the following command to verify that the user no longer has cluster-admin access. The output does not list the revoked user. USD oc get groups cluster-admins 11.9. Deleting a ROSA cluster Delete a Red Hat OpenShift Service on AWS (ROSA) cluster using the rosa command-line interface (CLI). Tip AWS Security Token Service (STS) is the recommended credential mode for installing and interacting with clusters on Red Hat OpenShift Service on AWS because it provides enhanced security. 11.9.1. Prerequisites If Red Hat OpenShift Service on AWS created a VPC, you must remove the following items from your cluster before you can successfully delete your cluster: Network configurations, such as VPN configurations and VPC peering connections Any additional services that were added to the VPC If these configurations and services remain, the cluster cannot be deleted properly. 11.9.2. Deleting a ROSA cluster and the cluster-specific IAM resources You can delete a Red Hat OpenShift Service on AWS (ROSA) with AWS Security Token Service (STS) cluster by using the ROSA CLI ( rosa ) or Red Hat OpenShift Cluster Manager. After deleting the cluster, you can clean up the cluster-specific Identity and Access Management (IAM) resources in your AWS account by using the ROSA CLI ( rosa ). The cluster-specific resources include the Operator roles and the OpenID Connect (OIDC) provider. Note The cluster deletion must complete before you remove the IAM resources, because the resources are used in the cluster deletion and clean-up processes. If add-ons are installed, the cluster deletion takes longer because add-ons are uninstalled before the cluster is deleted. The amount of time depends on the number and size of the add-ons. Important If the cluster that created the VPC during the installation is deleted, the associated installation program-created VPC will also be deleted, resulting in the failure of all the clusters that are using the same VPC.
Additionally, any resources created with the same tagSet key-value pair as the resources created by the installation program and labeled with a value of owned will also be deleted. Prerequisites You have installed a ROSA cluster. You have installed and configured the latest ROSA CLI ( rosa ) on your installation host. Procedure Obtain the cluster ID, the Amazon Resource Names (ARNs) for the cluster-specific Operator roles, and the endpoint URL for the OIDC provider: USD rosa describe cluster --cluster=<cluster_name> 1 1 Replace <cluster_name> with the name of your cluster. Example output Name: mycluster ID: 1s3v4x39lhs8sm49m90mi0822o34544a 1 ... Operator IAM Roles: 2 - arn:aws:iam::<aws_account_id>:role/mycluster-x4q9-openshift-machine-api-aws-cloud-credentials - arn:aws:iam::<aws_account_id>:role/mycluster-x4q9-openshift-cloud-credential-operator-cloud-crede - arn:aws:iam::<aws_account_id>:role/mycluster-x4q9-openshift-image-registry-installer-cloud-creden - arn:aws:iam::<aws_account_id>:role/mycluster-x4q9-openshift-ingress-operator-cloud-credentials - arn:aws:iam::<aws_account_id>:role/mycluster-x4q9-openshift-cluster-csi-drivers-ebs-cloud-credent - arn:aws:iam::<aws_account_id>:role/mycluster-x4q9-openshift-cloud-network-config-controller-cloud State: ready Private: No Created: May 13 2022 11:26:15 UTC Details Page: https://console.redhat.com/openshift/details/s/296kyEFwzoy1CREQicFRdZybrc0 OIDC Endpoint URL: https://oidc.op1.openshiftapps.com/<oidc_config_id> 3 1 Lists the cluster ID. 2 Specifies the ARNs for the cluster-specific Operator roles. For example, in the sample output the ARN for the role required by the Machine API Operator is arn:aws:iam::<aws_account_id>:role/mycluster-x4q9-openshift-machine-api-aws-cloud-credentials . 3 Displays the endpoint URL for the cluster-specific OIDC provider. Important You require the cluster ID to delete the cluster-specific STS resources using the ROSA CLI ( rosa ) after the cluster is deleted. Delete the cluster: To delete the cluster by using Red Hat OpenShift Cluster Manager: Navigate to OpenShift Cluster Manager . Click the Options menu next to your cluster and select Delete cluster . Type the name of your cluster at the prompt and click Delete . To delete the cluster using the ROSA CLI ( rosa ): Enter the following command to delete the cluster and watch the logs, replacing <cluster_name> with the name or ID of your cluster: USD rosa delete cluster --cluster=<cluster_name> --watch Important You must wait for the cluster deletion to complete before you remove the Operator roles and the OIDC provider. The cluster-specific Operator roles are required to clean up the resources created by the OpenShift Operators. The Operators use the OIDC provider to authenticate. Delete the OIDC provider that the cluster Operators use to authenticate: USD rosa delete oidc-provider -c <cluster_id> --mode auto 1 1 Replace <cluster_id> with the ID of the cluster. Note You can use the -y option to automatically answer yes to the prompts. Optional: Delete the cluster-specific Operator IAM roles: Important The account-wide IAM roles can be used by other ROSA clusters in the same AWS account. Only remove the roles if they are not required by other clusters. USD rosa delete operator-roles -c <cluster_id> --mode auto 1 1 Replace <cluster_id> with the ID of the cluster. Troubleshooting If the cluster cannot be deleted because of missing IAM roles, see Repairing a cluster that cannot be deleted .
If the cluster cannot be deleted for other reasons: Check that there are no add-ons pending for your cluster in the Hybrid Cloud Console . Check that all AWS resources and dependencies have been deleted in the Amazon Web Console. 11.10. Command quick reference for creating clusters and users Tip AWS Security Token Service (STS) is the recommended credential mode for installing and interacting with clusters on Red Hat OpenShift Service on AWS because it provides enhanced security. 11.10.1. Command quick reference list If you have already created your first cluster and users, this list can serve as a command quick reference list when creating additional clusters and users. ## Configures your AWS account and ensures everything is set up correctly USD rosa init ## Starts the cluster creation process (~30-40 minutes) USD rosa create cluster --cluster-name=<cluster_name> ## Connect your IDP to your cluster USD rosa create idp --cluster=<cluster_name> --interactive ## Promotes a user from your IDP to dedicated-admin level USD rosa grant user dedicated-admin --user=<idp_user_name> --cluster=<cluster_name> ## Checks if your install is ready (look for State: Ready), ## and provides your Console URL to log in to the web console. USD rosa describe cluster --cluster=<cluster_name> 11.10.2. Additional resources Understanding the ROSA deployment workflow
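The service quota tables in Section 11.3.1 pair naturally with the AWS Service Quotas API. As an illustrative sketch only, the following AWS CLI call shows one way the EC2 vCPU increase could be requested from the command line; it assumes the AWS CLI is already configured for your account and reuses the quota code L-1216C47A and the 100 vCPU minimum from Table 11.2:

USD aws service-quotas request-service-quota-increase \
    --service-code ec2 \
    --quota-code L-1216C47A \
    --desired-value 100

You can then track the request with USD aws service-quotas list-requested-service-quota-change-history --service-code ec2 while waiting for Amazon Support to approve it. For the authoritative procedure, see Amazon's documentation on requesting a quota increase.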
[ "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Action\": \"*\", \"Resource\": \"*\", \"Effect\": \"Allow\" } ] }", "aws sts get-caller-identity --output text", "<aws_account_id> arn:aws:iam::<aws_account_id>:user/<username> <aws_user_id>", "mv rosa /usr/local/bin/rosa", "rosa", "Command line tool for Red Hat OpenShift Service on AWS. For further documentation visit https://access.redhat.com/documentation/en-us/red_hat_openshift_service_on_aws Usage: rosa [command] Available Commands: completion Generates completion scripts create Create a resource from stdin delete Delete a specific resource describe Show details of a specific resource download Download necessary tools for using your cluster edit Edit a specific resource grant Grant role to a specific resource help Help about any command init Applies templates to support Red Hat OpenShift Service on AWS install Installs a resource into a cluster link Link a ocm/user role from stdin list List all resources of a specific type login Log in to your Red Hat account logout Log out logs Show installation or uninstallation logs for a cluster revoke Revoke role from a specific resource uninstall Uninstalls a resource from a cluster unlink UnLink a ocm/user role from stdin upgrade Upgrade a resource verify Verify resources are configured correctly for cluster install version Prints the version of the tool whoami Displays user account information Flags: --color string Surround certain characters with escape sequences to display them in color on the terminal. Allowed options are [auto never always] (default \"auto\") --debug Enable debug mode. -h, --help help for rosa Use \"rosa [command] --help\" for more information about a command.", "rosa completion bash | sudo tee /etc/bash_completion.d/rosa", "source /etc/bash_completion.d/rosa", "rosa login", "To login to your Red Hat account, get an offline access token at https://console.redhat.com/openshift/token/rosa ? Copy the token and paste it here: <my-offline-access-token>", "I: Logged in as 'rh-rosa-user' on 'https://api.openshift.com'", "rosa verify permissions", "I: Validating SCP policies I: AWS SCP policies ok", "rosa verify quota --region=us-west-2", "I: Validating AWS quota I: AWS quota ok", "rosa whoami", "AWS Account ID: 000000000000 AWS Default Region: us-east-2 AWS ARN: arn:aws:iam::000000000000:user/hello OCM API: https://api.openshift.com OCM Account ID: 1DzGIdIhqEWyt8UUXQhSoWaaaaa OCM Account Name: Your Name OCM Account Username: [email protected] OCM Account Email: [email protected] OCM Organization ID: 1HopHfA2hcmhup5gCr2uH5aaaaa OCM Organization Name: Red Hat OCM Organization External ID: 0000000", "rosa init", "I: Logged in as 'rh-rosa-user' on 'https://api.openshift.com' I: Validating AWS credentials I: AWS credentials are valid! I: Validating SCP policies I: AWS SCP policies ok I: Validating AWS quota I: AWS quota ok I: Ensuring cluster administrator user 'osdCcsAdmin' I: Admin user 'osdCcsAdmin' created successfully! I: Verifying whether OpenShift command-line tool is available E: OpenShift command-line tool is not installed. Run 'rosa download oc' to download the latest version, then add it to your PATH.", "rosa download oc", "rosa verify oc", "rosa create cluster --cluster-name=<cluster_name>", "I: Creating cluster with identifier '1de87g7c30g75qechgh7l5b2bha6r04e' and name 'rh-rosa-test-cluster1' I: To view list of clusters and their status, run `rosa list clusters` I: Cluster 'rh-rosa-test-cluster1' has been created. 
I: Once the cluster is 'Ready' you will need to add an Identity Provider and define the list of cluster administrators. See `rosa create idp --help` and `rosa create user --help` for more information. I: To determine when your cluster is Ready, run `rosa describe cluster rh-rosa-test-cluster1`.", "rosa create cluster --interactive", "rosa describe cluster --cluster=<cluster_name>", "Name: rh-rosa-test-cluster1 OpenShift Version: 4.6.8 DNS: *.example.com ID: uniqueidnumber External ID: uniqueexternalidnumber AWS Account: 123456789101 API URL: https://api.rh-rosa-test-cluster1.example.org:6443 Console URL: https://console-openshift-console.apps.rh-rosa-test-cluster1.example.org Nodes: Master: 3, Infra: 2, Compute: 2 Region: us-west-2 Multi-AZ: false State: ready Channel Group: stable Private: No Created: Jan 15 2021 16:30:55 UTC Details Page: https://console.redhat.com/examplename/details/idnumber", "rosa logs install --cluster=<cluster_name> --watch", "rosa create cluster --cluster-name=<cluster_name> --private", "rosa edit cluster --cluster=<cluster_name> --private", "rosa revoke user dedicated-admin --user=<idp_user_name> --cluster=<cluster_name>", "oc get groups dedicated-admins", "rosa revoke user cluster-admins --user=myusername --cluster=mycluster", "oc get groups cluster-admins", "rosa describe cluster --cluster=<cluster_name> 1", "Name: mycluster ID: 1s3v4x39lhs8sm49m90mi0822o34544a 1 Operator IAM Roles: 2 - arn:aws:iam::<aws_account_id>:role/mycluster-x4q9-openshift-machine-api-aws-cloud-credentials - arn:aws:iam::<aws_account_id>:role/mycluster-x4q9-openshift-cloud-credential-operator-cloud-crede - arn:aws:iam::<aws_account_id>:role/mycluster-x4q9-openshift-image-registry-installer-cloud-creden - arn:aws:iam::<aws_account_id>:role/mycluster-x4q9-openshift-ingress-operator-cloud-credentials - arn:aws:iam::<aws_account_id>:role/mycluster-x4q9-openshift-cluster-csi-drivers-ebs-cloud-credent - arn:aws:iam::<aws_account_id>:role/mycluster-x4q9-openshift-cloud-network-config-controller-cloud State: ready Private: No Created: May 13 2022 11:26:15 UTC Details Page: https://console.redhat.com/openshift/details/s/296kyEFwzoy1CREQicFRdZybrc0 OIDC Endpoint URL: https://oidc.op1.openshiftapps.com/<oidc_config_id> 3", "rosa delete cluster --cluster=<cluster_name> --watch", "rosa delete oidc-provider -c <cluster_id> --mode auto 1", "rosa delete operator-roles -c <cluster_id> --mode auto 1", "## Configures your AWS account and ensures everything is set up correctly rosa init ## Starts the cluster creation process (~30-40 minutes) rosa create cluster --cluster-name=<cluster_name> ## Connect your IDP to your cluster rosa create idp --cluster=<cluster_name> --interactive ## Promotes a user from your IDP to dedicated-admin level rosa grant user dedicated-admin --user=<idp_user_name> --cluster=<cluster_name> ## Checks if your install is ready (look for State: Ready), ## and provides your Console URL to log in to the web console. rosa describe cluster --cluster=<cluster_name>" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/install_rosa_classic_clusters/deploying-rosa-without-aws-sts
Chapter 10. Testing
Chapter 10. Testing As a storage administrator, you can do basic functionality testing to verify that the Ceph Object Gateway environment is working as expected. You can use the REST interfaces by creating an initial Ceph Object Gateway user for the S3 interface, and then creating a subuser for the Swift interface. Prerequisites A healthy running Red Hat Ceph Storage cluster. Installation of the Ceph Object Gateway software. 10.1. Create an S3 user To test the gateway, create an S3 user and grant the user access. The man radosgw-admin command provides information on additional command options. Note In a multi-site deployment, always create a user on a host in the master zone of the master zone group. Prerequisites root or sudo access Ceph Object Gateway installed Procedure Create an S3 user: Syntax Replace name with the name of the S3 user: Example Verify the output to ensure that the values of access_key and secret_key do not include a JSON escape character ( \ ). These values are needed for access validation, but certain clients cannot handle the values if they include JSON escape characters. To fix this problem, perform one of the following actions: Remove the JSON escape character. Encapsulate the string in quotes. Regenerate the key and ensure that it does not include a JSON escape character. Specify the key and secret manually. Do not remove the forward slash / because it is a valid character. 10.2. Create a Swift user To test the Swift interface, create a Swift subuser. Creating a Swift user is a two-step process. The first step is to create the user. The second step is to create the secret key. Note In a multi-site deployment, always create a user on a host in the master zone of the master zone group. Prerequisites Installation of the Ceph Object Gateway. Root-level access to the Ceph Object Gateway node. Procedure Create the Swift user: Syntax Replace NAME with the Swift user name, for example: Example Create the secret key: Syntax Replace NAME with the Swift user name, for example: Example 10.3. Test S3 access Write and run a Python test script to verify S3 access. The S3 access test script will connect to the radosgw , create a new bucket, and list all buckets. The values for aws_access_key_id and aws_secret_access_key are taken from the values of access_key and secret_key returned by the radosgw-admin command. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the nodes. Procedure Enable the High Availability repository for Red Hat Enterprise Linux 9: Install the python3-boto3 package: Create the Python script: Add the following contents to the file: Syntax Replace endpoint with the URL of the host where you have configured the gateway service. That is, the gateway host . Ensure that the host setting resolves with DNS. Replace PORT with the port number of the gateway. Replace ACCESS and SECRET with the access_key and secret_key values from the Create an S3 User section in the Red Hat Ceph Storage Object Gateway Guide . Run the script: The output is similar to the following:
To install the swift client, run the following command: To test Swift access, run the following command: Syntax Replace IP_ADDRESS with the public IP address of the gateway server and SWIFT_SECRET_KEY with its value from the output of the radosgw-admin key create command issued for the swift user. Replace PORT with the port number you are using with Beast. If you do not replace the port, it defaults to port 80 . For example: The output should be:
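If you prefer to verify Swift access programmatically, the python-swiftclient library that the swift command is built on can be used directly. The following is a minimal sketch that reuses the example gateway address and swift secret key shown above; substitute your own values:

from swiftclient.client import Connection

# Connect to the Ceph Object Gateway using Swift v1 authentication.
# The authurl, user, and key values reuse the examples above; replace
# them with your gateway address and the swift user's secret key.
conn = Connection(
    authurl='http://10.10.143.116:80/auth/1.0',
    user='testuser:swift',
    key='244+fz2gSqoHwR3lYtSbIyomyPHf3i7rgSJrF/IA',
)

# List containers, equivalent to the `swift ... list` command.
headers, containers = conn.get_account()
for container in containers:
    print(container['name'])

If the subuser and secret key were created correctly, the script prints the same container names as the swift list command, such as my-new-bucket.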
[ "radosgw-admin user create --uid= name --display-name=\" USER_NAME \"", "radosgw-admin user create --uid=\"testuser\" --display-name=\"Jane Doe\" { \"user_id\": \"testuser\", \"display_name\": \"Jane Doe\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"auid\": 0, \"subusers\": [], \"keys\": [ { \"user\": \"testuser\", \"access_key\": \"CEP28KDIQXBKU4M15PDC\", \"secret_key\": \"MARoio8HFc8JxhEilES3dKFVj8tV3NOOYymihTLO\" } ], \"swift_keys\": [], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"temp_url_keys\": [], \"type\": \"rgw\" }", "radosgw-admin subuser create --uid= NAME --subuser= NAME :swift --access=full", "radosgw-admin subuser create --uid=testuser --subuser=testuser:swift --access=full { \"user_id\": \"testuser\", \"display_name\": \"First User\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"auid\": 0, \"subusers\": [ { \"id\": \"testuser:swift\", \"permissions\": \"full-control\" } ], \"keys\": [ { \"user\": \"testuser\", \"access_key\": \"O8JDE41XMI74O185EHKD\", \"secret_key\": \"i4Au2yxG5wtr1JK01mI8kjJPM93HNAoVWOSTdJd6\" } ], \"swift_keys\": [ { \"user\": \"testuser:swift\", \"secret_key\": \"13TLtdEW7bCqgttQgPzxFxziu0AgabtOc6vM8DLA\" } ], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"temp_url_keys\": [], \"type\": \"rgw\" }", "radosgw-admin key create --subuser= NAME :swift --key-type=swift --gen-secret", "radosgw-admin key create --subuser=testuser:swift --key-type=swift --gen-secret { \"user_id\": \"testuser\", \"display_name\": \"First User\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"auid\": 0, \"subusers\": [ { \"id\": \"testuser:swift\", \"permissions\": \"full-control\" } ], \"keys\": [ { \"user\": \"testuser\", \"access_key\": \"O8JDE41XMI74O185EHKD\", \"secret_key\": \"i4Au2yxG5wtr1JK01mI8kjJPM93HNAoVWOSTdJd6\" } ], \"swift_keys\": [ { \"user\": \"testuser:swift\", \"secret_key\": \"a4ioT4jEP653CDcdU8p4OuhruwABBRZmyNUbnSSt\" } ], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"temp_url_keys\": [], \"type\": \"rgw\" }", "subscription-manager repos --enable=rhel-9-for-x86_64-highavailability-rpms", "dnf install python3-boto3", "vi s3test.py", "import boto3 endpoint = \"\" # enter the endpoint URL along with the port \"http:// URL : PORT \" access_key = ' ACCESS ' secret_key = ' SECRET ' s3 = boto3.client( 's3', endpoint_url=endpoint, aws_access_key_id=access_key, aws_secret_access_key=secret_key ) s3.create_bucket(Bucket='my-new-bucket') response = s3.list_buckets() for bucket in response['Buckets']: print(\"{name}\\t{created}\".format( name = bucket['Name'], created = bucket['CreationDate'] ))", "python3 
s3test.py", "my-new-bucket 2022-05-31T17:09:10.000Z", "sudo yum install python-setuptools sudo easy_install pip sudo pip install --upgrade setuptools sudo pip install --upgrade python-swiftclient", "swift -A http:// IP_ADDRESS : PORT /auth/1.0 -U testuser:swift -K ' SWIFT_SECRET_KEY ' list", "swift -A http://10.10.143.116:80/auth/1.0 -U testuser:swift -K '244+fz2gSqoHwR3lYtSbIyomyPHf3i7rgSJrF/IA' list", "my-new-bucket" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/object_gateway_guide/testing
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/8.0_release_notes/making-open-source-more-inclusive
Chapter 4. Analyzing your projects with the MTR plugin
Chapter 4. Analyzing your projects with the MTR plugin You can analyze your projects with the MTR plugin by creating a run configuration, running an analysis, and then reviewing and resolving migration issues detected by the MTR plugin. 4.1. Creating a run configuration You can create a run configuration in the Issue Explorer . A run configuration specifies the project to analyze, migration path, and additional options. You can create multiple run configurations. Each run configuration must have a unique name. Prerequisite You must import your projects into the Eclipse IDE. Procedure In the Issue Explorer , click the MTR icon ( ) to create a run configuration. On the Input tab, complete the following fields: Select a migration path. Beside the Projects field, click Add and select one or more projects. Beside the Packages field, click Add and select one or more packages. Note Specifying the packages for analysis reduces the run time. If you do not select any packages, all packages in the project are scanned. On the Options tab, you can select Generate Report to generate an HTML report. The report is displayed in the Report tab and saved as a file. Other options are displayed. See About MTR command-line arguments in the CLI Guide for details. On the Rules tab, you can select custom rulesets that you have imported or created for the MTR plugin. A sketch of what a custom ruleset file can look like is shown at the end of this chapter. Click Run to start the analysis. 4.2. Analyzing projects You can analyze your projects by running the MTR plugin with a saved run configuration. Procedure In the MTR perspective, click the Run button ( ) and select a run configuration. The MTR plugin analyzes your projects. The Issue Explorer displays migration issues that are detected with the ruleset. When you have finished analyzing your projects, stop the MTR server in the Issue Explorer to conserve memory. 4.3. Reviewing issues You can review issues identified by the MTR plugin. Procedure Click Window Show View Issue Explorer . Optional: Filter the issues by clicking the Options menu and selecting Group By and an option. Right-click and select Issue Details to view information about the issue, including its severity and how to address it. The following icons indicate the severity and state of an issue: Table 4.1. Issue icons Icon Description The issue must be fixed for a successful migration. The issue is optional to fix for migration. The issue might need to be addressed during migration. The issue was resolved. The issue is stale. The code marked as an issue was modified since the last time that MTR identified it as an issue. A quick fix is available for this issue, which is mandatory to fix for a successful migration. A quick fix is available for this issue, which is optional to fix for migration. A quick fix is available for this issue, which might be an issue during migration. Double-click an issue to open the associated line of code in an editor. 4.4. Resolving issues You can resolve issues detected by the MTR plugin by performing one of the following actions: You can double-click the issue to open it in an editor and edit the source code. The issue displays a Stale icon ( ) until the next time you run the MTR plugin. You can right-click the issue and select Mark as Fixed . If the issue displays a Quick Fix icon ( ), you can right-click the issue and select Preview Quick Fix and then Apply Quick Fix .
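The custom rulesets selected on the Rules tab are XML files based on the Windup rule schema that MTR uses. The fragment below is a minimal sketch of the general shape only; the ruleset ID, technology IDs, matched class, and message are illustrative placeholders, so consult the MTR rules development documentation for the exact schema before authoring your own:

<?xml version="1.0"?>
<ruleset id="example-custom-ruleset"
         xmlns="http://windup.jboss.org/schema/jboss-ruleset">
    <metadata>
        <description>Example custom ruleset with placeholder values.</description>
        <sourceTechnology id="weblogic"/>
        <targetTechnology id="eap" versionRange="[7,)"/>
    </metadata>
    <rules>
        <rule id="example-custom-ruleset-01">
            <when>
                <!-- Match references to a proprietary API in the scanned code. -->
                <javaclass references="weblogic.application.ApplicationLifecycleListener"/>
            </when>
            <perform>
                <hint title="Replace proprietary lifecycle listener" effort="3" category-id="mandatory">
                    <message>Migrate to standard Jakarta EE lifecycle mechanisms.</message>
                </hint>
            </perform>
        </rule>
    </rules>
</ruleset>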
null
https://docs.redhat.com/en/documentation/migration_toolkit_for_runtimes/1.2/html/eclipse_plugin_guide/analyzing-projects-with-plugin_eclipse-code-ready-studio-guide
Chapter 6. Configuring the heap size for Red Hat build of OpenJDK application on RHEL
Chapter 6. Configuring the heap size for Red Hat build of OpenJDK application on RHEL You can configure Red Hat build of OpenJDK to use a customized heap size. Procedure Add the maximum heap size option to the java command when running your application. For example, to set the maximum heap size to 100 megabytes, use the -Xmx100m option: Additional resources For more information about the -Xmx option, see -Xmxsize in the Java documentation .
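To confirm that a setting such as -Xmx100m took effect, you can query the runtime from inside the application. The following is a small sketch using the standard Runtime API; the class name is arbitrary, and the reported value is typically slightly below the configured -Xmx because the JVM reserves part of the heap internally:

// HeapCheck.java: print the maximum heap size the JVM will attempt to use.
public class HeapCheck {
    public static void main(String[] args) {
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Max heap: " + (maxBytes / (1024 * 1024)) + " MB");
    }
}

Compile the class and run it with, for example, java -Xmx100m HeapCheck .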
[ "java -Xmx100m <your_application_name>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/configuring_red_hat_build_of_openjdk_11_on_rhel/configuring-heap-size-for-openjdk-application-on-rhel
Chapter 11. DDL Metadata
Chapter 11. DDL Metadata 11.1. DDL Metadata A VDB can define models/schemas using DDL. Here is a small example of how one can define a view inside the -vdb.xml file. See the <metadata> element under <model> . Example 11.1. Example to show view definition Another complete DDL-based example is at the end of this section. Note The declaration of metadata using DDL, NATIVE, or DDL-FILE is supported out of the box; however, the MetadataRepository interface allows users to plug in their own metadata facilities. For example, you can write a Hibernate-based store that can feed the necessary metadata. You can find out more about custom metadata repositories in Red Hat JBoss Data Virtualization Development Guide: Server Development . Note The DDL-based schema is not constrained to be defined only for the view models. Note The full grammar for DDL is in the appendix.
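For larger schemas it can be convenient to keep the DDL in a separate file instead of inline CDATA. As a sketch, and assuming the path resolves inside the VDB archive (the path shown is a placeholder), a model might reference an external DDL file with the DDL-FILE metadata type:

<model visible = "true" type = "VIRTUAL" name = "customers">
    <metadata type = "DDL-FILE">/ddl/customers.ddl</metadata>
</model>

Here /ddl/customers.ddl would contain the same CREATE VIEW statements that Example 11.1 embeds in the CDATA section.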
[ "<model visible = \"true\" type = \"VIRTUAL\" name = \"customers\"> <metadata type = \"DDL\"><![CDATA[ CREATE VIEW PARTS ( PART_ID integer PRIMARY KEY, PART_NAME varchar(255), PART_COLOR varchar(30), PART_WEIGHT varchar(255) ) AS select a.id as PART_ID, a.name as PART_NAME, b.color as PART_COLOR, b.weight as PART_WEIGHT from modelA.part a, modelB.part b where a.id = b.id ]]> </metadata> </model>" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/chap-DDL_Metadata
Appendix B. Revision History
Appendix B. Revision History Revision History Revision 7.1-1 Wed Aug 7 2019 Steven Levine Preparing document for 7.7 GA publication. Revision 6.1-2 Thu Oct 4 2018 Steven Levine Preparing document for 7.6 GA publication. Revision 5.1-2 Wed Mar 14 2018 Steven Levine Preparing document for 7.5 GA publication. Revision 5.1-1 Wed Dec 13 2017 Steven Levine Preparing document for 7.5 Beta publication. Revision 4.1-3 Tue Aug 1 2017 Steven Levine Document version for 7.4 GA publication. Revision 4.1-1 Wed May 10 2017 Steven Levine Preparing document for 7.4 Beta publication. Revision 3.1-2 Mon Oct 17 2016 Steven Levine Version for 7.3 GA publication. Revision 3.1-1 Wed Aug 17 2016 Steven Levine Preparing document for 7.3 Beta publication. Revision 2.1-5 Mon Nov 9 2015 Steven Levine Preparing document for 7.2 GA publication. Revision 2.1-1 Tue Aug 18 2015 Steven Levine Preparing document for 7.2 Beta publication. Revision 1.1-3 Tue Feb 17 2015 Steven Levine Version for 7.1 GA Revision 1.1-1 Thu Dec 04 2014 Steven Levine Version for 7.1 Beta Release Revision 0.1-9 Tue Jun 03 2014 John Ha Version for 7.0 GA Release Revision 0.1-4 Wed Nov 27 2013 John Ha Build for Beta of Red Hat Enterprise Linux 7 Revision 0.1-1 Wed Jan 16 2013 Steven Levine First version for Red Hat Enterprise Linux 7
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_overview/appe-publican-revision_history
Chapter 5. Enabling disk encryption
Chapter 5. Enabling disk encryption You can enable encryption of installation disks using either the TPM v2 or Tang encryption modes. Note In some situations, when you enable TPM disk encryption in the firmware for a bare-metal host and then boot it from an ISO that you generate with the Assisted Installer, the cluster deployment can get stuck. This can happen if there are left-over TPM encryption keys from a previous installation on the host. For more information, see BZ#2011634 . If you experience this problem, contact Red Hat support. 5.1. Enabling TPM v2 encryption Prerequisites Check to see if TPM v2 encryption is enabled in the BIOS on each host. Most Dell systems require this. Check the manual for your computer. The Assisted Installer also validates that TPM is enabled in the firmware. See the disk-encryption model in the Assisted Installer API for additional details. Important Verify that a TPM v2 encryption chip is installed on each node and enabled in the firmware. Procedure Optional: Using the web console, in the Cluster details step of the user interface wizard, choose to enable TPM v2 encryption on either the control plane nodes, workers, or both. Optional: Using the API, follow the "Modifying hosts" procedure. Set the disk_encryption.enable_on setting to all , masters , or workers . Set the disk_encryption.mode setting to tpmv2 . Refresh the API token: USD source refresh-token Enable TPM v2 encryption: USD curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} \ -X PATCH \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d ' { "disk_encryption": { "enable_on": "none", "mode": "tpmv2" } } ' | jq Valid settings for enable_on are all , masters , workers , or none . 5.2. Enabling Tang encryption Prerequisites You have access to a Red Hat Enterprise Linux (RHEL) 8 machine that can be used to generate a thumbprint of the Tang exchange key. Procedure Set up a Tang server or access an existing one. See Network-bound disk encryption for instructions. You can set up multiple Tang servers, but the Assisted Installer must be able to connect to all of them during installation. On the Tang server, retrieve the thumbprint for the Tang server using tang-show-keys : USD tang-show-keys <port> Optional: Replace <port> with the port number. The default port number is 80 . Example thumbprint 1gYTN_LpU9ZMB35yn5IbADY5OQ0 Optional: Retrieve the thumbprint for the Tang server using jose . Ensure jose is installed on the Tang server: USD sudo dnf install jose On the Tang server, retrieve the thumbprint using jose : USD sudo jose jwk thp -i /var/db/tang/<public_key>.jwk Replace <public_key> with the public exchange key for the Tang server. Example thumbprint 1gYTN_LpU9ZMB35yn5IbADY5OQ0 Optional: In the Cluster details step of the user interface wizard, choose to enable Tang encryption on either the control plane nodes, workers, or both. You will be required to enter URLs and thumbprints for the Tang servers. Optional: Using the API, follow the "Modifying hosts" procedure. Refresh the API token: USD source refresh-token Set the disk_encryption.enable_on setting to all , masters , or workers . Set the disk_encryption.mode setting to tang .
Set disk_encryption.tang_servers to provide the URL and thumbprint details about one or more Tang servers: USD curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} \ -X PATCH \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d ' { "disk_encryption": { "enable_on": "all", "mode": "tang", "tang_servers": "[{\"url\":\"http://tang.example.com:7500\",\"thumbprint\":\"PLjNyRdGw03zlRoGjQYMahSZGu9\"},{\"url\":\"http://tang2.example.com:7500\",\"thumbprint\":\"XYjNyRdGw03zlRoGjQYMahSZGu3\"}]" } } ' | jq Valid settings for enable_on are all , masters , workers , or none . Within the tang_servers value, escape the quotes within the object(s). 5.3. Additional resources Modifying hosts
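Before starting the installation, it is worth confirming that each Tang server is reachable from your environment. A Tang server publishes its signed advertisement over HTTP at the /adv endpoint, so a quick reachability check might look like the following, using the placeholder host and port from the examples above:

USD curl http://tang.example.com:7500/adv

A JSON Web Signature (JWS) payload in the response indicates that the server is up and serving its keys; no response suggests that the Assisted Installer will also fail to reach the server during installation.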
[ "source refresh-token", "curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"disk_encryption\": { \"enable_on\": \"none\", \"mode\": \"tpmv2\" } } ' | jq", "tang-show-keys <port>", "1gYTN_LpU9ZMB35yn5IbADY5OQ0", "sudo dnf install jose", "sudo jose jwk thp -i /var/db/tang/<public_key>.jwk", "1gYTN_LpU9ZMB35yn5IbADY5OQ0", "source refresh-token", "curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"disk_encryption\": { \"enable_on\": \"all\", \"mode\": \"tang\", \"tang_servers\": \"[{\\\"url\\\":\\\"http://tang.example.com:7500\\\",\\\"thumbprint\\\":\\\"PLjNyRdGw03zlRoGjQYMahSZGu9\\\"},{\\\"url\\\":\\\"http://tang2.example.com:7500\\\",\\\"thumbprint\\\":\\\"XYjNyRdGw03zlRoGjQYMahSZGu3\\\"}]\" } } ' | jq" ]
https://docs.redhat.com/en/documentation/assisted_installer_for_openshift_container_platform/2025/html/installing_openshift_container_platform_with_the_assisted_installer/assembly_enabling-disk-encryption
Preface
Preface Open Java Development Kit (OpenJDK) is a free and open-source implementation of the Java Platform, Standard Edition (Java SE). Eclipse Temurin is available in three LTS versions: OpenJDK 8u, OpenJDK 11u, and OpenJDK 17u. Binary files for Eclipse Temurin are available for macOS, Microsoft Windows, and multiple Linux x86 operating systems, including Red Hat Enterprise Linux and Ubuntu.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_eclipse_temurin_11.0.21/pr01